docs/neural-network-case-study.ipynb
###Markdown .. _neural-networks-case-study: Generating Neural Network DiagramsThe following explores how Toyplot's graph visualization can be used to generate high-quality diagrams of neural networks. Network DataFirst, we will define the edges (weights) in our network, by explicitly listing the source and target for each edge: ###Code import numpy import toyplot numpy.random.seed(1234) edges = numpy.array([ ["x0", "a0"], ["x0", "a1"], ["x0", "a2"], ["x0", "a3"], ["x1", "a0"], ["x1", "a1"], ["x1", "a2"], ["x1", "a3"], ["x2", "a0"], ["x2", "a1"], ["x2", "a2"], ["x2", "a3"], ["a0", "y0"], ["a0", "y1"], ["a1", "y0"], ["a1", "y1"], ["a2", "y0"], ["a2", "y1"], ["a3", "y0"], ["a3", "y1"], ]) ###Output _____no_output_____ ###Markdown Network LayoutAs a straw-man, we can quickly render a graph using just the edge data: ###Code canvas, axes, mark = toyplot.graph(edges) ###Output _____no_output_____ ###Markdown Clearly, this needs work - Toyplot's default force-directed layout algorithm obscures the fact that our neural network is organized in layers. What we want is to put all of the `x` nodes in the first (input) layer, all of the `a` nodes in a second (hidden) layer, and all of the `y` nodes in the last (output) layer. Since Toyplot doesn't have a graph layout algorithm that can do that for us, we'll have to compute the vertex coordinates ourselves: ###Code vertex_ids = numpy.unique(edges) layer_map = {"x": 0, "a": -1, "y": -2} offset_map = {"x": 0.5, "a": 0, "y": 1} vcoordinates = [] for vertex_id in vertex_ids: layer = vertex_id[0] column = int(vertex_id[1:]) x = column + offset_map[layer] y = layer_map[layer] vcoordinates.append((x, y)) vcoordinates = numpy.array(vcoordinates) ###Output _____no_output_____ ###Markdown Now, we can see what the graph looks like with our explicitly defined coordinates: ###Code canvas, axes, mark = toyplot.graph(edges, vcoordinates=vcoordinates) ###Output _____no_output_____ ###Markdown Vertex and Edge StylesWith the graph layout looking better, we can begin to work on the appearance of the vertices and edges: ###Code canvas, axes, mark = toyplot.graph( edges, ecolor="black", tmarker=">", vcolor="white", vcoordinates=vcoordinates, vmarker="o", vsize=50, vstyle={"stroke":"black"}, width=500, height=500, ) # So we can control the aspect ratio of the figure using the canvas width & height axes.aspect=None # Prevent large vertex markers from falling outside the canvas axes.padding=50 ###Output _____no_output_____ ###Markdown In many cases, we might not want to see the vertex labels: ###Code canvas, axes, mark = toyplot.graph( edges, ecolor="black", tmarker=">", vcolor="white", vcoordinates=vcoordinates, vlshow=False, vmarker="o", vsize=50, vstyle={"stroke":"black"}, width=500, height=500, ) axes.aspect=None axes.padding=50 ###Output _____no_output_____ ###Markdown Or we might want to substitute our own, explicit vertex labels, to illustrate the values of individual activation units during network evaluation: ###Code vertex_values = numpy.random.uniform(size=len(vertex_ids)) vertex_labels = ["%.2f" % value for value in vertex_values] canvas, axes, mark = toyplot.graph( edges, ecolor="black", tmarker=">", vcolor="white", vcoordinates=vcoordinates, vlabel=vertex_labels, vmarker="o", vsize=50, vstyle={"stroke":"black"}, width=500, height=500, ) axes.aspect=None axes.padding=50 ###Output _____no_output_____ ###Markdown Edge WeightsWe might also want to display the network edge weights. 
Edge *middle* markers are a good choice to do this: ###Code edge_weights = numpy.random.uniform(size=len(edges)) mstyle = {"fill": "white"} lstyle = {"font-size": "12px"} mmarkers = [toyplot.marker.create(shape="s", label="%.1f" % weight, size=30, mstyle=mstyle, lstyle=lstyle) for weight in edge_weights] canvas, axes, mark = toyplot.graph( edges, ecolor="black", mmarker=mmarkers, tmarker=">", vcolor="white", vcoordinates=vcoordinates, vlabel=vertex_labels, vmarker="o", vsize=50, vstyle={"stroke":"black"}, width=500, height=500, ) axes.aspect=None axes.padding=50 ###Output _____no_output_____ ###Markdown Note that the middle markers are aligned with the edges, making the weight values difficult to read. To fix this, we can set an explicit orientation for the middle markers: ###Code mmarkers = [toyplot.marker.create(angle=0, shape="s", label="%.1f" % weight, size=30, mstyle=mstyle, lstyle=lstyle) for weight in edge_weights] canvas, axes, mark = toyplot.graph( edges, ecolor="black", mmarker=mmarkers, tmarker=">", vcolor="white", vcoordinates=vcoordinates, vlabel=vertex_labels, vmarker="o", vsize=50, vstyle={"stroke":"black"}, width=500, height=500, ) axes.aspect=None axes.padding=50 ###Output _____no_output_____ ###Markdown Now however, many of the middle markers overlap, making it appear as if there are fewer weights than edges. One way to address this is to randomly reposition the markers so that they rarely overlap: ###Code mposition = numpy.random.uniform(0.1, 0.8, len(edges)) canvas, axes, mark = toyplot.graph( edges, ecolor="black", mmarker=mmarkers, mposition=mposition, tmarker=">", vcolor="white", vcoordinates=vcoordinates, vlabel=vertex_labels, vmarker="o", vsize=50, vstyle={"stroke":"black"}, width=500, height=500, ) axes.aspect=None axes.padding=50 ###Output _____no_output_____ ###Markdown Note that the `mposition` argument is a value between zero and one that positions each middle marker anywhere along its edge from beginning to end, respectively. Layer-Only DiagramsOnce a network reaches a certain level of complexity, it is typical to only diagram the layers in the network, instead of all the activation units. Here, we define per-layer data for a simple layer-only diagram: ###Code layers = [ "<b>conv1</b><br/>3&#215;3 convolutional", "<b>pool1</b><br/>max pooling", "<b>fc_1</b><br/>4096 dense", "<b>fc_2</b><br/>1000 dense softmax", ] vertex_ids = numpy.arange(len(layers)) edges = numpy.column_stack(( vertex_ids[:-1], vertex_ids[1:], )) vcoordinates = numpy.column_stack(( numpy.zeros_like(layers, dtype="float"), numpy.arange(0, -len(layers), -1), )) ###Output _____no_output_____ ###Markdown In this case, it's useful to use Toyplot's special rectangular markers for the graph nodes: ###Code canvas, axes, mark = toyplot.graph( edges, ecolor="black", tmarker=">", vcoordinates=vcoordinates, vlabel=layers, vmarker=toyplot.marker.create("r3x1", lstyle={"font-size":"12px"}, size=50), vstyle={"stroke":"black", "fill":"white"}, width=200, height=400, ) axes.padding = 50 ###Output _____no_output_____
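###Markdown To use these diagrams outside the notebook (papers, slides, web pages), the finished canvas can be written to a file with Toyplot's render backends. A minimal sketch, assuming the standard `toyplot.pdf`, `toyplot.svg`, and `toyplot.html` backend modules are available; the file names are placeholders: ###Code
import toyplot.html
import toyplot.pdf
import toyplot.svg

# export the last canvas created above in several formats
toyplot.pdf.render(canvas, "layer-diagram.pdf")    # static vector output for print
toyplot.svg.render(canvas, "layer-diagram.svg")    # editable vector output
toyplot.html.render(canvas, "layer-diagram.html")  # standalone interactive HTML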
building-ai.ipynb
###Markdown [Elements of AI: Building AI](https://buildingai.elementsofai.com/) Getting started with AI ###Code # Install necessary packages withing the current environment !python -m pip install numpy !python -m pip install -U scikit-learn ###Output Requirement already satisfied: numpy in /Users/ak/.pyenv/versions/3.9.0/envs/elementsofAI-buildingAI/lib/python3.9/site-packages (1.20.1) Requirement already satisfied: scikit-learn in /Users/ak/.pyenv/versions/3.9.0/envs/elementsofAI-buildingAI/lib/python3.9/site-packages (0.24.1) Requirement already satisfied: scipy>=0.19.1 in /Users/ak/.pyenv/versions/3.9.0/envs/elementsofAI-buildingAI/lib/python3.9/site-packages (from scikit-learn) (1.6.1) Requirement already satisfied: threadpoolctl>=2.0.0 in /Users/ak/.pyenv/versions/3.9.0/envs/elementsofAI-buildingAI/lib/python3.9/site-packages (from scikit-learn) (2.1.0) Requirement already satisfied: numpy>=1.13.3 in /Users/ak/.pyenv/versions/3.9.0/envs/elementsofAI-buildingAI/lib/python3.9/site-packages (from scikit-learn) (1.20.1) Requirement already satisfied: joblib>=0.11 in /Users/ak/.pyenv/versions/3.9.0/envs/elementsofAI-buildingAI/lib/python3.9/site-packages (from scikit-learn) (1.0.1) ###Markdown II.Optimization Exercise 1: Listing pineapple routes -- AdvancedImagine that you've been assigned the task to plan the route of a container ship loaded with pineapples. The ship starts in Panama, loaded with delicious Fairtrade pineapples. There are four other ports, New York, Casablanca, Amsterdam, and Helsinki, where pineapple-craving citizens are eagerly waiting. The ship must visit each of the four destination ports exactly once, but the order in which each port is visited is free. The goal is to minimize the carbon emissions, which means that a shorter route is better than a longer one.To solve this problem, it is enough to list all the possible routes that start from Panama and visit each of the other ports exactly once, calculate the carbon emissions of each route, and print out the one with the least emissions.Before we try to find the optimal route, let's start by listing all the alternative routes. After all, it wouldn't make sense to stop at any port more than once.Write a program that takes a list (in this case, the names of the ports) and prints out all the possible orderings of them. The mathematical term for such orderings is a permutation. Note that your program should work for an input list of any length. The order in which the permutations are printed doesn't matter, but they should all begin with Panama (PAN).The format of the output should be such that each permutation is printed on its own row as one string with the port names separated by spaces. You can use the join function as follows: `print(' '.join([portnames[i] for i in route]))`. 
###Code from itertools import permutations as perm portnames = ["PAN", "AMS", "CAS", "NYC", "HEL"] def permutations(route, ports): for route in perm(ports): print("PAN " + " ".join([portnames[i] for i in route])) # this will start the recursion with 0 as the first stop permutations([0], list(range(1, len(portnames)))) ###Output PAN AMS CAS NYC HEL PAN AMS CAS HEL NYC PAN AMS NYC CAS HEL PAN AMS NYC HEL CAS PAN AMS HEL CAS NYC PAN AMS HEL NYC CAS PAN CAS AMS NYC HEL PAN CAS AMS HEL NYC PAN CAS NYC AMS HEL PAN CAS NYC HEL AMS PAN CAS HEL AMS NYC PAN CAS HEL NYC AMS PAN NYC AMS CAS HEL PAN NYC AMS HEL CAS PAN NYC CAS AMS HEL PAN NYC CAS HEL AMS PAN NYC HEL AMS CAS PAN NYC HEL CAS AMS PAN HEL AMS CAS NYC PAN HEL AMS NYC CAS PAN HEL CAS AMS NYC PAN HEL CAS NYC AMS PAN HEL NYC AMS CAS PAN HEL NYC CAS AMS ###Markdown Exercise 2: Pineapple route emissions -- AdvancedHaving listed the alternatives, next we can calculate the carbon emissions for each of them.Modify the code so that it finds the route with minimum carbon emissions and prints it out. Again, the program should work for any number of ports. You can assume that the distances between the ports are given in an array of the appropriate size so that the distance between ports i and j is found in `D[i][j]`. ###Code from itertools import permutations as perm portnames = ["PAN", "AMS", "CAS", "NYC", "HEL"] # https://sea-distances.org/ # nautical miles converted to km D = [ [0, 8943, 8019, 3652, 10545], [8943, 0, 2619, 6317, 2078], [8019, 2619, 0, 5836, 4939], [3652, 6317, 5836, 0, 7825], [10545, 2078, 4939, 7825, 0], ] # https://timeforchange.org/co2-emissions-shipping-goods # assume 20g per km per metric ton (of pineapples) co2 = 0.020 # DATA BLOCK ENDS # these variables are initialised to nonsensical values # your program should determine the correct values for them smallest = 1000000 bestroute = [0, 0, 0, 0, 0] def permutations(route, ports): global smallest, bestroute for r in perm(ports): r = route + list(r) # em = co2 * (D[r[0]][r[1]] + D[r[1]][r[2]] + D[r[2]][r[3]] + D[r[3]][r[4]]) em = co2 * sum(D[i][j] for i, j in zip(r[:-1], r[1:])) if em < smallest: smallest = em bestroute = r def main(): # this will start the recursion permutations([0], list(range(1, len(portnames)))) # print the best route and its emissions print(" ".join([portnames[i] for i in bestroute]) + " %.1f kg" % smallest) main() ###Output PAN NYC CAS AMS HEL 283.7 kg ###Markdown III.Hill climbing Exercise 3: Reach the highest summit -- AdvancedLet the elevation at each point on the mountain be stored in array h of size 100. The elevation at the leftmost point is thus stored in `h[0]` and the elevation at the rightmost point is stored in `h[99]`.The following program starts at a random position and keeps going to the right until Venla can no longer go up. However, perhaps the mountain is a bit rugged which means it's necessary to look a bit further ahead.Edit the program so that Venla doesn't stop climbing as long as she can go up by moving up to five steps either left or right. If there are multiple choices within five steps that go up, any one of them is good. To check how your climbing algorithm works in action, you can plot the results of your hill climbing using the Plot button. The summit will be marked with a blue triangle. 
###Code import math import random # just for generating random mountains # generate random mountains w = [0.05, random.random() / 3, random.random() / 3] h = [ 1.0 + math.sin(1 + x / 0.6) * w[0] + math.sin(-0.3 + x / 9.0) * w[1] + math.sin(-0.2 + x / 30.0) * w[2] for x in range(100) ] def climb(x, h): # keep climbing until we've found a summit summit = False steps_max = 5 # range to check # Edit the program so that Venla doesn't stop climbing as long as she can go up by moving up to five steps either left or right. while not summit: summit = True for x_new in range(max(0, x - steps_max), min(99, x + steps_max)): if h[x_new] > h[x]: x = x_new # here is higher, go here summit = False # and keep going return x def main(h): # start at a random place x0 = random.randint(1, 98) x = climb(x0, h) return x0, x main(h) ###Output _____no_output_____ ###Markdown Exercise 4: Probabilities -- AdvancedWrite a program that prints "I love" followed by one word: the additional word should be 'dogs' with 80% probability, 'cats' with 10% probability, and 'bats' with 10% probability. ###Code import random x = random.random() if x < 0.8: favourite = "dogs" elif x < 0.9: favourite = "cats" else: favourite = "bats" print("I love", favourite) ###Output I love dogs ###Markdown Exercise 5: Warm-up Temperature -- Advanced**Simulated Annealing: the math**The probability of accepting the new solution with score `S_new` when the current solution has score `S_old` is given by the formula:`prob = exp(–(S_old – S_new)÷T)`where T is the temperature. (Remember that the temperature is an abstract concept that ideally starts high and gradually decreases towards zero.) The function `exp(x)` is the exponent function which can also be written mathematically as `e^x` (the so called Euler's constant e ≅ 2.71828 raised to power x).Suppose the current solution has score S_old = 150 and you try a small modification to create a new solution with score S_new = 140. In the greedy solution, this new solution wouldn't be accepted because it would mean a decrease in the score. In simulated annealing, the new solution is accepted with a certain probability as explained above.Modify the accept_prob function so that it returns the probability of accepting the new state using simulated annealing. The program should take the two score values (the current and the new) and the temperature value as arguments. ###Code import random from numpy import exp # from math import e def accept_prob(S_old, S_new, T): # this is the acceptance "probability" in the greedy hill-climbing method # where new solutions are accepted if and only if they are better # than the old one. # change it to be the acceptance probability in simulated annealing return 1.0 if S_new > S_old else exp(-(S_old - S_new) / T) # the above function will be used as follows. this is shown just for # your information; you don't have to change anything here def accept(S_old, S_new, T): if random.random() < accept_prob(S_old, S_new, T): print(True) else: print(False) ###Output _____no_output_____ ###Markdown Exercise 6: Simulated Annealing --Intermediate1D simulated annealing: modify the program below to use simulated annealing instead of plain hill climbing. In simulated annealing the probability of accepting a solution that lowers the score is given by `prob = exp(-(S_old - S_new)/T)`. Setting the temperature T and gradually decreasing can be done in many ways, some of which lead to better outcomes than others. 
A good choice in this case is for example: `T = 2*max(0, ((steps-step*1.2)/steps))**3`.The code below uses the plain hill-climbing strategy to only go up towards a peak. As you can see, the hill-climbing strategy tends to get stuck in local optima. ###Code import math, random # just for generating random mountains import numpy as np n = 10000 # size of the problem: number of possible solutions x = 0, ..., n-1 # generate random mountains def mountains(n): h = [0] * n for i in range(50): c = random.randint(20, n - 20) w = random.randint(3, int(math.sqrt(n / 5))) ** 2 s = random.random() h[max(0, c - w) : min(n, c + w)] = [ h[i] + s * (w - abs(c - i)) for i in range(max(0, c - w), min(n, c + w)) ] # scale the height so that the lowest point is 0.0 and the highest peak is 1.0 low = min(h) high = max(h) h = [y - low for y in h] h = [y / (high - low) for y in h] return h h = mountains(n) # start at a random place x0 = random.randint(1, n - 1) x = x0 # keep climbing for 5000 steps steps = 5000 def main(h, x): n = len(h) # the climbing starts here for step in range(steps): # this is our temperature to to be used for simulated annealing # it starts large and decreases with each step. you don't have to change this T = 2 * max(0, ((steps - step * 1.2) / steps)) ** 3 # let's try randomly moving (max. 1000 steps) left or right # making sure we don't fall off the edge of the world at 0 or n-1 # the height at this point will be our candidate score, S_new # while the height at our current location will be S_old x_new = random.randint(max(0, x - 1000), min(n - 1, x + 1000)) if h[x_new] > h[x]: x = x_new # the new position is higher, go there else: if T != 0 and random.random() <= np.exp(-(h[x] - h[x_new]) / T): x = x_new # if T == 0: # pass # elif random.random() <= np.exp(-(h[x] - h[x_new])/T): # x = x_new return x x = main(h, x0) print("Ended up at %d, highest point is %d" % (x, np.argmax(h))) ###Output Ended up at 4958, highest point is 883 ###Markdown --AdvancedLet's use simulated annealing to solve a simple two-dimensional optimization problem. The following code runs 50 optimization tracks in parallel (at the same time). It currently only looks around the current solution and only accepts moves that go up. Modify the program so that it uses simulated annealing.Remember that the probability of accepting a solution that lowers the score is given by `prob = exp(–(S_old - S_new)/T)`. Remember to also adjust the temperature in a way that it decreases as the simulation goes on, and to handle T=0 case correctly.Your goal is to ensure that on the average, at least 30 of the optimization tracks find the global optimum (the highest peak). 
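###Markdown Before jumping to a full solution, it helps to see how strongly the temperature controls the acceptance probability `exp(-(S_old - S_new)/T)`. A quick numeric check using the example scores from Exercise 5 (S_old = 150, S_new = 140); the temperature values below are arbitrary illustration points: ###Code
from math import exp

S_old, S_new = 150, 140
for T in [100, 10, 2, 0.5]:
    # probability of accepting a move that lowers the score by 10
    print("T = %5.1f -> acceptance probability %.6f" % (T, exp(-(S_old - S_new) / T)))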
###Code import numpy as np import random N = 100 # size of the problem is N x N steps = 3000 # total number of iterations tracks = 50 # generate a landscape with multiple local optima def generator(x, y, x0=0.0, y0=0.0): return ( np.sin((x / N - x0) * np.pi) + np.sin((y / N - y0) * np.pi) + 0.07 * np.cos(12 * (x / N - x0) * np.pi) + 0.07 * np.cos(12 * (y / N - y0) * np.pi) ) x0 = np.random.random() - 0.5 y0 = np.random.random() - 0.5 h = np.fromfunction(np.vectorize(generator), (N, N), x0=x0, y0=y0, dtype=int) peak_x, peak_y = np.unravel_index(np.argmax(h), h.shape) # starting points x = np.random.randint(0, N, tracks) y = np.random.randint(0, N, tracks) def main(): global x global y for step in range(steps): # add a temperature schedule here T = 2 * max(0, ((steps - step * 1.2) / steps)) ** 3 # update solutions on each search track for i in range(tracks): # try a new solution near the current one x_new = np.random.randint(max(0, x[i] - 2), min(N, x[i] + 2 + 1)) y_new = np.random.randint(max(0, y[i] - 2), min(N, y[i] + 2 + 1)) S_old = h[x[i], y[i]] S_new = h[x_new, y_new] # change this to use simulated annealing if S_new > S_old: x[i], y[i] = x_new, y_new # new solution is better, go there else: if T != 0 and random.random() <= np.exp(-(S_old - S_new) / T): x[i], y[i] = x_new, y_new # Number of tracks found the peak print(sum([x[j] == peak_x and y[j] == peak_y for j in range(tracks)])) main() ###Output 39 ###Markdown Dealing with uncertainty I.Probability fundamentals Exercise 7: Flip the coin -- AdvancedWrite a program that generates 10000 random zeros and ones where the probability of one is p1 and the probability of zero is 1-p1 (hint: `np.random.choice([0,1], p=[1-p1, p1], size=10000)`), counts the number of occurrences of 5 consecutive ones ("11111") in the sequence, and outputs this number as a return value. Check that for p1 = 2/3, the count is close to 10000 x (2/3)^5 ≈ 1316.9. ###Code import numpy as np def generate(p1): # change this so that it generates 10000 random zeros and ones # where the probability of one is p1 seq = np.empty(10000) seq = np.random.choice([0, 1], p=[1 - p1, p1], size=10000) return seq def count(seq): # insert code to return the number of occurrences of 11111 in the sequence seq = "".join(map(str, seq)) tofind = "11111" found_num = 0 i = seq.find(tofind) while i != -1: found_num += 1 i = seq.find(tofind, i + 1) return found_num def main(p1): seq = generate(p1) return count(seq) print(main(2 / 3)) ###Output 1352 ###Markdown *The probability of "11111" at any given position in the sequence can be calculated as (2/3)^5 ≈ 0.13169. The number of occurrences is close to 10000 times this: 1316.9. To be more precise, the expected number of occurrences is about 0.13169 x 9996 ≈ 1316.3, because there are only 9996 places for a subsequence of length five in a sequence of 10000. The actual number will usually (in fact, with over 99% probability) be somewhere between 1230 and 1404. We check the solution allowing for an even wider margin that covers 99.99% of the cases.* Exercise 8: Fishing in the Nordics -- AdvancedSuppose we also happen to know the gender of the lottery winner. 
Here are same OECD statistics as above broken down by gender:|Country |Population |Male fishers |Female fishers |Fishers (total)||:----------|:----------|:--------------|:--------------|:--------------||Denmark |5,615,000 |1822 |69 |1891 ||Finland |5,439,000 |2575 |77 |2652 ||Iceland |324,000 |3400 |400 |3800 ||Norway |5,080,000 |11,291 |320 |11,611 ||Sweden |9,609,000 |1731 |26 |1757 ||TOTAL |26,067,000 |20,819 |892 |21,711 |Write a function that uses the above numbers and tries to guess the nationality of the winner when we know that the winner is a fisher and their gender (either female or male).The argument of the function should be the gender of the winner ('female' or 'male'). The return value of the function should be a pair (country, probability) where country is the most likely nationality of the winner and probability is the probability of the country being the nationality of the winner. ###Code countries = ["Denmark", "Finland", "Iceland", "Norway", "Sweden"] populations = [5615000, 5439000, 324000, 5080000, 9609000] male_fishers = [1822, 2575, 3400, 11291, 1731] female_fishers = [69, 77, 400, 320, 26] def guess(winner_gender): """guess the nationality of the winner when we know that the winner is a fisher and their gender """ # P(nat ∣ male_fisher)= male_fishers(nat)÷fishers(total) # P(nat ∣ female_fisher)= female_fishers(nat)÷fishers(total) if winner_gender == "female": fishers = female_fishers else: fishers = male_fishers # write your solution here guess = None biggest = 0.0 for country, fisher in zip(countries, fishers): frac = fisher / sum(fishers) * 100 if frac > biggest: guess = country biggest = frac return (guess, biggest) def main(): country, fraction = guess("male") print( "if the winner is male, my guess is he's from %s; probability %.2f%%" % (country, fraction) ) country, fraction = guess("female") print( "if the winner is female, my guess is she's from %s; probability %.2f%%" % (country, fraction) ) main() ###Output if the winner is male, my guess is he's from Norway; probability 54.23% if the winner is female, my guess is she's from Iceland; probability 44.84% ###Markdown II.The Bayes Rule Exercise 9: Block or not -- AdvancedLet's suppose you have a social media account on Instagram, Twitter, or some other platform. (Just in case you don't, it doesn't matter. We'll fill you in with the relevant information.) You check your account and notice that you have a new follower – this means that another user has decided to start following you to see things that you post. You don't recognize the person, and their username (or "handle" as it's called) is a little strange: John37330190. You don't want to have creepy bots following you, so you wonder. To decide whether you should block the new follower, you decide to use the Bayes rule!Suppose we know the probability that a new follower is a bot. You'll be writing a program that takes this value as an input. For now, let's just call this value P(bot). You'll also be given the probability that the username of a bot account includes an 8-digit number, which we'll call P(8-digits | bot), as well as the same probability for human (non-bot) accounts, P(8-digits | human).To use the Bayes rule, we'll also need to know the probability that a new follower (can be either bot or human) has an 8-digit number in their username, P(8-digits). The nice thing is that we can calculate P(8-digits) from the above information. 
The formula is as follows:`P(8-digits) = P(8-digits | bot) x P(bot) + P(8-digits | human) x P(human)`Remember that you can get P(human) simply as 1–P(bot), since these are the only options. (We consider business and other accounts as "human" as long as they aren't bots.)Write a program that takes as input the probability of a follower being a bot (pbot), the probability of a bot having a username with 8 digits (p8_bot), and the probability of a human having a username with 8 digits (p8_human). The values for these inputs are free for you to choose, but they have to be probabilitites, so they have to be between 0 and 1.Using the numbers you give the program calculate P(8-digits) and then use it and the Bayes rule to calculate and print out the probability of the new follower being a bot, P(bot | 8-digits):`P(bot | 8-digits) = P(8-digits | bot) x P(bot) / P(8-digits)`. ###Code def bot8(pbot, p8_bot, p8_human): # P(8-digits) = P(8-digits | bot) x P(bot) + P(8-digits | human) x P(human) p8 = p8_bot * pbot + p8_human * (1 - pbot) # P(bot | 8-digits) = P(8-digits | bot) x P(bot) / P(8-digits) pbot_8 = p8_bot * pbot / p8 print(pbot_8) # you can change these values to test your program with different values pbot = 0.1 p8_bot = 0.8 p8_human = 0.05 bot8(pbot, p8_bot, p8_human) ###Output 0.64 ###Markdown III.Naive Bayes classifier Exercise 10: Naive Bayes classifier -- AdvancedWe have two dice in our desk drawer. One is a normal, plain die with six sides such that each of the sides comes up with equal 1/6 probability. The other one is a loaded die that also has six sides, but that however gives the outcome 6 with every second try on the average, the other five sides being equally probable.Thus with the first, normal die the probabilities of each side are the same, 0.167 (or 16.7 %). With the second, loaded die, the probability of 6 is 0.5 (or 50 %) and each of the other five sides has probability 0.1 (or 10 %).The following program gets as its input the choice of the die and then simulates a sequence of ten rolls.Your task: starting from the odds 1:1, use the naive Bayes method to update the odds after each outcome to decide which of the dice is more likely. Edit the function bayes so that it returns True if the most likely die is the loaded one, and False otherwise. Remember to be careful with the indices when accessing list elements! ###Code import numpy as np p1 = [1 / 6, 1 / 6, 1 / 6, 1 / 6, 1 / 6, 1 / 6] # normal p2 = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5] # loaded def roll(loaded): if loaded: print("Rolling a loaded die") p = p2 else: print("Rolling a normal die") p = p1 # roll the dice 10 times # add 1 to get dice rolls from 1 to 6 instead of 0 to 5 sequence = np.random.choice(6, size=10, p=p) + 1 for roll in sequence: print("rolled %d" % roll) return sequence def bayes(sequence): """ Starting from the odds 1:1, use the naive Bayes method to update the odds after each outcome to decide which of the dice is more likely Edit the function bayes so that it returns True if the most likely die is the loaded one, and False otherwise. 
""" odds = 1.0 # start with odds 1:1 for roll in sequence: odds *= p2[roll - 1] / p1[roll - 1] # edit here to update the odds return True if odds > 1 else False sequence = roll(False) # False = normal die, try changing to True if bayes(sequence): print("I think loaded") else: print("I think normal") ###Output Rolling a normal die rolled 6 rolled 4 rolled 1 rolled 4 rolled 2 rolled 6 rolled 6 rolled 6 rolled 6 rolled 4 I think loaded ###Markdown Machine learning I.Linear regression Exercise 11: Real estate price predictions -- AdvancedEdit the following program so that it can process multiple cabins that may be described by any number of details (like five below), at the same time. You can assume that each of the lists contained in the list x and the coefficients c contain the same number of elements. ###Code # input values for three mökkis: size, size of sauna, distance to water, number of indoor bathrooms, # proximity of neighbors X = [[66, 5, 15, 2, 500], [21, 3, 50, 1, 100], [120, 15, 5, 2, 1200]] c = [3000, 200, -50, 5000, 100] # coefficient values def predict(X, c): for cabin in range(len(X)): price = sum(map(lambda xx, cc: xx * cc, X[cabin], c)) print(price) predict(X, c) ###Output 258250 76100 492750 ###Markdown Exercise 12: Least squares --AdvancedWrite a program that calculates the squared error for multiple sets of coefficient values and prints out the index of the set that yields the smallest squared error: this is a poor man's version of the least squares method where we only consider a fixed set of alternative coefficient vectors instead of finding the global optimum. ###Code import numpy as np # data X = np.array([[66, 5, 15, 2, 500], [21, 3, 50, 1, 100], [120, 15, 5, 2, 1200]]) y = np.array([250000, 60000, 525000]) # alternative sets of coefficient values c = np.array( [ [3000, 200, -50, 5000, 100], [2000, -250, -100, 150, 250], [3000, -100, -150, 0, 150], ] ) def find_best(X, y, c): smallest_error = np.Inf best_index = -1 for ind, coeff in enumerate(c): sqerr = sum((y - X @ coeff) ** 2) if sqerr < smallest_error: best_index = ind smallest_error = sqerr print("the best set is set %d" % best_index) find_best(X, y, c) ###Output the best set is set 1 ###Markdown Exercise 13: Predictions with more data -- AdvancedWrite a program that reads cabin details and prices from a CSV file (a standard format for tabular data) and fits a linear regression model to it. The program should be able to handle any number of data points (cabins) described by any number of features (like size, size of sauna, number of bathrooms, ...).You can read a CSV file with the function `np.genfromtxt(datafile, skip_header=1)`. This will return a numpy array that contains the feature data in the columns preceding the last one, and the price data in the last column. The option skip_header=1 just means that the first line in the file is supposed to contain just the column names and shouldn't be included in the actual data.The output of the program should be the **estimated** coefficients and the **predicted or "fitted"** prices for the same set of cabins used to estimate the parameters. So if you fit the model using data for six cabins with known prices, the program will print out the prices that the model predicts for those six cabins (even if the actual prices are already given in the data).Note that here we will not actually only simulate the file input using Python's **io.StringIO** function that takes an input string and pretends that the contents is coming from a file. 
In practice, you would just name the input file that contains the data in the same format as the string input below. ###Code import numpy as np from io import StringIO input_string = """ 25 2 50 1 500 127900 39 3 10 1 1000 222100 13 2 13 1 1000 143750 82 5 20 2 120 268000 130 6 10 2 600 460700 115 6 10 1 550 407000 """ np.set_printoptions( precision=1 ) # this just changes the output settings for easier reading def fit_model(input_file): # read the data in and fit it. the values below are placeholder values c = np.asarray([]) # coefficients of the linear regression x = np.asarray([]) # input data to the linear regression # This will return a numpy array that contains the feature data in the columns preceding the last one, # and the price data in the last column. data = np.genfromtxt(input_file, skip_header=1) x = data[:, :-1] y = data[:, -1] c = np.linalg.lstsq(x, y)[0] print(c) print(x @ c) # simulate reading a file input_file = StringIO(input_string) fit_model(input_file) ###Output [2989.6 800.6 -44.8 3890.8 99.8] [127907.6 222269.8 143604.5 268017.6 460686.6 406959.9] <ipython-input-44-e41c6fd3422e>:24: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions. To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`. c = np.linalg.lstsq(x, y)[0] ###Markdown Exercise 14: Training data vs test data -- AdvancedWrite a program that reads data about one set of cabins (training data), estimates linear regression coefficients based on it, then reads data about another set of cabins (test data), and predicts the prices in it. Note that both data sets contain the actual prices, but the program should ignore the prices in the second set. They are given only for comparison.You can read the data into the program the same way as in the previous exercise.You should then separate the feature and price data that you have just read from the file into two separate arrays names `x_train` and `y_train`, so that you can use them as argument to `np.linalg.lstsq`.The program should work even if the number of features used to describe the cabins differs from five (as long as the same number of features are given in each file).The output should be the set of coefficients for the linear regression and the predicted prices for the second set of cabins. 
###Code import numpy as np from io import StringIO train_string = """ 25 2 50 1 500 127900 39 3 10 1 1000 222100 13 2 13 1 1000 143750 82 5 20 2 120 268000 130 6 10 2 600 460700 115 6 10 1 550 407000 """ test_string = """ 36 3 15 1 850 196000 75 5 18 2 540 290000 """ def main(): np.set_printoptions( precision=1 ) # this just changes the output settings for easier reading # read in the training data and separate it to x_train and y_train # simulate reading a file input_train, input_test = StringIO(train_string), StringIO(test_string) data_train, data_test = np.genfromtxt(input_train, skip_header=1), np.genfromtxt( input_test, skip_header=1 ) x_train = data_train[:, :-1] y_train = data_train[:, -1] # fit a linear regression model to the data and get the coefficients c = np.asarray([]) c = np.linalg.lstsq(x_train, y_train)[0] # read in the test data and separate x_test from it x_test = np.asarray([]) x_test = data_test[:, :-1] # print out the linear regression coefficients print(c) # this will print out the predicted prics for the two new cabins in the test data set print(x_test @ c) main() ###Output [2989.6 800.6 -44.8 3890.8 99.8] [198102.4 289108.3] <ipython-input-45-3ec61d2d8b33>:32: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions. To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`. c = np.linalg.lstsq(x_train, y_train)[0] ###Markdown II.The nearest neighbor method Exercise 15: Vector distances -- AdvancedYou are given an array x_train with multiple input vectors (the "training data") and another array x_test with one more input vector (the "test data"). Find the vector in x_train that is most similar to the vector in x_test. In other words, find the nearest neighbor of the test data point x_test.The code template gives the function dist to calculate the distance between any two vectors. What you need to add is the implementation of the function nearest that takes the arrays x_train and x_test and prints the index (as an integer between 0, ..., len(x_train)-1) of the nearest neighbor. ###Code import numpy as np x_train = np.random.rand(10, 3) # generate 10 random vectors of dimension 3 x_test = np.random.rand(3) # generate one more random vector of the same dimension def dist(a, b): sum = 0 for ai, bi in zip(a, b): sum = sum + (ai - bi) ** 2 return np.sqrt(sum) def nearest(x_train, x_test): nearest = -1 min_distance = np.Inf # add a loop here that goes through all the vectors in x_train and finds the one that # is nearest to x_test. return the index (between 0, ..., len(x_train)-1) of the nearest # neighbor for i, x in enumerate(x_train): distance = dist(x, x_test) if distance < min_distance: min_distance = distance nearest = i print(nearest) nearest(x_train, x_test) ###Output 3 ###Markdown Exercise 16: Nearest neighbor --In the basic nearest neighbor classifier, the only thing that matters is the class label of the nearest neighbor. But the nearest neighbor may sometimes be noisy or otherwise misleading. Therefore, it may be better to also consider the other nearby data points in addition to the nearest neighbor.This idea leads us to the so called k-nearest neighbor method, where we consider all the k nearest neighbors. If k=3, for example, we'd take the three nearest points and choose the class label based on the majority class among them.The program below uses the library **sklearn** to create random data. 
Our input variable X has two features (compare to, say, cabin size and cabin price) and our target variable y is binary: it is either 0 or 1 (again think, for example, "is the cabin awesome or not.")Complete the following program so that it finds the three nearest data points (k=3) for each of the test data points and classifies them based on the majority class among the neighbors. Currently it generates the random data, splits it into training and test sets, calculates the distances between each test set items and the training set items, but it fails to classify the test set items according to the correct class, setting them all to belong to class 0. Instead of looking at just the nearest neighbor's class, it should use three neighbors and pick the majority class (the most common) class among the three neighbours, and use that as the class for the test item. ###Code ## INTERM import numpy as np from sklearn.datasets import make_blobs from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split # create random data with two classes X, y = make_blobs(n_samples=16, n_features=2, centers=2, center_box=(-2, 2)) # scale the data so that all values are between 0.0 and 1.0 X = MinMaxScaler().fit_transform(X) # split two data points from the data as test data and # use the remaining n-2 points as the training data X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=2) # place-holder for the predicted classes y_predict = np.empty(len(y_test), dtype=np.int64) # produce line segments that connect the test data points # to the nearest neighbors for drawing the chart lines = [] # distance function def dist(a, b): sum = 0 for ai, bi in zip(a, b): sum = sum + (ai - bi) ** 2 return np.sqrt(sum) def main(X_train, X_test, y_train, y_test): global y_predict global lines # process each of the test data points for i, test_item in enumerate(X_test): # calculate the distances to all training points distances = [dist(train_item, test_item) for train_item in X_train] # find the index of the nearest neighbor nearest = np.argmin(distances) # create a line connecting the points for the chart lines.append(np.stack((test_item, X_train[nearest]))) # add your code here: # y_predict[i] = 0 # this just classifies everything as 0 y_predict[i] = y_train[nearest] print(y_predict) main(X_train, X_test, y_train, y_test) ## ADVANCED import numpy as np from sklearn.datasets import make_blobs from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split # create random data with two classes X, Y = make_blobs(n_samples=16, n_features=2, centers=2, center_box=(-2, 2)) # scale the data so that all values are between 0.0 and 1.0 X = MinMaxScaler().fit_transform(X) # split two data points from the data as test data and # use the remaining n-2 points as the training data X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=2) # place-holder for the predicted classes y_predict = np.empty(len(y_test), dtype=np.int64) # produce line segments that connect the test data points # to the nearest neighbors for drawing the chart lines = [] # distance function def dist(a, b): sum = 0 for ai, bi in zip(a, b): sum = sum + (ai - bi) ** 2 return np.sqrt(sum) def main(X_train, X_test, y_train, y_test): global y_predict global lines k = 3 # classify our test items based on the classes of 3 nearest neighbors # process each of the test data points for i, test_item in enumerate(X_test): # calculate the distances to all training points distances = 
[dist(train_item, test_item) for train_item in X_train] # add your code here # nearest = np.argmin(distances) # this just finds the nearest neighbour (so k=1) y_train_k = [y_train[i] for i in np.argpartition(distances, k)[:k]] nearest = np.bincount(y_train_k).argmax() # create a line connecting the points for the chart # you may change this to do the same for all the k nearest neigbhors if you like # but it will not be checked in the tests lines.append(np.stack((test_item, X_train[nearest]))) y_predict[i] = nearest # 0 # this just classifies everything as 0 print(y_predict) main(X_train, X_test, y_train, y_test) ###Output [0 1] ###Markdown III. Working with text Exercise 17: Bag of words -- AdvancedYour task is to write a program that calculates the distances (or differences) between every pair of lines in the This Little Piggy rhyme and find the most similar pair. Use the Manhattan distance as your distance metric.You can start by building a numpy array with all the distances. Notice that the diagonal elements (elements at positions [i, j] with i=j) will be equal to zero because each row is equal to itself. To avoid selecting them, you can assign the value np.inf (the maximum possible floating point value). Note that to do this, it's necessary to make sure the type of the array is float. A convenient and fast way to get the index of the element with the lowest value in a 2D array (or in fact, any dimension) is by the functionnp.unravel_index(np.argmin(dist), dist.shape))where dist is the array. This will return the index of the lowest valued element as a list of length two (assuming the array is two-dimensional). ###Code import numpy as np data = [ [1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1], [1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1], [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1], [1, 1, 1, 0, 1, 3, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1], ] def distance(row1, row2): return sum(abs(i - j) for i, j in zip(row1, row2)) def find_nearest_pair(data): N = len(data) dist = np.empty((N, N), dtype=float) # for i in range(N): # for j in range(N): # dist[i, j] = np.inf if i == j else distance(data[i], data[j]) # shorter version: dist = np.array( [ np.array( [distance(sent1, sent2) if sent1 != sent2 else np.inf for sent1 in data] ) for sent2 in data ] ) print(np.unravel_index(np.argmin(dist), dist.shape)) find_nearest_pair(data) ###Output (2, 3) ###Markdown Exercise 18: TF-IDF -- IntermediateModify the following program to print out the tf-idf values for each document and each word. The following code calculates the tf and df values, so you'll just need to combine them according to the correct formula. There are three documents (sentences) and a total of eight terms (unique words), so the output should be three lists of eight tf-idf values each. ###Code # Modify the following program to print out the tf-idf values for each document and each word. 
# DATA BLOCK text = """he really really loves coffee my sister dislikes coffee my sister loves tea""" import math def main(text): # split the text first into lines and then into lists of words docs = [line.split() for line in text.splitlines()] N = len(docs) # create the vocabulary: the list of words that appear at least once vocabulary = list(set(text.split())) df = {} tf = {} for word in vocabulary: # tf: number of occurrences of word w in document divided by document length # note: tf[word] will be a list containing the tf of each word for each document # for example tf['he'][0] contains the term frequence of the word 'he' in the first # document tf[word] = [doc.count(word) / len(doc) for doc in docs] # df: number of documents containing word w df[word] = sum([word in doc for doc in docs]) / N # loop through documents to calculate the tf-idf values for doc_index, doc in enumerate(docs): tfidf = [] for word in vocabulary: to_append = tf[word][doc_index] * math.log(1 / df[word], 10) if to_append != 0: tfidf.append(to_append) print(tfidf) main(text) ###Output [0.19084850188786498, 0.09542425094393249, 0.03521825181113625, 0.03521825181113625] [0.04402281476392031, 0.04402281476392031, 0.04402281476392031, 0.11928031367991561] [0.04402281476392031, 0.04402281476392031, 0.04402281476392031, 0.11928031367991561] ###Markdown -- AdvancedWrite a program that uses the tf-idf vectors to find the most similar pair of lines in a given data set. You can test your solution with the example text below. Note, however, that your solution will be tested on other data sets too, so make sure you don't make use of any special properties of the example data (like there being four lines of text). ###Code text = """Humpty Dumpty sat on a wall Humpty Dumpty had a great fall all the king's horses and all the king's men couldn't put Humpty together again""" import math import numpy as np def main(text): # 1. split the text into words, and get a list of unique words that appear in it # a short one-liner to separate the text into sentences (with words lower-cased to make words equal # despite casing) can be done with # docs = [line.lower().split() for line in text.split('\n')] text = text.lower() voc = list(set(text.split())) docs = [line.split() for line in text.split("\n")] # 2. go over each unique word and calculate its term frequency, and its document frequency # The document frequency of a word is the number of documents that contain at least one occurrence of the word tf = dict() df = dict() for word in voc: tf[word] = [doc.count(word) / len(doc) for doc in docs] df[word] = sum([word in doc for doc in docs]) / len(docs) # 3. after you have your term frequencies and document frequencies, go over each line in the text and # calculate its TF-IDF representation, which will be a vector tfdf = [] # TF-IDF vector or all documents for doc_index, doc in enumerate(docs): tfidf = [] for word in voc: to_append = tf[word][doc_index] * math.log(1 / df[word], 10) # adding even 0 values as otherwise vectors have different length tfidf.append(to_append) tfdf.append(tfidf) # 4. after you have calculated the TF-IDF representations for each line in the text, you need to # calculate the distances between each line to find which are the closest. 
def distance(row1, row2): return sum(abs(i - j) for i, j in zip(row1, row2)) def find_nearest_pair(data): N = len(data) dist = np.empty((N, N), dtype=float) # SAME: dist = np.array([np.array([distance(sent1, sent2) if sent1 != sent2 else np.inf for sent1 in data]) for sent2 in data]) for i in range(N): for j in range(N): dist[i, j] = np.inf if i == j else distance(data[i], data[j]) print(np.unravel_index(np.argmin(dist), dist.shape)) find_nearest_pair(tfdf) main(text) ###Output (0, 1) ###Markdown IV.Overfitting Exercise 19: Looking out for overfitting -- AdvancedThe program below uses the k-nearest neighbors algorithm. The idea is to not only look at the single nearest training data point (neighbor) but for example the five nearest points, if k=5. The normal nearest neighbor classifier amounts to using k=1.Write a program that does the classification for some value of k and prints out the training and testing accuracy.Hint: You can get the model accuracy for a given set using the function knn.score.Try different values of k to answer the questions below. ###Code from sklearn.neighbors import KNeighborsClassifier from sklearn.datasets import make_moons from sklearn.model_selection import train_test_split import numpy as np # do not edit this # create fake data x, y = make_moons( n_samples=500, random_state=42, noise=0.3 # the number of observations ) x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.33, random_state=42 ) # Create a classifier and fit it to our data knn = KNeighborsClassifier(n_neighbors=42) # <-- that's the k! knn.fit(x_train, y_train) train_acc = knn.score(x_train, y_train) test_acc = knn.score(x_test, y_test) print(f"training accuracy: {train_acc}") print(f"testing accuracy: {test_acc}") ###Output training accuracy: 0.9253731343283582 testing accuracy: 0.9090909090909091 ###Markdown **What would be a reasonable baseline accuracy your model should outperform in order for it to be considered useful?**- [x] 0.50- [ ] 0.25- [ ] any performance that is better than all wrong is enough as a baselineThere are two classes, and the data points are evenly split among them. Assigning every point to either class, or picking a class randomly would result in a 50% accuracy.**Which of the following values of k do you think was "best"?**- [ ] the choice of k doesn't matter- [ ] k = 1- [ ] k = 250- [x] k = 42**Why?**- [ ] it gave the lowest training accuracy- [ ] it gave the highest training accuracy- [x] it gave the highest testing accuracy- [ ] it gave the lowest testing accuracy- [ ] the choice of k doesn't matter**Is it possible to have a higher test set accuracy than training set accuracy?**- [x] yes- [ ] no Neural networks I.Logistic regression Exercise 20: Logistic regression -- AdvancedYou are given a set of three input values and you also have multiple alternative sets of three coefficients. Calculate the predicted output value using the linear formula combined with the logistic activation function.Do this with all the alternative sets of coefficients. Which of the coefficient sets yields the highest sigmoid output? 
###Code import math import numpy as np x = np.array([4, 3, 0]) c1 = np.array([-0.5, 0.1, 0.08]) c2 = np.array([-0.2, 0.2, 0.31]) c3 = np.array([0.5, -0.1, 2.53]) def sigmoid(z): # add your implementation of the sigmoid function here # Sigmoid function: s(z) = 1÷(1+exp(−z)) z = -z # exp(z) does not accept "-z" as argument print(1 / (1 + math.exp(z))) # calculate the output of the sigmoid for x with all three coefficients sigmoid(x @ c1) sigmoid(x @ c2) sigmoid(x @ c3) # <-- this one ###Output 0.1544652650835347 0.45016600268752216 0.8455347349164652 ###Markdown II.From logistic regression to neural networks Exercise 21: Neural Networks -- AdvancedWe have trained a simple neural network with a larger set of cabin price data. The network predicts the price of the cabin based on the attributes of the cabin. The network consists of an input layer with five nodes, a hidden layer with two nodes, a second hidden layer with two nodes, and finally an output layer with a single node. In addition, there is a single bias node for each hidden layer and the output layer.The program below uses the weights of this trained network to perform what is called a forward pass of the neural network. The forward pass is running the input variables through the neural network to obtain output, in this case the price of a cabin of given attributes.The program is incomplete though. The bias nodes are not used in the version below, and the activation functions for the hidden layers and the output layer have not been properly defined.Modify the program to use the bias nodes, and to use the ReLU activation function for the hidden nodes, and a linear (identity) activation for the output node. ReLU activation function returns either the input value of the function, or zero, whichever is the largest, and linear activation just returns the input as output. After these are done, get a prediction for the price of a cabin which is described by the following feature vector `[74, 5, 10, 2, 100]`. ###Code import numpy as np w0 = np.array( [ [1.19627687e01, 2.60163283e-01], [4.48832507e-01, 4.00666119e-01], [-2.75768443e-01, 3.43724167e-01], [2.29138536e01, 3.91783025e-01], [-1.22397711e-02, -1.03029800e00], ] ) w1 = np.array([[11.5631751, 11.87043684], [-0.85735419, 0.27114237]]) w2 = np.array([[11.04122165], [10.44637262]]) b0 = np.array([-4.21310294, -0.52664488]) b1 = np.array([-4.84067881, -4.53335139]) b2 = np.array([-7.52942418]) x = np.array( [ [111, 13, 12, 1, 161], [125, 13, 66, 1, 468], [46, 6, 127, 2, 961], [80, 9, 80, 2, 816], [33, 10, 18, 2, 297], [85, 9, 111, 3, 601], [24, 10, 105, 2, 1072], [31, 4, 66, 1, 417], [56, 3, 60, 1, 36], [49, 3, 147, 2, 179], ] ) y = np.array( [ 335800.0, 379100.0, 118950.0, 247200.0, 107950.0, 266550.0, 75850.0, 93300.0, 170650.0, 149000.0, ] ) def hidden_activation(z): # ReLU activation. fix this! # ReLU activation function returns either the input value of the function, or zero, whichever is the largest return np.maximum(0, z) def output_activation(z): # identity (linear) activation. fix this! # linear activation just returns the input as output return z x_test = [[72, 2, 25, 3, 450], [60, 3, 15, 1, 300], [74, 5, 10, 2, 100]] for item in x_test: h1_in = ( np.dot(item, w0) + b0 ) # this calculates the linear combination of inputs and weights. it is missing the bias term, fix it! h1_out = hidden_activation(h1_in) # apply activation function h2_in = ( np.dot(h1_out, w1) + b1 ) # the output of the previous layer is the input for this layer. it is missing the bias term, fix it! 
h2_out = hidden_activation(h2_in) out_in = np.dot(h2_out, w2) + b2 out = output_activation(out_in) print(out) ###Output [230008.7] [183615.4] [232721.4] ###Markdown **What price does the neural network predict for the cabin in question?**- roughly 233000**What type of a machine learning problem is this?**- [ ] unsupervised learning- [x] supervised learning- [ ] reinforcement learning**How can we make sure we are not overfitting the neural network to the data?**- [ ] neural network will always overfit because there are too many parameters for a linear problem like this- [ ] use the full set of cabin data as a training set, and a small subset of it as a testing set- [x] use cross-validation Exercise 21: Neural Networks -- AdvancedWe have trained a simple neural network with a larger set of cabin price data. The network predicts the price of the cabin based on the attributes of the cabin. The network consists of an input layer with five nodes, a hidden layer with two nodes, a second hidden layer with two nodes, and finally an output layer with a single node. In addition, there is a single bias node for each hidden layer and the output layer.The program below uses the weights of this trained network to perform what is called a forward pass of the neural network. The forward pass is running the input variables through the neural network to obtain output, in this case the price of a cabin of given attributes.The program is incomplete though. The program only does the forward pass up to the first hidden layer and is missing the second hidden layer and the output layer.Modify the program to do a full forward pass and print out the price prediction. To do this, write out the remaining forward pass operations and use the ReLU activation function for the hidden nodes, and a linear (identity) activation for the output node. ReLU activation function returns either the input value of the function, or zero, whichever is the largest, and linear activation just returns the input as output. After these are done, get a prediction for the price of a cabin which is described by the following feature vector `[82, 2, 65, 3, 516]`. ###Code import numpy as np w0 = np.array( [ [1.19627687e01, 2.60163283e-01], [4.48832507e-01, 4.00666119e-01], [-2.75768443e-01, 3.43724167e-01], [2.29138536e01, 3.91783025e-01], [-1.22397711e-02, -1.03029800e00], ] ) w1 = np.array([[11.5631751, 11.87043684], [-0.85735419, 0.27114237]]) w2 = np.array([[11.04122165], [10.44637262]]) b0 = np.array([-4.21310294, -0.52664488]) b1 = np.array([-4.84067881, -4.53335139]) b2 = np.array([-7.52942418]) x = np.array( [ [111, 13, 12, 1, 161], [125, 13, 66, 1, 468], [46, 6, 127, 2, 961], [80, 9, 80, 2, 816], [33, 10, 18, 2, 297], [85, 9, 111, 3, 601], [24, 10, 105, 2, 1072], [31, 4, 66, 1, 417], [56, 3, 60, 1, 36], [49, 3, 147, 2, 179], ] ) y = np.array( [ 335800.0, 379100.0, 118950.0, 247200.0, 107950.0, 266550.0, 75850.0, 93300.0, 170650.0, 149000.0, ] ) def hidden_activation(z): # ReLU activation. fix this! return np.maximum(0, z) def output_activation(z): # identity (linear) activation. fix this! 
return z x_test = [[82, 2, 65, 3, 516]] for item in x_test: h1_in = ( np.dot(item, w0) + b0 ) # this calculates the linear combination of inputs and weights h1_out = hidden_activation(h1_in) # apply activation function # fill out the missing parts: # the output of the first hidden layer, h1_out, will need to go through # the second hidden layer with weights w1 and bias b1 h2_in = np.dot(h1_out, w1) + b1 h2_out = hidden_activation(h2_in) # apply activation function # and finally to the output layer with weights w2 and bias b2. # remember correct activations: relu in the hidden layers and linear (identity) in the output out_in = np.dot(h2_out, w2) + b2 out = output_activation(out_in) print(out) ###Output [257136.4]
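###Markdown The loop above runs the forward pass one cabin at a time. As a minimal sketch (reusing the same `w0`, `w1`, `w2`, `b0`, `b1`, `b2` and the activation functions defined above; the helper name `predict` is just for illustration), the same computation can be wrapped in a single vectorized function that handles a whole feature matrix at once: ###Code
def predict(features):
    # first hidden layer: linear combination of inputs and weights plus the bias, then ReLU
    h1 = hidden_activation(np.dot(features, w0) + b0)
    # second hidden layer: same pattern with w1 and b1
    h2 = hidden_activation(np.dot(h1, w1) + b1)
    # output layer: linear (identity) activation
    return output_activation(np.dot(h2, w2) + b2)

# one call handles any number of cabins, e.g. the test vector or the full training matrix x
print(predict(np.array(x_test)))
print(predict(x))
###Output _____no_output_____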
Feature Engineering/.ipynb_checkpoints/#1 Missing Values - Mean-Median-Mode Imputation-checkpoint.ipynb
###Markdown Missing Values - Feature Engineering Lifecycle of a Data Science Project 1. Data Collection Strategy -- from the company, 3rd-party APIs, surveys 2. Feature Engineering -- Handling Missing Values What are the different types of Missing Data? 1. Missing Completely at Random, MCAR: A variable is missing completely at random (MCAR) if the probability of being missing is the same for all the observations. When data is MCAR, there is absolutely no relationship between the missing data and any other values, observed or missing, within the dataset. In other words, those missing data points are a random subset of the data. There is nothing systematic going on that makes some data more likely to be missing than others. ###Code import pandas as pd import numpy as np data = pd.read_csv('titanic.csv') data.head() data.isnull().sum() #missing completely at random data[data['Embarked'].isnull()] ###Output _____no_output_____ ###Markdown 2. Missing Data Not at Random (MNAR): Systematic missing values There is some relationship between the missing data and any other values, observed or missing, within the dataset. ###Code data[data['Cabin'].isnull()] # convert the values of NaN in Cabin to 1 (if true) and 0 (if false) data['Cabin_null'] = np.where(data['Cabin'].isnull(),1,0) # find the percentage of null values data['Cabin_null'].mean() data.groupby(['Survived'])['Cabin_null'].mean() # 0 - Not Survived # 1 - Survived ###Output _____no_output_____ ###Markdown Missing at Random (MAR) All the techniques for handling missing values: 1. Mean/Median/Mode replacement 2. Random Sample Imputation 3. Capturing NaN values with a new feature 4. End of Distribution imputation 5. Arbitrary value imputation 6. Frequent categories imputation Mean/Median/Mode imputation When should we apply it? Mean/median imputation assumes that the data are missing completely at random (MCAR). We replace the NaN with the mean or median of the variable; for mode imputation we use the most frequent occurrence of the variable. ###Code df=pd.read_csv('titanic.csv',usecols=['Age','Fare','Survived']) df.head() #see the percentage of missing values df.isnull().mean() def impute_nan(df,variable,median): df[variable+"_median"]=df[variable].fillna(median) median=df.Age.median() median impute_nan(df,'Age',median) df.head() # check the standard deviation print(df['Age'].std()) print(df['Age_median'].std()) import matplotlib.pyplot as plt %matplotlib inline fig = plt.figure() ax = fig.add_subplot(111) df['Age'].plot(kind='kde', ax=ax) df.Age_median.plot(kind='kde', ax=ax, color='red') lines, labels = ax.get_legend_handles_labels() ax.legend(lines, labels, loc='best') ###Output _____no_output_____
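###Markdown The list above mentions several techniques besides mean/median imputation. As a minimal sketch (reusing the same `df` loaded above; the helper name `impute_nan_random` is just for illustration), random sample imputation fills each missing value with a value drawn at random from the observed values of the same variable, which preserves the original distribution better than a single constant: ###Code
def impute_nan_random(df, variable):
    df[variable + "_random"] = df[variable].copy()
    # draw as many observed values as there are missing entries
    random_sample = df[variable].dropna().sample(df[variable].isnull().sum(), random_state=0)
    # align the sampled values with the index positions of the missing entries
    random_sample.index = df[df[variable].isnull()].index
    df.loc[df[variable].isnull(), variable + "_random"] = random_sample

impute_nan_random(df, 'Age')
# compare the spread of the original and the imputed variables
print(df['Age'].std())
print(df['Age_median'].std())
print(df['Age_random'].std())
###Output _____no_output_____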
4.2 feature_extraction and visualization.ipynb
###Markdown Remove problematic classes Classes of Background music, Theme and Soundtrack proved to be problematic, with high rates of confusion with the other classes. To make the prediction more robust, samples that are labeled with these classes are removed from the dataset. ###Code df = df_archive print (df.shape) label_dict = { 'Background_music':0, 'Theme_music':1, 'Jingle':2, 'Soundtrack_music':3, 'Lullaby':4, 'Video_game_music':5, 'Christmas_music':6, 'Dance_music':7, 'Wedding_music':8} # 'Birthday_music':9} to_remove = [0, 1, 3] df['label'] = df['file_name'].apply(lambda x: label_dict[x[:-4].split('_', 1)[1]]) count = 0 for l in to_remove: df = df[df['label'] != l] df = df.reset_index(drop=True) # set index back to sequential #del(df['label']) print (df.shape) df.head() df.to_csv('extracted_features/df_features_cutted_classes_mfcc7_cutted_features.csv', index=False) ###Output _____no_output_____ ###Markdown Visualizing features (t-SNE) Visualizing the obtained features using t-SNE. ###Code # Re-label after removing classes new_label_dict = { #'Background_music':0, # 'Theme_music':0, 'Jingle':0, #'Soundtrack_music':1, 'Lullaby':1, 'Video_game_music':2, 'Christmas_music':3, 'Dance_music':4, 'Wedding_music':5} # 'Birthday_music':9} df['label'] = df['file_name'].apply(lambda x: new_label_dict[x[:-4].split('_', 1)[1]]) print(df.shape) df.head() %%time tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300) tsne_results = tsne.fit_transform(df[df.columns[1:-2]].values) df_tsne = df.copy() df_tsne['x-tsne'] = tsne_results[:,0] df_tsne['y-tsne'] = tsne_results[:,1] plt.figure(figsize=(10,7)) plt.title('t-SNE: 71 features visualized') # using ' + str(n_comp) + ' PCA components' plot = plt.scatter(df_tsne['x-tsne'], df_tsne['y-tsne'], c=df_tsne['label'], cmap=plt.cm.get_cmap("Paired", 6)) cbar = plt.colorbar(ticks=range(6)) cbar.set_ticklabels(list(new_label_dict.keys())) plt.clim(-0.5, 5.5) plt.show() ###Output _____no_output_____
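###Markdown t-SNE results depend strongly on perplexity and the number of iterations, and it can be slow on wide feature matrices. As a minimal sketch (assuming the same `df` with feature columns in `df.columns[1:-2]` as above; `n_comp` is an illustrative choice), the features can first be compressed with PCA before running t-SNE, a common way to speed up the embedding and suppress noise: ###Code
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

n_comp = 20  # illustrative number of PCA components
pca = PCA(n_components=n_comp)
features_pca = pca.fit_transform(df[df.columns[1:-2]].values)
print('explained variance kept: %.3f' % pca.explained_variance_ratio_.sum())

# run t-SNE on the compressed features instead of the raw feature matrix
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_pca_results = tsne.fit_transform(features_pca)
###Output _____no_output_____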
src/notebooks/analysis/.ipynb_checkpoints/inspect_results-checkpoint.ipynb
###Markdown Imports ###Code from nipgutils.misc import load, save from databases.datasets import PersonStackedMuPoTsDataset, PersonStackedMucoTempDataset, Mpi3dTestDataset, Mpi3dTrainDataset from databases.joint_sets import CocoExJoints, MuPoTSJoints, JointSet from util.pose import _calc_limb_length from databases import mpii_3dhp, mupots_3d import os %matplotlib notebook def plfig(lines=1, full_width=False): if lines==1 and not full_width: plt.figure() elif full_width: plt.figure(figsize=(9.5, 3*lines)) else: plt.figure(figsize=(4.5, 3*lines)) ###Output _____no_output_____ ###Markdown Bone stabilityVisualize the length of a bone in training/test set.It seems in Muco-Temp (and therefore in MPII-3DHP), bones are normalized except for: left/right hip-knee (30-40) spine-neck (~10) neck-shoulder (5-10)hip is also the average of lefthip/right, which is not the same as in the captury videos. It seems left/right hip joints are synthetic and are not coming directly from captury. ###Code class FullMpiiSet(JointSet): NAMES = np.array(['spine3', 'spine4', 'spine2', 'spine', 'pelvis', 'neck', 'head', 'head_top', 'left_clavicle', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand', 'right_clavicle', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand', 'left_hip', 'left_knee', 'left_ankle', 'left_foot', 'left_toe', 'right_hip' , 'right_knee', 'right_ankle', 'right_foot', 'right_toe']) NUM_JOINTS=28 NAMES.flags.writeable = False full_names = ['spine3', 'spine4', 'spine2', 'spine', 'pelvis', 'neck', 'head', 'head_top', 'left_clavicle', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand', 'right_clavicle', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand', 'left_hip', 'left_knee', 'left_ankle', 'left_foot', 'left_toe', 'right_hip' , 'right_knee', 'right_ankle', 'right_foot', 'right_toe'] parent1 = np.array([3, 1, 4, 5, 5, 2, 6, 7, 6, 9, 10, 11, 12, 6, 14, 15, 16, 17, 5, 19, 20, 21, 22, 5, 24, 25, 26, 27 ])-1 parent2 = np.array([4, 3, 5, 5, 5, 1, 2, 6, 2, 6, 9, 10, 11, 2, 6, 14, 15, 16, 4, 5, 19, 20, 21, 4, 5, 24, 25, 26]) -1 for i in range(len(full_names)): print('%15s %15s %s' % (full_names[i], full_names[parent1[i]], full_names[parent2[i]])) val_data = PersonStackedMucoTempDataset('hrnet', 'normal', 'megadepth_at_hrnet') test_data = PersonStackedMuPoTsDataset('hrnet', 'normal', 'all') refine_results = load('results_smoothed_83-2955.pkl') class RefResults: index= refine_results['index'] poses3d= refine_results['pred'] refine_data = RefResults() mpi_train_data = Mpi3dTrainDataset('hrnet', 'normal', 'megadepth_at_hrnet', True, 2) mpi_test_data = Mpi3dTestDataset('hrnet', 'normal', 'megadepth_at_hrnet') gt = mupots_3d.load_gt_annotations(16) validFrame = gt['isValidFrame'] BONES = [['left_ankle', 'left_knee'], ['left_hip', 'left_knee'], ['left_hip', 'hip'], ['hip', 'spine'], ['spine', 'head/nose'], ['left_shoulder', 'left_elbow']] # BONES = [['right_ankle', 'right_knee'], ['right_hip', 'right_knee'], ['right_hip', 'hip'], # ['hip', 'spine'], ['spine', 'neck'], ['right_shoulder', 'right_elbow']] # BONES = [['neck', 'right_shoulder'], ['right_shoulder', 'right_elbow'], ['right_elbow', 'right_wrist']] # BONES = [['spine', 'neck'], ['neck', 'head/nose'], ['head/nose', 'head_top']] # BONES=[['left_ankle', 'left_knee'], ['left_hip', 'left_knee'],['left_shoulder', 'left_elbow']] joint_set = MuPoTSJoints() data = mpi_train_data seqs = np.unique(data.index.seq) seq = np.random.choice(seqs) # seq='16/2' print(seq) inds = data.index.seq==seq plfig(1, False) # plt.subplot(1,3,1) 
names=['ankle-knee', 'knee-hip', 'elbow-shoulder'] for i,bone in enumerate(BONES): lens = _calc_limb_length(data.poses3d[inds], joint_set, [bone]) plt.plot(lens, label=bone[0]) print(np.std(lens)) # ax2=plt.gca().twinx() # ax2.plot(gt['occlusions'][:,2, joint_set.index_of('left_shoulder')], color='black') plt.legend() ###Output _____no_output_____ ###Markdown mupots: 16/2 - jumps, all frames are valid ###Code # Mupots gt vs pred joint_set = MuPoTSJoints() seqs = np.unique(test_data.index.seq) seq = np.random.choice(seqs) print(seq) inds = test_data.index.seq==seq assert np.all(refine_data.index.seq[inds]==seq) bones = [['left_ankle', 'left_knee'], ['left_knee', 'left_hip', ], ['left_hip', 'hip'], ['right_wrist', 'right_elbow'], ['right_elbow', 'right_shoulder', ], ['right_shoulder', 'neck']] plfig(2, True) for i, bone in enumerate(bones): plt.subplot(2,3,i+1) lens = _calc_limb_length(test_data.poses3d[inds], joint_set, [bone]) plt.plot(lens, label='gt') lens = _calc_limb_length(refine_data.poses3d[inds], joint_set, [bone]) plt.plot(lens, label='pred') plt.title('%s %s' % (bone[0], bone[1])) plt.tight_layout() ###Output _____no_output_____ ###Markdown Mpii-train data full joints ###Code sub=7 seq=1 annot = load(os.path.join(mpii_3dhp.MPII_3DHP_PATH, 'S%d' % sub, 'Seq%d' % seq, 'annot.mat')) annot3 = list([x[0].reshape((-1, 28, 3)).astype('float32') for x in annot['annot3']]) lhip = joint_set.index_of('left_hip') rhip = joint_set.index_of('right_hip') np.std(np.linalg.norm((p[:,rhip]+p[:,lhip])/2-p[:,joint_set.index_of('pelvis')], axis=1)) # is there any joint the knee is normalized to? answer: No lhip = joint_set.index_of('left_knee') p = annot3[7] for i in range(28): print(i, FullMpiiSet.NAMES[i], np.std(np.linalg.norm(p[:,lhip]-p[:,i], axis=1))) # BONES = [['left_hip', 'pelvis'], ['neck', 'left_clavicle'], ['spine4', 'neck']] joint_set = FullMpiiSet() plfig() for bone in BONES: lens = _calc_limb_length(annot3[9], joint_set, [bone]) plt.plot(lens, label=bone[0]) print(np.std(lens)) plt.legend() ###Output _____no_output_____
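###Markdown To summarize bone stability without picking sequences by hand, the standard deviation of each bone length can be collected over all frames of a dataset in one pass. A minimal sketch, reusing `_calc_limb_length`, the `BONES` list and the datasets loaded above (the helper name `bone_length_std` is just for illustration): ###Code
def bone_length_std(poses3d, joint_set, bones):
    """Return {bone: std of its length over all frames}."""
    stats = {}
    for bone in bones:
        lens = _calc_limb_length(poses3d, joint_set, [bone])
        stats['%s-%s' % (bone[0], bone[1])] = np.std(lens)
    return stats

for name, dataset in [('mupots-test', test_data), ('mpi3d-train', mpi_train_data)]:
    print(name)
    for bone, std in sorted(bone_length_std(dataset.poses3d, MuPoTSJoints(), BONES).items()):
        print('  %25s  std=%.2f' % (bone, std))
###Output _____no_output_____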
03-Data-Warehousing/03-01-Vertex-AI-BigQuery.ipynb
###Markdown Vizualizing BigQuery data in a Jupyter notebook[BigQuery](https://cloud.google.com/bigquery/docs/) is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near realtime.Data visualization tools can help you make sense of your BigQuery data and help you analyze the data interactively. You can use visualization tools to help you identify trends, respond to them, and make predictions using your data. In this tutorial, you use the BigQuery Python client library and pandas in a Jupyter notebook to visualize data in the BigQuery natality sample table. Using Jupyter magics to query BigQuery dataThe BigQuery Python client library provides a magic command that allows you to run queries with minimal code.The BigQuery client library provides a cell magic, `%%bigquery`. The `%%bigquery` magic runs a SQL query and returns the results as a pandas `DataFrame`. ###Code %%bigquery SELECT * FROM `bigquery-public-data.samples.natality` LIMIT 10 ###Output Query complete after 0.00s: 100%|██████████| 2/2 [00:00<00:00, 459.55query/s] Downloading: 100%|██████████| 10/10 [00:03<00:00, 3.23rows/s] ###Markdown The following cell executes a query of the BigQuery natality public dataset and returns the total births by year. ###Code %%bigquery SELECT source_year AS year, COUNT(is_male) AS birth_count FROM `bigquery-public-data.samples.natality` GROUP BY year ORDER BY year ASC ###Output _____no_output_____ ###Markdown The following command to runs the same query, but this time the results are saved to a variable. The variable name, `total_births`, is given as an argument to the `%%bigquery`. The results can then be used for further analysis and visualization. ###Code %%bigquery total_births SELECT source_year AS year, COUNT(is_male) AS birth_count FROM `bigquery-public-data.samples.natality` GROUP BY year ORDER BY year ASC ###Output _____no_output_____ ###Markdown The next cell uses the pandas `DataFrame.plot` method to visualize the query results as a bar chart. See the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/visualization.html) to learn more about data visualization with pandas. ###Code total_births.plot(kind='bar', x='year', y='birth_count', figsize=(15, 7)); ###Output _____no_output_____ ###Markdown Run the following query to retrieve the number of births by weekday. Because the `wday` (weekday) field allows null values, the query excludes records where wday is null. ###Code %%bigquery births_by_weekday SELECT wday, SUM(CASE WHEN is_male THEN 1 ELSE 0 END) AS male_births, SUM(CASE WHEN is_male THEN 0 ELSE 1 END) AS female_births FROM `bigquery-public-data.samples.natality` WHERE wday IS NOT NULL GROUP BY wday ORDER BY wday ASC ###Output _____no_output_____ ###Markdown Visualize the query results using a line chart. ###Code births_by_weekday.plot(x='wday'); ###Output _____no_output_____ ###Markdown Using Python to query BigQuery dataMagic commands allow you to use minimal syntax to interact with BigQuery. Behind the scenes, `%%bigquery` uses the BigQuery Python client library to run the given query, convert the results to a pandas `Dataframe`, optionally save the results to a variable, and finally display the results. Using the BigQuery Python client library directly instead of through magic commands gives you more control over your queries and allows for more complex configurations. 
The library's integrations with pandas enable you to combine the power of declarative SQL with imperative code (Python) to perform interesting data analysis, visualization, and transformation tasks.To use the BigQuery Python client library, start by importing the library and initializing a client. The BigQuery client is used to send and receive messages from the BigQuery API. ###Code from google.cloud import bigquery client = bigquery.Client() ###Output _____no_output_____ ###Markdown Use the [`Client.query`](https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.client.Client.htmlgoogle.cloud.bigquery.client.Client.query) method to run a query. Execute the following cell to run a query to retrieve the annual count of plural births by plurality (2 for twins, 3 for triplets, etc.). ###Code sql = """ SELECT plurality, COUNT(1) AS count, year FROM `bigquery-public-data.samples.natality` WHERE NOT IS_NAN(plurality) AND plurality > 1 GROUP BY plurality, year ORDER BY count DESC """ df = client.query(sql).to_dataframe() df.head() ###Output _____no_output_____ ###Markdown To chart the query results in your `DataFrame`, run the following cell to pivot the data and create a stacked bar chart of the count of plural births over time. ###Code pivot_table = df.pivot(index='year', columns='plurality', values='count') pivot_table.plot(kind='bar', stacked=True, figsize=(15, 7)); ###Output _____no_output_____ ###Markdown Run the following query to retrieve the count of births by the number of gestation weeks. ###Code sql = """ SELECT gestation_weeks, COUNT(1) AS count FROM `bigquery-public-data.samples.natality` WHERE NOT IS_NAN(gestation_weeks) AND gestation_weeks <> 99 GROUP BY gestation_weeks ORDER BY gestation_weeks """ df = client.query(sql).to_dataframe() ###Output _____no_output_____ ###Markdown Finally, chart the query results in your `DataFrame`. ###Code ax = df.plot(kind='bar', x='gestation_weeks', y='count', figsize=(15,7)) ax.set_title('Count of Births by Gestation Weeks') ax.set_xlabel('Gestation Weeks') ax.set_ylabel('Count'); ###Output _____no_output_____
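###Markdown When the same query needs to run with different filter values, the BigQuery Python client also supports parameterized queries, which avoids string-formatting values into SQL. A minimal sketch reusing the `client` created above (the 35-week cutoff and the parameter name `min_weeks` are just illustrative): ###Code
sql = """
    SELECT gestation_weeks, COUNT(1) AS count
    FROM `bigquery-public-data.samples.natality`
    WHERE NOT IS_NAN(gestation_weeks)
        AND gestation_weeks <> 99
        AND gestation_weeks >= @min_weeks
    GROUP BY gestation_weeks
    ORDER BY gestation_weeks
"""
job_config = bigquery.QueryJobConfig()
job_config.query_parameters = [
    bigquery.ScalarQueryParameter("min_weeks", "INT64", 35),
]
df = client.query(sql, job_config=job_config).to_dataframe()
df.head()
###Output _____no_output_____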
CNN_invariance/Scale Invariance in CNNs.ipynb
###Markdown Arun Das Research Fellow Secure AI and Autonomy Laboratory University of Texas at San Antonio Scale Invariance in CNNs Over the course of history, convolution operation has helped accelerate science and signal processing in a variety of ways. With the advent of deep learning, computer vision researchers began exploring the use of 2D and 3D convolutional neural networks (CNNs) directly on 2D or 3D images to reduce the parameters involved with fully connected deep neural networks. With large amount of data and computation at their disposal, supervised CNN learning algorithms tackled problems which were almost impossible to generalize in the past decade.CNNs are impressive feature extractors, extracting features heirarchically from the training images during the learning process. First few layers close to the input data learns kernels related to high contrast points, edges, and lines. Layers further in the network learns to map these primitive kernels together to understand countours and other shapes. This heirarchical way of learning by representation enables complex pattern recognition that was impossible using traditional signal processing and machine learning algorithms.Invariances in input data distribution used for training is mapped in to the CNN as weights, which are infact learned by the kernels. For example, if a face classifier is trained on images with face cropped, aligned, and centered in the center of the image, the CNN will learn to map the input pixels accordingly, and generalize on providing impressive results on faces which are preprocessed and centered properly. However, the interesting question arises on the robustness of CNNs on slighly invariant input images which are from outside the data distribution. This is where our discussion on invariance starts - and in my opinion, the many questions we ask are translated from this bigger topic of robustness and safe artificial intelligence (AI).For the scope of this study, we specifically focus on scale invariance issues of CNNs. Import Libraries ###Code from __future__ import print_function import argparse import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from torch.autograd import Variable from torchsummary import summary import numpy as np import matplotlib.pyplot as plt from torchvision.utils import make_grid import math import seaborn as sns import pandas as pd from PIL import Image #from skimage.transform.radon_transform import fft from scipy import fftpack %matplotlib inline ###Output _____no_output_____ ###Markdown Define the hyperparameters We define the hyperparameters as keys in an `args` dictionary. This way, it is easy to add and remove hyperparameters, and also to use them. ###Code args={} kwargs={} args['batch_size']=1000 args['test_batch_size']=1000 args['epochs']=20 # The number of Epochs is the number of times you go # through the full dataset. args['lr']=0.01 # Learning rate is how fast it will decend. args['momentum']=0.5 # SGD momentum (default: 0.5) Momentum is a moving # average of our gradients (helps to keep direction). args['seed']=1 # random seed args['log_interval']=40 args['cuda']=True # False if you don't have a CUDA w/ NVIDIA GPU available. 
args['train_now']=False ###Output _____no_output_____ ###Markdown Define custom scaling function ###Code class CustomScaling(object): """Rotate image by a fixed angle which is ready for tranform.Compose() """ def __init__(self, scale, angle=0, translate=[0,0], shear=0): self.scale = scale self.angle = angle self.translate = translate self.shear = shear def __call__(self, img): return transforms.ToTensor()( transforms.functional.affine( transforms.ToPILImage()(img), self.angle, self.translate, self.scale, self.shear)) ###Output _____no_output_____ ###Markdown Define data loaders Scale to 45% of the image ###Code class LeNet5(nn.Module): def __init__(self): super(LeNet5, self).__init__() # Convolution (In LeNet-5, 32x32 images are given # as input. Hence padding of 2 is done below) self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2) self.max_pool_1 = nn.MaxPool2d(kernel_size=2, stride=2) self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1, padding=2) self.max_pool_2 = nn.MaxPool2d(kernel_size=2, stride=2) self.conv3 = nn.Conv2d(in_channels=16, out_channels=120, kernel_size=5, stride=1, padding=2) self.fc1 = nn.Linear(7*7*120, 120) # convert matrix with 16*5*5 (= 400) features to a matrix of 120 features (columns) self.fc2 = nn.Linear(120, 84) # convert matrix with 120 features to a matrix of 84 features (columns) self.fc3 = nn.Linear(84, 10) # convert matrix with 84 features to a matrix of 10 features (columns) def forward(self, x): # convolve, then perform ReLU non-linearity x = F.relu(self.conv1(x)) # max-pooling with 2x2 grid x = self.max_pool_1(x) # Conv2 + ReLU x = F.relu(self.conv2(x)) # max-pooling with 2x2 grid x = self.max_pool_2(x) # Conv3 + ReLU x = F.relu(self.conv3(x)) x = x.view(-1, 7*7*120) # FC-1, then perform ReLU non-linearity x = F.relu(self.fc1(x)) # FC-2, then perform ReLU non-linearity x = F.relu(self.fc2(x)) # FC-3 x = self.fc3(x) return F.log_softmax(x, dim=1) model = LeNet5() if args['cuda']: model.cuda() summary(model, (1, 28, 28)) scale = 0.45 # Specifies the scaling factor of images. # Define the train and test loader # Here we are adding our CustomRotation function to the transformations train_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['batch_size'], shuffle=True, **kwargs) test_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=False, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['test_batch_size'], shuffle=False, **kwargs) ## try out stuff # transforms.functional.affine(img=transforms.functional.to_pil_image(example_data[0]), # angle=0, translate=(0,0), # scale=0.4, shear=0) examples = enumerate(test_loader) batch_idx, (example_data, example_targets) = next(examples) print("Predicted Class: ", np.argmax(model.forward(example_data[0].unsqueeze_(0).cuda()).cpu().detach().numpy())) plt.imshow(example_data[0].cuda().cpu().detach().numpy()[0], cmap='gray') # transforms.functional.to_pil_image(example_data[0]) def train(epoch): model.train() for batch_idx, (data, target) in enumerate(train_loader): if args['cuda']: data, target = data.cuda(), target.cuda() #Variables in Pytorch are differenciable. data, target = Variable(data), Variable(target) #This will zero out the gradients for this batch. 
optimizer.zero_grad() output = model(data) # Calculate the loss The negative log likelihood loss. # It is useful to train a classification problem with C classes. loss = F.nll_loss(output, target) #dloss/dx for every Variable loss.backward() #to do a one-step update on our parameter. optimizer.step() #Print out the loss periodically. if batch_idx % args['log_interval'] == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.data)) def test(): model.eval() test_loss = 0 correct = 0 for data, target in test_loader: if args['cuda']: data, target = data.cuda(), target.cuda() with torch.no_grad(): # volatile was removed and now # has no effect. Use `with torch.no_grad():` instead. data= Variable(data) target = Variable(target) output = model(data) # sum up batch loss # size_average and reduce args will # be deprecated, please use reduction='sum' instead. test_loss += F.nll_loss(output, target, reduction='sum').data # get the index of the max log-probability pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).long().cpu().sum() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) ###Output _____no_output_____ ###Markdown Train the CNN model on normal MNIST images We'll use stocastic gradient descend (SGD) as the optimizer and use momentum to lead the way. The hyperparameters are passed using `args` dictionary and the required key. ###Code # optimizer = optim.SGD(model.parameters(), # lr=args['lr'], momentum=args['momentum']) optimizer = optim.Adam(model.parameters(), lr=args['lr']) # Training loop. # Change `args['log_interval']` if you want to change logging behavior. # We test the network in each epoch. # Setting the bool `args['train_now']` to not run training all the time. # We'll save the weights and use the saved weights instead of # training the network everytime we load the jupyter notebook. args['train_now'] = False args['epochs'] = 30 if args['train_now']: for epoch in range(1, args['epochs'] + 1): train(epoch) test() torch.save(model.state_dict(), 'models/lenet5_normal_mnist.pytrh') else: if args['cuda']: device = torch.device("cuda") model.load_state_dict(torch.load('models/lenet5_normal_mnist.pytrh')) model.to(device) else: model.load_state_dict(torch.load('models/lenet5_normal_mnist.pytrh')) model.eval() ###Output _____no_output_____ ###Markdown Kernel weight visualizations Inorder to understand how the network learns, it is not only important to log the training and testing accuracies but also to visualize what the network learns. As we get over the deep learning hype, we should invest time in learning the intricate features which makes these networks what they are. As a first step, we shall write a custom visualization function to plot the kernels and activations of the CNN - whatever the size. This is a key piece of code that will drive us forward and unfortunately isn't available in Pytorch or internet :) So custom indeed. ###Code def custom_boxplot(kernels, path=None, cols=None, size=None, verbose=False): """Statistical analysis using BoxPlot for weight and activation matrices learned during the optimization process. Works for any size of kernels. Arguments ========= kernels: Weight or activation matrix. Must be a high dimensional Numpy array. Tensors will not work. 
path: Path to save the visualizations. cols: Number of columns (doesn't work completely yet.) size: Tuple input for size. For example: size=(5,5) verbose: Print information about the input. Example ======= kernels = model.conv1.weight.cpu().detach().clone() kernels = kernels - kernels.min() kernels = kernels / kernels.max() custom_boxplot(kernels, 'results/conv1_weights_boxplot.png', 5, size=(25,5)) """ def set_size(w,h, ax=None): """ w, h: width, height in inches """ if not ax: ax=plt.gca() l = ax.figure.subplotpars.left r = ax.figure.subplotpars.right t = ax.figure.subplotpars.top b = ax.figure.subplotpars.bottom figw = float(w)/(r-l) figh = float(h)/(t-b) ax.figure.set_size_inches(figw, figh) kernelshape = kernels.shape if verbose: print("Shape of input kernel: ", kernelshape) if cols==None: cols = 6 rows = np.int(np.ceil(kernelshape[0]/cols)) pos = range(1, kernelshape[0]+1) k=0 fig = plt.figure(1) fig.tight_layout() for i in range(kernelshape[0]): ax = fig.add_subplot(rows,cols,pos[k]) w_vol = np.reshape(kernels[k].cpu().detach().clone().numpy(), (kernelshape[1], kernelshape[2]*kernelshape[3])) w_vol_df = pd.DataFrame(w_vol.T) if verbose: msd = zip(w_vol_df.mean(), w_vol_df.std()) for i, values in enumerate(msd): print("For kernel Volume %d" %i) print("Mean+-SD: %0.2f+-%0.2f" %values) print('----------------------') w_vol_df.boxplot(ax=ax) title_boxplot = 'Kernel ' + str(i) plt.title( title_boxplot ) k+=1 if k==kernelshape: break if size: size_h,size_w = size set_size(size_h,size_w,ax) if path: plt.savefig(path, dpi=100) plt.show() def custom_viz(kernels, path=None, cols=None, size=None, verbose=False, axis=False, cmap='gray'): """Visualize weight and activation matrices learned during the optimization process. Works for any size of kernels. Arguments ========= kernels: Weight or activation matrix. Must be a high dimensional Numpy array. Tensors will not work. path: Path to save the visualizations. cols: Number of columns (doesn't work completely yet.) size: Tuple input for size. For example: size=(5,5) verbose: Print information about the input. axis: Plot axis for images. cmap: Color map for output images. Example ======= kernels = model.conv1.weight.cpu().detach().clone() kernels = kernels - kernels.min() kernels = kernels / kernels.max() custom_viz(kernels, 'results/conv1_weights.png', 5) """ def set_size(w,h, ax=None): """ w, h: width, height in inches """ if not ax: ax=plt.gca() l = ax.figure.subplotpars.left r = ax.figure.subplotpars.right t = ax.figure.subplotpars.top b = ax.figure.subplotpars.bottom figw = float(w)/(r-l) figh = float(h)/(t-b) ax.figure.set_size_inches(figw, figh) N = kernels.shape[0] C = kernels.shape[1] total_cols = N*C pos = range(1,total_cols + 1) if verbose: print("Shape of input: ", kernels.shape) if cols==None: req_cols = C num_rows = N elif cols: req_cols = cols # Account for more rows while diving total cols # with requested number of cols in the figure # Hence, using np.ceil to get the largest int # from the quotient of division. num_rows = int(np.ceil(total_cols/req_cols)) elif C>1: # Check for 1D arrays and such. Mostly not needed. 
req_cols = C fig = plt.figure(1) fig.tight_layout() k=0 for i in range(kernels.shape[0]): for j in range(kernels.shape[1]): img = kernels[i][j] ax = fig.add_subplot(num_rows,req_cols,pos[k]) if cmap: ax.imshow(img, cmap=cmap) else: ax.imshow(img) if axis: plt.axis('on') elif axis==False: plt.axis('off') k = k+1 if size: size_h,size_w = size set_size(size_h,size_w,ax) if path: plt.savefig(path, dpi=100) plt.show() examples = enumerate(test_loader) batch_idx, (example_data, example_targets) = next(examples) print("Predicted Class: ", np.argmax(model.forward(example_data[0].unsqueeze_(0).cuda()).cpu().detach().numpy())) ###Output Predicted Class: 7 ###Markdown example_data[0].unsqueeze_(0) ###Code class SuperLeNet5(nn.Module): def __init__(self): super(SuperLeNet5, self).__init__() # Convolution (In LeNet-5, 32x32 images are given # as input. Hence padding of 2 is done below) self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=9, stride=1, padding=2) self.max_pool_1 = nn.MaxPool2d(kernel_size=2, stride=2) self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=7, stride=1, padding=2) self.max_pool_2 = nn.MaxPool2d(kernel_size=2, stride=2) self.conv3 = nn.Conv2d(in_channels=16, out_channels=120, kernel_size=5, stride=1, padding=2) # conv for 2nd branch self.b2conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2) self.fc1 = nn.Linear(5*5*120, 120) # convert matrix with 16*5*5 (= 400) features to a matrix of 120 features (columns) self.fc2 = nn.Linear(120, 84) # convert matrix with 120 features to a matrix of 84 features (columns) self.fc3 = nn.Linear(84, 10) # convert matrix with 84 features to a matrix of 10 features (columns) def forward(self, x): # convolve, then perform ReLU non-linearity x = F.relu(self.conv1(x)) # max-pooling with 2x2 grid x = self.max_pool_1(x) # Conv2 + ReLU x = F.relu(self.conv2(x)) # max-pooling with 2x2 grid x = self.max_pool_2(x) # Conv3 + ReLU x = F.relu(self.conv3(x)) x = x.view(-1, 5*5*120) # FC-1, then perform ReLU non-linearity x = F.relu(self.fc1(x)) # FC-2, then perform ReLU non-linearity x = F.relu(self.fc2(x)) # FC-3 x = self.fc3(x) return F.log_softmax(x, dim=1) model_super = SuperLeNet5() if args['cuda']: model_super.cuda() summary(model_super, (1, 28, 28)) scale = 1 # Specifies the scaling factor of images. # Define the train and test loader # Here we are adding our CustomRotation function to the transformations train_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['batch_size'], shuffle=True, **kwargs) test_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=False, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['test_batch_size'], shuffle=False, **kwargs) def train(epoch): model_super.train() for batch_idx, (data, target) in enumerate(train_loader): if args['cuda']: data, target = data.cuda(), target.cuda() #Variables in Pytorch are differenciable. data, target = Variable(data), Variable(target) #This will zero out the gradients for this batch. optimizer.zero_grad() output = model_super(data) # Calculate the loss The negative log likelihood loss. # It is useful to train a classification problem with C classes. 
loss = F.nll_loss(output, target) #dloss/dx for every Variable loss.backward() #to do a one-step update on our parameter. optimizer.step() #Print out the loss periodically. if batch_idx % args['log_interval'] == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.data)) def test(): model_super.eval() test_loss = 0 correct = 0 for data, target in test_loader: if args['cuda']: data, target = data.cuda(), target.cuda() with torch.no_grad(): # volatile was removed and now # has no effect. Use `with torch.no_grad():` instead. data= Variable(data) target = Variable(target) output = model_super(data) # sum up batch loss # size_average and reduce args will # be deprecated, please use reduction='sum' instead. test_loss += F.nll_loss(output, target, reduction='sum').data # get the index of the max log-probability pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).long().cpu().sum() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) # optimizer = optim.SGD(model.parameters(), # lr=args['lr'], momentum=args['momentum']) optimizer = optim.Adam(model_super.parameters(), lr=args['lr']) # Training loop. # Change `args['log_interval']` if you want to change logging behavior. # We test the network in each epoch. # Setting the bool `args['train_now']` to not run training all the time. # We'll save the weights and use the saved weights instead of # training the network everytime we load the jupyter notebook. args['train_now'] = True if args['train_now']: for epoch in range(1, args['epochs'] + 1): train(epoch) test() torch.save(model_super.state_dict(), 'models/superlenet5_normal_mnist.pytrh') else: if args['cuda']: device = torch.device("cuda") model_super.load_state_dict(torch.load('models/superlenet5_normal_mnist.pytrh')) model_super.to(device) else: model_super.load_state_dict(torch.load('models/superlenet5_normal_mnist.pytrh')) model_super.eval() ###Output Train Epoch: 1 [0/60000 (0%)] Loss: 2.304572 Train Epoch: 1 [40000/60000 (67%)] Loss: 0.313608 Test set: Average loss: 0.1684, Accuracy: 9461/10000 (94%) Train Epoch: 2 [0/60000 (0%)] Loss: 0.171967 Train Epoch: 2 [40000/60000 (67%)] Loss: 0.156084 Test set: Average loss: 0.1269, Accuracy: 9606/10000 (96%) Train Epoch: 3 [0/60000 (0%)] Loss: 0.098832 Train Epoch: 3 [40000/60000 (67%)] Loss: 0.089912 Test set: Average loss: 0.0934, Accuracy: 9720/10000 (97%) Train Epoch: 4 [0/60000 (0%)] Loss: 0.070504 Train Epoch: 4 [40000/60000 (67%)] Loss: 0.067379 Test set: Average loss: 0.0802, Accuracy: 9750/10000 (97%) Train Epoch: 5 [0/60000 (0%)] Loss: 0.068391 Train Epoch: 5 [40000/60000 (67%)] Loss: 0.082421 Test set: Average loss: 0.1051, Accuracy: 9698/10000 (96%) Train Epoch: 6 [0/60000 (0%)] Loss: 0.047800 Train Epoch: 6 [40000/60000 (67%)] Loss: 0.051924 Test set: Average loss: 0.0844, Accuracy: 9753/10000 (97%) Train Epoch: 7 [0/60000 (0%)] Loss: 0.061874 Train Epoch: 7 [40000/60000 (67%)] Loss: 0.056092 Test set: Average loss: 0.0706, Accuracy: 9789/10000 (97%) Train Epoch: 8 [0/60000 (0%)] Loss: 0.045849 Train Epoch: 8 [40000/60000 (67%)] Loss: 0.047674 Test set: Average loss: 0.1013, Accuracy: 9719/10000 (97%) Train Epoch: 9 [0/60000 (0%)] Loss: 0.050103 Train Epoch: 9 [40000/60000 (67%)] Loss: 0.047874 Test set: Average loss: 0.0781, Accuracy: 
9789/10000 (97%) Train Epoch: 10 [0/60000 (0%)] Loss: 0.056856 Train Epoch: 10 [40000/60000 (67%)] Loss: 0.029147 Test set: Average loss: 0.0804, Accuracy: 9798/10000 (97%) Train Epoch: 11 [0/60000 (0%)] Loss: 0.038802 Train Epoch: 11 [40000/60000 (67%)] Loss: 0.039156 Test set: Average loss: 0.0679, Accuracy: 9818/10000 (98%) Train Epoch: 12 [0/60000 (0%)] Loss: 0.030280 Train Epoch: 12 [40000/60000 (67%)] Loss: 0.044896 Test set: Average loss: 0.0768, Accuracy: 9791/10000 (97%) Train Epoch: 13 [0/60000 (0%)] Loss: 0.041343 Train Epoch: 13 [40000/60000 (67%)] Loss: 0.039112 Test set: Average loss: 0.0938, Accuracy: 9729/10000 (97%) Train Epoch: 14 [0/60000 (0%)] Loss: 0.044047 Train Epoch: 14 [40000/60000 (67%)] Loss: 0.041777 Test set: Average loss: 0.1072, Accuracy: 9732/10000 (97%) Train Epoch: 15 [0/60000 (0%)] Loss: 0.049226 Train Epoch: 15 [40000/60000 (67%)] Loss: 0.022261 Test set: Average loss: 0.0924, Accuracy: 9775/10000 (97%) Train Epoch: 16 [0/60000 (0%)] Loss: 0.043483 Train Epoch: 16 [40000/60000 (67%)] Loss: 0.059081 Test set: Average loss: 0.0804, Accuracy: 9793/10000 (97%) Train Epoch: 17 [0/60000 (0%)] Loss: 0.042040 Train Epoch: 17 [40000/60000 (67%)] Loss: 0.016627 Test set: Average loss: 0.0747, Accuracy: 9812/10000 (98%) Train Epoch: 18 [0/60000 (0%)] Loss: 0.018499 Train Epoch: 18 [40000/60000 (67%)] Loss: 0.029946 Test set: Average loss: 0.0951, Accuracy: 9783/10000 (97%) Train Epoch: 19 [0/60000 (0%)] Loss: 0.038755 Train Epoch: 19 [40000/60000 (67%)] Loss: 0.015826 Test set: Average loss: 0.0865, Accuracy: 9797/10000 (97%) Train Epoch: 20 [0/60000 (0%)] Loss: 0.032997 Train Epoch: 20 [40000/60000 (67%)] Loss: 0.034328 Test set: Average loss: 0.0834, Accuracy: 9797/10000 (97%) Train Epoch: 21 [0/60000 (0%)] Loss: 0.018436 Train Epoch: 21 [40000/60000 (67%)] Loss: 0.024998 Test set: Average loss: 0.0827, Accuracy: 9801/10000 (98%) Train Epoch: 22 [0/60000 (0%)] Loss: 0.030537 Train Epoch: 22 [40000/60000 (67%)] Loss: 0.021434 Test set: Average loss: 0.0979, Accuracy: 9782/10000 (97%) Train Epoch: 23 [0/60000 (0%)] Loss: 0.026683 Train Epoch: 23 [40000/60000 (67%)] Loss: 0.013146 Test set: Average loss: 0.0833, Accuracy: 9809/10000 (98%) Train Epoch: 24 [0/60000 (0%)] Loss: 0.024426 Train Epoch: 24 [40000/60000 (67%)] Loss: 0.066471 Test set: Average loss: 0.1074, Accuracy: 9760/10000 (97%) Train Epoch: 25 [0/60000 (0%)] Loss: 0.028854 Train Epoch: 25 [40000/60000 (67%)] Loss: 0.030368 Test set: Average loss: 0.0942, Accuracy: 9788/10000 (97%) Train Epoch: 26 [0/60000 (0%)] Loss: 0.026185 Train Epoch: 26 [40000/60000 (67%)] Loss: 0.030596 Test set: Average loss: 0.0871, Accuracy: 9802/10000 (98%) Train Epoch: 27 [0/60000 (0%)] Loss: 0.048548 Train Epoch: 27 [40000/60000 (67%)] Loss: 0.050810 Test set: Average loss: 0.1035, Accuracy: 9755/10000 (97%) Train Epoch: 28 [0/60000 (0%)] Loss: 0.037022 Train Epoch: 28 [40000/60000 (67%)] Loss: 0.052682 Test set: Average loss: 0.1054, Accuracy: 9773/10000 (97%) Train Epoch: 29 [0/60000 (0%)] Loss: 0.013192 Train Epoch: 29 [40000/60000 (67%)] Loss: 0.021835 Test set: Average loss: 0.0926, Accuracy: 9808/10000 (98%) Train Epoch: 30 [0/60000 (0%)] Loss: 0.016078 Train Epoch: 30 [40000/60000 (67%)] Loss: 0.017413 Test set: Average loss: 0.1095, Accuracy: 9795/10000 (97%) ###Markdown Scale the image to 29% ###Code scale = 0.29 # Specifies the scaling factor of images. 
# Define the train and test loader # Here we are adding our CustomRotation function to the transformations train_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['batch_size'], shuffle=True, **kwargs) test_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=False, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['test_batch_size'], shuffle=False, **kwargs) examples = enumerate(test_loader) batch_idx, (example_data, example_targets) = next(examples) print("Predicted Class: ", np.argmax(model_super.forward(example_data[0].unsqueeze_(0).cuda()).cpu().detach().numpy())) plt.imshow(example_data[0].cuda().cpu().detach().numpy()[0], cmap='gray') # transforms.functional.to_pil_image(example_data[0]) ###Output Predicted Class: 7 ###Markdown Duper Model ###Code class DuperLeNet5(nn.Module): def __init__(self): super(DuperLeNet5, self).__init__() # Convolution (In LeNet-5, 32x32 images are given # as input. Hence padding of 2 is done below) self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=3, stride=1, padding=2) self.max_pool_1 = nn.MaxPool2d(kernel_size=2, stride=2) self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=4, stride=1, padding=2) self.max_pool_2 = nn.MaxPool2d(kernel_size=2, stride=2) self.conv3 = nn.Conv2d(in_channels=16, out_channels=120, kernel_size=3, stride=1, padding=2) self.fc1 = nn.Linear(10*10*120, 120) # convert matrix with 16*5*5 (= 400) features to a matrix of 120 features (columns) self.fc2 = nn.Linear(120, 84) # convert matrix with 120 features to a matrix of 84 features (columns) self.fc3 = nn.Linear(84, 10) # convert matrix with 84 features to a matrix of 10 features (columns) def forward(self, x): # convolve, then perform ReLU non-linearity x = F.relu(self.conv1(x)) # max-pooling with 2x2 grid x = self.max_pool_1(x) # Conv2 + ReLU x = F.relu(self.conv2(x)) # max-pooling with 2x2 grid x = self.max_pool_2(x) # Conv3 + ReLU x = F.relu(self.conv3(x)) x = x.view(-1, 10*10*120) # FC-1, then perform ReLU non-linearity x = F.relu(self.fc1(x)) # FC-2, then perform ReLU non-linearity x = F.relu(self.fc2(x)) # FC-3 x = self.fc3(x) return F.log_softmax(x, dim=1) model_duper = DuperLeNet5() if args['cuda']: model_duper.cuda() summary(model_duper, (1, 28, 28)) scale = 1 # Specifies the scaling factor of images. # Define the train and test loader # Here we are adding our CustomRotation function to the transformations train_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['batch_size'], shuffle=True, **kwargs) test_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=False, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['test_batch_size'], shuffle=False, **kwargs) args['epochs']=10 def train(epoch): model_duper.train() for batch_idx, (data, target) in enumerate(train_loader): if args['cuda']: data, target = data.cuda(), target.cuda() #Variables in Pytorch are differenciable. data, target = Variable(data), Variable(target) #This will zero out the gradients for this batch. 
optimizer.zero_grad() output = model_duper(data) # Calculate the loss The negative log likelihood loss. # It is useful to train a classification problem with C classes. loss = F.nll_loss(output, target) #dloss/dx for every Variable loss.backward() #to do a one-step update on our parameter. optimizer.step() #Print out the loss periodically. if batch_idx % args['log_interval'] == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.data)) def test(): model_duper.eval() test_loss = 0 correct = 0 for data, target in test_loader: if args['cuda']: data, target = data.cuda(), target.cuda() with torch.no_grad(): # volatile was removed and now # has no effect. Use `with torch.no_grad():` instead. data= Variable(data) target = Variable(target) output = model_duper(data) # sum up batch loss # size_average and reduce args will # be deprecated, please use reduction='sum' instead. test_loss += F.nll_loss(output, target, reduction='sum').data # get the index of the max log-probability pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).long().cpu().sum() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) # optimizer = optim.SGD(model.parameters(), # lr=args['lr'], momentum=args['momentum']) optimizer = optim.Adam(model_duper.parameters(), lr=args['lr']) # Training loop. # Change `args['log_interval']` if you want to change logging behavior. # We test the network in each epoch. # Setting the bool `args['train_now']` to not run training all the time. # We'll save the weights and use the saved weights instead of # training the network everytime we load the jupyter notebook. args['train_now'] = True if args['train_now']: for epoch in range(1, args['epochs'] + 1): train(epoch) test() torch.save(model_duper.state_dict(), 'models/duperlenet5_normal_mnist.pytrh') else: if args['cuda']: device = torch.device("cuda") model_duper.load_state_dict(torch.load('models/duperlenet5_normal_mnist.pytrh')) model_duper.to(device) else: model_duper.load_state_dict(torch.load('models/duperlenet5_normal_mnist.pytrh')) model_duper.eval() scale = 0.2 # Specifies the scaling factor of images. # Define the train and test loader # Here we are adding our CustomRotation function to the transformations train_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['batch_size'], shuffle=True, **kwargs) test_loader = torch.utils.data.DataLoader( datasets.MNIST('data/', train=False, transform=transforms.Compose([ transforms.ToTensor(), CustomScaling(scale), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args['test_batch_size'], shuffle=False, **kwargs) examples = enumerate(test_loader) batch_idx, (example_data, example_targets) = next(examples) print("Predicted Class: ", np.argmax(model_duper.forward(example_data[0].unsqueeze_(0).cuda()).cpu().detach().numpy())) plt.imshow(example_data[0].cuda().cpu().detach().numpy()[0], cmap='gray') # transforms.functional.to_pil_image(example_data[0]) ###Output Predicted Class: 5
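###Markdown Rather than eyeballing single predictions, the effect of scaling can be quantified by rebuilding the test loader at several scale factors and recording each trained model's accuracy. A minimal sketch, reusing `CustomScaling`, `args`, `kwargs` and the models defined above (the helper name `accuracy_at_scale` and the list of scale factors are just for illustration): ###Code
def accuracy_at_scale(model, scale):
    # rebuild the test loader with a different CustomScaling factor
    loader = torch.utils.data.DataLoader(
        datasets.MNIST('data/', train=False,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           CustomScaling(scale),
                           transforms.Normalize((0.1307,), (0.3081,))])),
        batch_size=args['test_batch_size'], shuffle=False, **kwargs)
    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in loader:
            if args['cuda']:
                data, target = data.cuda(), target.cuda()
            pred = model(data).data.max(1, keepdim=True)[1]
            correct += pred.eq(target.data.view_as(pred)).long().cpu().sum().item()
    return 100. * correct / len(loader.dataset)

for s in [1.0, 0.75, 0.5, 0.29, 0.2]:
    print('scale %.2f -> accuracy %.2f%%' % (s, accuracy_at_scale(model_duper, s)))
###Output _____no_output_____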
projects/transformerdesigner/power/TransformerDesigner-Power.ipynb
###Markdown Transformer Designer - PowerReferences* [Transformer Design and Manufacturing Manual - Wolpert](http://www.vintagewindings.com/gen%20pop/8299543VW8335/TransDesign%201/Wolpert-PowerTransformers.pdf)* [Engineering a Transformer - Stancor](http://www.vintagewindings.com/gen%20pop/8299543VW8335/TransDesign%202/TransformerEngineering%20Stancor-1945.pdf)* [Electronic Transformers and Circuits - Lee](www.tubebooks.org/books/lee_1955_electronic_transformers_and_circuits.pdf)* [Flux Lines to Tesla](http://www.translatorscafe.com/cafe/EN/units-converter/magnetic-flux-density/10-1/line%2Finch%C2%B2-tesla/) Using Wolpert Equations without losses$$\frac{E_s}{E_p} = \frac{N_s}{N_p}$$$$\frac{I_s}{I_p} = \frac{N_P}{N_S}$$$E_s$ = Secondary voltage$E_p$ = Primary voltage$I_s$ = Secondary current$I_p$ = Primary voltage$N_s$ = Secondary turns$N_p$ = Primary turns$$VA = E_p * I_p = E_s * I_s$$ Equations with losses$$VA = E_s * I_s$$$$I_p = \frac{VA * efficiency}{E_p}$$$$Area_{effective} = Area_{core} * stackingFactor$$$Area_{effective}$ = core area (tongue width * stack height)stackingFactor = use 0.92 for 1x1 interleave and 0.95 for butt stack$$N_p = \frac{E_p * 10^8}{4.44 * B * A * F}$$4.44 is a constant for sine wave operationB = flux density in $\frac{lines}{inch^2}$A = $Area_{effective}$F = line frequency$$N_s = \frac{N_p}{E_p} * lossFactor * E_s$$lossFactor = factor to adjust turns to compensate for losses$$T_{rise} = \frac{totalLoss}{0.1*{(\frac{weight}{1.073})}^\frac{2}{3}}$$$$\%_{regulation} = \frac{E_{noLoad} - E_{fullLoad}}{E_{fullLoad}} * 100\%$$ Calculation Steps* Sum desired secondary VAs, voltage * current* Calculate primary current including efficiency* Calculate effective core area using stacking factor* Calculate primary turns* Calculate secondary turns* Calculate primary and secondary wire diameters* Calculate primary and secondary layer count* Loop * Calculate all windings mean path length * Calculate all windings resistance * Calculate all winding voltage drops * Find secondary turn count that minimizes abs($V_{desired} - V_{out}$)* Calculate winding weight* Calculate transformer weight including extras: lead wire, bells, brackets* Calculate temperature rise above ambient, ($T_{ambient} + T_{rise}$) < 105C, Wolpert p25 Additional Considerations* Choice of flux density affects primary turn count, which results in an integer * Voltage ratios are a fractional number of 2 integer divisions * By changing the flux density, you can minutely alter the secondary output voltages * The fluxFind() method scans flux densities and returns density with minimal output error * While fluxFind() finds optimal density, you may opt for lower density to avoid saturation when using recycled, or unknown, laminations* Lowering circularMilsPerAmp results in smaller gauge wire with higher operating temperature. The default value of 800 is fairly conservative. If you're design doesn't quite fit, lower circularMilsPerAmp and see what happens to bobbin fill percentage and temperature rise * 29M6 GOSS laminations have coreLoss of 0.66 watts/lb, see Wolpert P24. Foster rep loosely stated their non-oriented lamination loss is 5W/lb at 13kGauss. You can see this in data sheets from AK Steel of ATI. In summary, non-oriented steel will have higher core losses hence higher temperature rise. However, they also have lower inrush current. If you're recycling laminations, chances are they are not oriented. You also don't know saturation, so choose a low flux density, which will increase turns and fill your bobbin. 
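###Markdown Before handing everything to the Transformer class below, the turns equations can be checked by hand. A minimal sketch following the formulas above, with a 115 V primary and a 12.6 V secondary (matching the filament example further down); the 1 square-inch effective core area, 60 Hz line frequency and 90,000 lines per square inch flux density are illustrative assumptions: ###Code
E_p = 115.0        # primary voltage
E_s = 12.6         # desired secondary voltage
B = 90000.0        # flux density, lines per square inch (illustrative)
A = 1.0            # effective core area in square inches = core area * stacking factor (illustrative)
F = 60.0           # line frequency, Hz
lossFactor = 0.95  # 1/1.05, Wolpert p11

# primary turns: Np = Ep * 1e8 / (4.44 * B * A * F)
N_p = E_p * 1e8 / (4.44 * B * A * F)

# secondary turns, adjusted by the loss factor: Ns = Np / Ep * lossFactor * Es
N_s = N_p / E_p * lossFactor * E_s

print("Np = %.0f turns, Ns = %.0f turns" % (N_p, N_s))
###Output _____no_output_____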
###Code %matplotlib notebook import Winding,Transformer import matplotlib.pyplot as plt import numpy as np import math ###Output _____no_output_____ ###Markdown Here are the default transformer parameter values laminationVA = lamva circularMilsPerAmp = 800.0 coreLoss = 0.66 watts/lbs efficiency = 0.90 1/1.11 in wolpert p10 estimate w/o calculating primary leakage inductance lineFrequency = 60.0 stackingFactor = 0.92 stacking factor wolpert p11 0.92 1x1 interleave, 0.95 butt stack lossFactor = 0.95 1/1.05 in wolpert p11 isolationThickness = 0.003 1 mil kapton wrappingThickness = 0.015 weightExtra = 1.15 percentage of extra: bells, brackets, screws insulationLayers = 3 ###Code # Simple 12V 3A filament transformer %reload_ext autoreload primary = Winding.Winding('p',115.0,0.0) secondary = Winding.Winding('s', 12.6,3.0,taps=[50]) t = Transformer.Transformer([primary,secondary],50,have=1) t.circularMilsPerAmp = 600 #t.fluxDensity = 87000 t.fluxDensity = t.fluxFind(bmax=90000) t.compute() t.report() t.fluxTable() print t.gcode() # Simple 12V filament transformer # now use fluxfind %reload_ext autoreload primary = Winding.Winding('p',115.0,0.0) secondary = Winding.Winding('s', 12.6,3.0,taps=[50]) t = Transformer.Transformer([primary,secondary],50,have=0) t.fluxDensity = t.fluxFind(bmax=100000,inc=100) # this scans through flux densities and finds minimal error for output voltage t.compute() t.report() # Simple 12V filament transformer # now use fluxfind and force bigger wire gauge to improve regulation %reload_ext autoreload primary = Winding.Winding('p',115.0,0.0) secondary = Winding.Winding('s', 12.6,3.0,taps=[50]) t = Transformer.Transformer([primary,secondary],50,have=0) t.circularMilsPerAmp = 1000 t.fluxDensity = t.fluxFind(bmax=100000,inc=500) # this scans through flux densities and finds minimal error for output voltage t.compute() t.report() # Simple 12V filament transformer # now use fluxfind and choose from wire I have %reload_ext autoreload primary = Winding.Winding('p',115.0,0.0,fill=0) secondary = Winding.Winding('s', 12.6,3.0,taps=[50]) t = Transformer.Transformer([primary,secondary],50,have=1) t.fluxDensity = t.fluxFind(bmax=100000,inc=500) # this scans through flux densities and finds minimal error for output voltage t.compute() t.report() t.plot(1) print t.gcode() # 6V6GT Push-Pull AB2 Power Transformer, Fender Deluxe 5E3 # 1 12AX7, 1 12AY7, 2 6V6GT, 1 5Y3 %reload_ext autoreload primary = Winding.Winding('p',115.0,0.0) secondary5 = Winding.Winding('s', 5.0,2.0,fill=0) #filament rectifier secondary6 = Winding.Winding('s', 6.3,1.7,taps=[50]) #filaments 6v6,12ax7, 12ay7 secondary325 = Winding.Winding('s',325.0,0.125,taps=[50]) #plate secondary20 = Winding.Winding('s', 20.0,0.002,None) #fixed bias t = Transformer.Transformer([secondary5,secondary6,primary,secondary325,secondary20],65) # windings order by gauge t.coreLoss = 0.66 # watts/lb GOES lam t.wrappingThickness = 0.005 t.insulationLayers = 2 t.circularMilsPerAmp = 650 #t.fluxDensity = 75000 t.fluxDensity = t.fluxFind(bmax=100000,inc=500) # this scans through flux densities and finds minimal error for output voltage t.compute() t.report() t.fluxTable() t.plot(0) print t.gcode() t.fluxTable() t.fluxTable(sort='error') %reload_ext autoreload # here's a filament transformer design using EI150 lamination # I'm using this for a bench power supply # 5.0V@5A, 6.3V@8A, 12.6V@6A primary = Winding.Winding('p',115.0,0.0,None) secondary5a = Winding.Winding('s',5.0 ,2.5,[50]) secondary5b = Winding.Winding('s',5.0 ,2.5,[50]) secondary6a = 
Winding.Winding('s',6.3 ,4.0,[50]) secondary6b = Winding.Winding('s',6.3 ,4.0,[50]) secondary12a = Winding.Winding('s',12.6,3.0,[50]) secondary12b = Winding.Winding('s',12.6,3.0,[50]) t = Transformer.Transformer([primary,secondary6a,secondary6b,secondary12a,secondary12b,secondary5a,secondary5b],160,have=1) t.coreLoss = 0.8 # watts/lb, using AK DI-MAX M-13 at 12kG t.isolationThickness = 0.003 t.wrappingThickness = 0.005 t.insulationLayers = 2 t.fluxDensity = t.fluxFind(bmax=100000,inc=500) # this scans through flux densities and finds minimal error for output voltage t.compute() t.report() print t.gcode() # grid bias transformer primary = Winding.Winding('p',115.0,0.0,None) secondary = Winding.Winding('s',100.0 ,0.020,[50]) t = Transformer.Transformer([primary,secondary],7) t.coreLoss = 0.66 # watts/lbs, goes t.fluxDensity = t.fluxFind(bmax=100000) #t.fluxDensity = 90000 t.compute() t.report() t.fluxTable() t.fluxTable(sort='error') # power transformer for a flyback tube output stage screen bias 200V primary = Winding.Winding('p',115.0,0.0,None) secondary5 = Winding.Winding('s',5.0,2.0,None) secondary6 = Winding.Winding('s',6.3,2.0,[50]) secondary200 = Winding.Winding('s',200.0,0.05,None) secondary500 = Winding.Winding('s',500.0,0.100,[50]) t = Transformer.Transformer([secondary5,secondary6,primary,secondary500,secondary200],90) t.circularMilsPerAmp = 700 t.coreLoss = 0.88 # watts/lbs t.wrappingThickness = 0.05 t.fluxDensity = t.fluxFind() t.compute() t.report() t.fluxTable() t.fluxTable(sort='error') ###Output _____no_output_____
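###Markdown The report() output above includes temperature rise and regulation figures; the helpers below are a small standalone sketch of the two formulas from the introduction (watts, pounds and volts assumed), not the methods the Transformer class actually uses. ###Code
# Standalone sketch of the temperature-rise and regulation formulas above.
# Units assumed: total_loss_w in watts, weight_lb in pounds, voltages in volts.
def temperature_rise_c(total_loss_w, weight_lb):
    # Wolpert-style estimate of the rise above ambient, in degrees C
    return total_loss_w / (0.1 * (weight_lb / 1.073) ** (2.0 / 3.0))

def regulation_percent(e_no_load, e_full_load):
    return (e_no_load - e_full_load) / e_full_load * 100.0

# Hypothetical example: 6 W of total loss in a 4 lb transformer,
# 13.4 V no-load vs 12.6 V full-load on the secondary
print("Temperature rise: %.1f C (ambient + rise should stay under 105 C)"
      % temperature_rise_c(6.0, 4.0))
print("Regulation: %.1f %%" % regulation_percent(13.4, 12.6))
###Output _____no_output_____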
00_NumPy.ipynb
###Markdown NumPyIn this notebook I will explain some basic usage of numpy mainly for machine learning purposes. ###Code # First we need to import numpy, and usually rename it to `np` for convenience import numpy as np ###Output _____no_output_____ ###Markdown Vectors / MatricesThe most useful data structures in machine learning is vectors and matricies.Usually the input $X$ is provided as a matrix with shape of $(N, d)$, where $N$ is the number of samplesand $d$ is the number of features.The output is usually a vector $y$ with shape of $(d)$.**Note**: All the elements in a numpy array must have the same data type. Usually we use `np.float64`.When creating array with constants, add a decimal point `.` after integer to make it a float. (i.e. `1.` instead of `1`). ###Code # Create vector / matrix # Conventionally, use uppercase letter for matrix and lowercase letter for vector # 1-d array => vector w = np.array([1,2,3]) print("w = ", w) # 2-d array => matrix X = np.array([[1.,2.,3.], [4.,5.,6.]]) print("X = ", X) # shapes print("w.shape = ", w.shape) print("X.shape = ", X.shape) # Data type print("w.dtype = ", w.dtype) print("X.dtype = ", X.dtype) # Create zero matrix X = np.zeros(shape=[3, 4]) print("Zero matrix\n", X) # Random valued matrix (uniform distribution) X = np.random.rand(3, 4) print("Random matrix\n", X) # 1 matrix X = np.ones(shape=[3, 4]) print("1 matrix\n", X) # Matrix with arbitrary value X = np.empty(shape=[3, 4]) X.fill(3) print("3 matrix\n", X) # Identity matrix X = np.eye(3) print("Identity(3x3)\n", X) ###Output Zero matrix [[0. 0. 0. 0.] [0. 0. 0. 0.] [0. 0. 0. 0.]] Random matrix [[0.72729723 0.1936502 0.53600168 0.92075096] [0.3402273 0.43510733 0.96235618 0.38138813] [0.61589393 0.66103064 0.10208497 0.41653688]] 1 matrix [[1. 1. 1. 1.] [1. 1. 1. 1.] [1. 1. 1. 1.]] 3 matrix [[3. 3. 3. 3.] [3. 3. 3. 3.] [3. 3. 3. 3.]] Identity(3x3) [[1. 0. 0.] [0. 1. 0.] [0. 0. 1.]] ###Markdown Here are some practical examples. ###Code # Test data X = np.random.rand(100, 4) # Create weight vector `w` based on input N, d = X.shape w = np.zeros(shape=[d]) ###Output _____no_output_____ ###Markdown Vector / Matrix operations ###Code ####################################### # Vector operations ####################################### a = np.array([1., 2., 3., 4., 5.]) b = np.array([6., 7., 8., 9., 10.]) print('Vector operations\n-----------------') # scalar operations print("Scalar: ", a * 2 + 1) # element-wise operations print("a+b=", a + b) print("a*b=", a * b) # dot product print("a.b=", np.dot(a, b)) # L2-norm print("L2-norm: ||a||=", np.linalg.norm(a)) print("L2-norm square: ||a||^2=", np.square(np.linalg.norm(a))) print("L2-norm square(alternative): ", np.dot(a, a)) ####################################### # Matrix operations ####################################### A = np.array([[1., 2., 3.], [4., 5., 6]]) B = np.array([[7., 8.], [9., 10.], [11., 12.]]) C = np.array([[100, 200], [300, 400]]) print('\n\nMatrix operations\n-----------------') print('A = \n', A) print('B = \n', B) print('C = \n', C) # Transpose print('A^T = \n', A.T) # Inverse print('C^-1 = \n', np.linalg.inv(C)) # Multiplication print('A * B = \n', np.matmul(A, B)) print('B * A = \n', np.matmul(B, A)) ###Output Vector operations ----------------- Scalar: [ 3. 5. 7. 9. 11.] a+b= [ 7. 9. 11. 13. 15.] a*b= [ 6. 14. 24. 36. 50.] a.b= 130.0 L2-norm: ||a||= 7.416198487095663 L2-norm square: ||a||^2= 55.0 L2-norm square(alternative): 55.0 Matrix operations ----------------- A = [[1. 2. 3.] [4. 5. 6.]] B = [[ 7. 8.] [ 9. 10.] [11. 
12.]] C = [[100 200] [300 400]] A^T = [[1. 4.] [2. 5.] [3. 6.]] C^-1 = [[-0.02 0.01 ] [ 0.015 -0.005]] A * B = [[ 58. 64.] [139. 154.]] B * A = [[ 39. 54. 69.] [ 49. 68. 87.] [ 59. 82. 105.]] ###Markdown Matrix slicing, expandingHow to change the dimension of matrices. ###Code # Slice print('1st column of A = ', A[:,0]) print('2nd row of A = ', A[1]) print('last column of A = ', A[:, -1]) print('1-2 columns of A = \n', A[:, 0:2]) # Split, usually used to extract training input and training labels input = A[:,:-1] labels = A[:,-1] print('input = \n', input) print('labels = ', labels) # Add one extra 1's column to A ones = np.ones(shape=[A.shape[0], 1]) print('Extended A = \n', np.hstack([ones, A])) ###Output 1st column of A = [1. 4.] 2nd row of A = [4. 5. 6.] last column of A = [3. 6.] 1-2 columns of A = [[1. 2.] [4. 5.]] input = [[1. 2.] [4. 5.]] labels = [3. 6.] Extended A = [[1. 1. 2. 3.] [1. 4. 5. 6.]]
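###Markdown Putting the pieces above together: a common pattern is to add the 1's bias column and then compute the predictions of a linear model with a single matrix product. The weight values below are arbitrary placeholders, chosen only to illustrate the shapes. ###Code
# Linear-model prediction with the bias-extended matrix from above.
# w[0] is the bias term, w[1:] multiply the columns of A (values are placeholders).
ones = np.ones(shape=[A.shape[0], 1])
A_ext = np.hstack([ones, A])          # shape (2, 4)
w = np.array([0.5, 1.0, -1.0, 2.0])   # shape (4,)
y_hat = np.matmul(A_ext, w)           # shape (2,)
print("predictions =", y_hat)
###Output _____no_output_____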
HouseholdPowerConsumption.ipynb
###Markdown Licensed to the Apache Software Foundation (ASF) under oneor more contributor license agreements. See the NOTICE filedistributed with this work for additional informationregarding copyright ownership. The ASF licenses this fileto you under the Apache License, Version 2.0 (the"License"); you may not use this file except in compliancewith the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing,software distributed under the License is distributed on an"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANYKIND, either express or implied. See the License for thespecific language governing permissions and limitationsunder the License. Household Power Consumption ###Code import os import math import scipy as sp import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt import sklearn from sklearn import cluster from sklearn import neighbors import torch import torch.nn as nn import torch.optim as optim import scikit_wrappers cuda = False if torch.cuda.is_available(): print("Using CUDA...") cuda = True # GPU number gpu = 0 ###Output Using CUDA... ###Markdown DatasetThe dataset can be found here: [https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption](https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption). ###Code # Path to dataset data = 'Time Series/household_power_consumption.txt' # Dataset as a dataframe df = pd.read_csv(data, sep=';', decimal=',') # Replace missing values by the last seen values dataset = np.transpose(np.array(df))[2].reshape(1, 1, -1) for i in range(np.shape(dataset)[2]): if dataset[0, 0, i] == '?': dataset[0, 0, i] = dataset[0, 0, i-1] dataset = dataset.astype(float) # Training and testing partition (training set: first 1500000 measurements) train = dataset[:, :, :500000] test = dataset[:, :, 500000:] # Preprocessing: normalization mean = np.mean(dataset) var = np.var(dataset) dataset = (dataset - mean)/math.sqrt(var) train = (train - mean)/math.sqrt(var) test = (test - mean)/math.sqrt(var) print('Mean: ', np.mean(dataset)) print('Variance: ', np.var(dataset)) plt.figure(figsize=(30,10)) plt.xlabel('Timestep', fontsize=20) plt.xticks(fontsize=20) plt.yticks(fontsize=20) plt.plot(dataset[0, 0, 50000:51000], color='g') plt.show() plt.close() ###Output _____no_output_____ ###Markdown Feature Learning (Yearly Scale) Learning Parameters ###Code # Set to True to train a new model training = False # Prefix to path to the saved model model = 'Time Series/HouseholdPowerConsumption_yearly' hyperparameters = { "batch_size": 1, "channels": 30, "compared_length": None, "depth": 10, "nb_steps": 400, "in_channels": 1, "kernel_size": 3, "penalty": None, "early_stopping": None, "lr": 0.001, "nb_random_samples": 10, "negative_penalty": 1, "out_channels": 160, "reduced_size": 80, "cuda": cuda, "gpu": gpu } ###Output _____no_output_____ ###Markdown Training ###Code encoder_yearly = scikit_wrappers.CausalCNNEncoderClassifier() encoder_yearly.set_params(**hyperparameters) if training: encoder_yearly.fit_encoder(train, save_memory=True, verbose=True) encoder_yearly.save_encoder(model) else: encoder_yearly.load_encoder(model) torch.cuda.empty_cache() ###Output _____no_output_____ ###Markdown Computing RepresentationsWe compute in the following (or load them from local storage if they are already precomputed) the learned representations given by the yearly encoder on sliding windows of different 
sizes (a week, a quarter) for the whole dataset. ###Code compute_representations = False storage_train_day = 'Time Series/HouseholdPowerConsumption_representations_train_yearly_day.npy' storage_test_day = 'Time Series/HouseholdPowerConsumption_representations_test_yearly_day.npy' storage_train_quarter = 'Time Series/HouseholdPowerConsumption_representations_train_yearly_quarter.npy' storage_test_quarter = 'Time Series/HouseholdPowerConsumption_representations_test_yearly_quarter.npy' if compute_representations: train_features_day = encoder_yearly.encode_window(train, 1440) np.save(storage_train_day, train_features_day) test_features_day = encoder_yearly.encode_window(test, 1440) np.save(storage_test_day, test_features_day) train_features_quarter = encoder_yearly.encode_window(train, 12*7*1440, batch_size=25) np.save(storage_train_quarter, train_features_quarter) test_features_quarter = encoder_yearly.encode_window(test, 12*7*1440, batch_size=25) np.save(storage_test_quarter, test_features_quarter) else: train_features_day = np.load(storage_train_day) test_features_day = np.load(storage_test_day) train_features_quarter = np.load(storage_train_quarter) test_features_quarter = np.load(storage_test_quarter) ###Output _____no_output_____ ###Markdown Visualization ###Code # From http://abhay.harpale.net/blog/python/how-to-plot-multicolored-lines-in-matplotlib/ # to plot multicolored cruves in matplotlib def find_contiguous_colors(colors): # finds the continuous segments of colors and returns those segments segs = [] curr_seg = [] prev_color = '' for c in colors: if c == prev_color or prev_color == '': curr_seg.append(c) else: segs.append(curr_seg) curr_seg = [] curr_seg.append(c) prev_color = c segs.append(curr_seg) # the final one return segs kmeans = cluster.KMeans(n_clusters=6).fit(np.swapaxes(test_features_day[0, :, 76+5*1440:76+6*1440], 0, 1)) plt.figure(figsize=(30,10)) plt.title('Clustering based on yearly-scale learned representations computed on a day-long sliding window, from midnight to 23:59', fontsize=20) plt.xlabel('Hour of the day', fontsize=20) plt.ylabel('Minute-averaged active power (normalized)', fontsize=20) plt.xticks(fontsize=20) plt.yticks(fontsize=20) associated_colors = {0: 'blue', 1: 'green', 2: 'red', 3: 'yellow', 4: 'magenta', 5: 'black', 6: 'purple', 7: 'cyan', 8: 'pink', 9: 'orange', 10: 'grey', 11: 'fuchsia', 12: 'maroon', 13: 'navy'} colors = [associated_colors[l] for l in kmeans.labels_] segments = find_contiguous_colors(colors) start = 76+6*1440 beginning = True for seg in segments: end = start + len(seg) if beginning: hour_range = ((np.arange(start, end) - 76)%1440)/60 l, = plt.gca().plot(hour_range, test[0, 0, start:end], lw=2, c=seg[0]) beginning = False else: hour_range = ((np.arange(start-1, end) - 76)%1440)/60 plt.gca().axvline(x=hour_range[0]) l, = plt.gca().plot(hour_range, test[0, 0, start-1:end], lw=2, c=seg[0]) start = end plt.legend(fontsize=25) plt.savefig('electricity.eps') ###Output No handles with labels found to put in legend. 
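###Markdown The clustering above fixes the number of clusters at 6. As a rough check, one can compare silhouette scores over a range of cluster counts on the same day-long window of representations; this sketch assumes `test_features_day` is still the NumPy array loaded above (before it is converted to a PyTorch tensor in the next section). ###Code
# Rough check of the choice k = 6: silhouette score for several cluster counts
# on the same window of day-scale representations used above.
from sklearn.metrics import silhouette_score

window = np.swapaxes(test_features_day[0, :, 76+5*1440:76+6*1440], 0, 1)
for k in range(2, 9):
    labels = cluster.KMeans(n_clusters=k).fit_predict(window)
    print(k, silhouette_score(window, labels))
###Output _____no_output_____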
###Markdown Evaluation ###Code # Training and testing sets as PyTorch variables train_features_day = torch.from_numpy(train_features_day) test_features_day = torch.from_numpy(test_features_day) train_features_quarter = torch.from_numpy(train_features_quarter) test_features_quarter = torch.from_numpy(test_features_quarter) train = torch.from_numpy(train) test = torch.from_numpy(test) if torch.cuda.is_available(): train_features_day = train_features_day.cuda(gpu) test_features_day = test_features_day.cuda(gpu) train_features_quarter = train_features_quarter.cuda(gpu) test_features_quarter = test_features_quarter.cuda(gpu) train = train.cuda(gpu) test = test.cuda(gpu) ###Output _____no_output_____ ###Markdown Regression / Forecasting (24h)The task here is to predict the evolution of the average electricity consumption for the next 24 hours, compared to the one of the last 24 hours. ###Code # Computing values to predict means = np.array(pd.DataFrame(data=dataset[0, 0]).rolling(1440).mean())[:, 0] target = -means[:-1439] + means[1439:] train_target = target[1439:500000] test_target = target[500000+1439:] # Transferring targets to PyTorch tensors train_target = torch.from_numpy(train_target) test_target = torch.from_numpy(test_target) if torch.cuda.is_available(): train_target = train_target.cuda(gpu) test_target = test_target.cuda(gpu) ###Output _____no_output_____ ###Markdown Linear Regression using Learned RepresentationsWe use representations computed by the learned encoder on a day-long sliding window. ###Code regressor = nn.Linear(160, 1) regressor.double() if torch.cuda.is_available(): regressor.cuda(gpu) loss = nn.MSELoss() optimizer = optim.Adam(regressor.parameters(), lr=0.001) epochs = 400 %%time for i in range(epochs): l = loss(regressor(train_features_day[0].t()).squeeze(), train_target) l.backward() optimizer.step() optimizer.zero_grad() print("end") with torch.no_grad(): print(loss(regressor(test_features_day[0, :, :-1439].t()).squeeze(), test_target).data.cpu().numpy()) ###Output 0.08951327073870838 ###Markdown Linear Regression using the Raw Values of the Last 24 hours ###Code regressor = nn.Conv1d(1, 1, 1440) regressor.double() if torch.cuda.is_available(): regressor.cuda(gpu) loss = nn.MSELoss() optimizer = optim.Adam(regressor.parameters(), lr=0.0001) epochs = 400 %%time for i in range(epochs): l = loss(regressor(train).squeeze(), train_target) l.backward() optimizer.step() optimizer.zero_grad() print("end") with torch.no_grad(): print(loss(regressor(test).squeeze()[:-1439], test_target).data.cpu().numpy()) ###Output 0.08915801725356748 ###Markdown Regression / Forecasting (a Quarter)The task here is to predict the evolution of the average electricity consumption for the next quarter, compared to the one of the last quarter. ###Code # Computing values to predict means = np.array(pd.DataFrame(data=dataset[0, 0]).rolling(7*12*1440).mean())[:, 0] target = -means[:-7*12*1440 + 1] + means[7*12*1440 - 1:] train_target = target[7*12*1440 - 1:500000] test_target = target[500000 + 7*12*1440 - 1:] # Transferring targets to PyTorch tensors train_target = torch.from_numpy(train_target) test_target = torch.from_numpy(test_target) if torch.cuda.is_available(): train_target = train_target.cuda(gpu) test_target = test_target.cuda(gpu) ###Output _____no_output_____ ###Markdown Linear Regression using Learned RepresentationsWe use representations computed by the learned encoder on a quarter-long sliding window. 
###Code regressor = nn.Linear(160, 1) regressor.double() if torch.cuda.is_available(): regressor.cuda(gpu) loss = nn.MSELoss() optimizer = optim.Adam(regressor.parameters(), lr=0.0001) epochs = 400 %%time for i in range(epochs): l = loss(regressor(train_features_quarter[0].t()).squeeze(), train_target) l.backward() optimizer.step() optimizer.zero_grad() print("end") with torch.no_grad(): print(loss(regressor(test_features_quarter[0, :, :-7*12*1440 + 1].t()).squeeze(), test_target).data.cpu().numpy()) ###Output 0.07255152957379928 ###Markdown Linear Regression using the Raw Values of the Last Quarter ###Code regressor = nn.Conv1d(1, 1, 1440*7*12) regressor.double() if torch.cuda.is_available(): regressor.cuda(gpu) loss = nn.MSELoss() optimizer = optim.Adam(regressor.parameters(), lr=0.001) epochs = 400 %%time for i in range(epochs): l = loss(regressor(train).squeeze(), train_target) l.backward() optimizer.step() optimizer.zero_grad() print("end") with torch.no_grad(): print(loss(regressor(test).squeeze()[:-7*12*1440 + 1], test_target).data.cpu().numpy()) ###Output 0.06257629058462644
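###Markdown As a small appendix, the rolling-difference construction of the forecasting targets above can be illustrated on a toy series; this sketch only mirrors the `means[...]` slicing used for the 24 h and quarter targets, with a window of 3 instead of 1440. ###Code
# Toy illustration of the target construction used above, with a window of 3:
# target[j] = (mean of the window starting at j) - (mean of the window ending at j).
toy = np.arange(10.0)
m = np.array(pd.DataFrame(data=toy).rolling(3).mean())[:, 0]
toy_target = -m[:-2] + m[2:]
print(m)           # first two entries are NaN (incomplete window)
print(toy_target)  # constant 2.0 once both windows are complete
###Output _____no_output_____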
deep-learning/tensor-flow-examples/notebooks/2_basic_classifiers/linear_regression.ipynb
###Markdown Linear Regression in TensorFlowCredits: Forked from [TensorFlow-Examples](https://github.com/aymericdamien/TensorFlow-Examples) by Aymeric Damien SetupRefer to the [setup instructions](http://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/master/deep-learning/tensor-flow-examples/Setup_TensorFlow.md) ###Code import tensorflow as tf import numpy import matplotlib.pyplot as plt rng = numpy.random # Parameters learning_rate = 0.01 training_epochs = 2000 display_step = 50 # Training Data train_X = numpy.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,7.042,10.791,5.313,7.997,5.654,9.27,3.1]) train_Y = numpy.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,2.827,3.465,1.65,2.904,2.42,2.94,1.3]) n_samples = train_X.shape[0] # tf Graph Input X = tf.placeholder("float") Y = tf.placeholder("float") # Create Model # Set model weights W = tf.Variable(rng.randn(), name="weight") b = tf.Variable(rng.randn(), name="bias") # Construct a linear model activation = tf.add(tf.mul(X, W), b) # Minimize the squared errors cost = tf.reduce_sum(tf.pow(activation-Y, 2))/(2*n_samples) #L2 loss optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) #Gradient descent # Initializing the variables init = tf.initialize_all_variables() # Launch the graph with tf.Session() as sess: sess.run(init) # Fit all training data for epoch in range(training_epochs): for (x, y) in zip(train_X, train_Y): sess.run(optimizer, feed_dict={X: x, Y: y}) #Display logs per epoch step if epoch % display_step == 0: print "Epoch:", '%04d' % (epoch+1), "cost=", \ "{:.9f}".format(sess.run(cost, feed_dict={X: train_X, Y:train_Y})), \ "W=", sess.run(W), "b=", sess.run(b) print "Optimization Finished!" print "cost=", sess.run(cost, feed_dict={X: train_X, Y: train_Y}), \ "W=", sess.run(W), "b=", sess.run(b) #Graphic display plt.plot(train_X, train_Y, 'ro', label='Original data') plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line') plt.legend() plt.show() from IPython.display import Image Image(filename='linearreg.png') ###Output _____no_output_____ ###Markdown Linear Regression in TensorFlow Updated for Python 3.6+Credits: Forked from [TensorFlow-Examples](https://github.com/aymericdamien/TensorFlow-Examples) by Aymeric Damien SetupRefer to the [setup instructions](http://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/master/deep-learning/tensor-flow-examples/Setup_TensorFlow.md) ###Code import tensorflow as tf import numpy import matplotlib.pyplot as plt rng = numpy.random # Parameters learning_rate = 0.01 training_epochs = 2000 display_step = 50 # Training Data train_X = numpy.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,7.042,10.791,5.313,7.997,5.654,9.27,3.1]) train_Y = numpy.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,2.827,3.465,1.65,2.904,2.42,2.94,1.3]) n_samples = train_X.shape[0] # tf Graph Input X = tf.placeholder("float") Y = tf.placeholder("float") # Create Model # Set model weights W = tf.Variable(rng.randn(), name="weight") b = tf.Variable(rng.randn(), name="bias") # Construct a linear model activation = tf.add(tf.multiply(X, W), b) # Minimize the squared errors cost = tf.reduce_sum(tf.pow(activation-Y, 2))/(2*n_samples) #L2 loss optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) #Gradient descent # Initializing the variables init = tf.initialize_all_variables() # Launch the graph with tf.Session() as sess: sess.run(init) # Fit all training 
data for epoch in range(training_epochs): for (x, y) in zip(train_X, train_Y): sess.run(optimizer, feed_dict={X: x, Y: y}) #Display logs per epoch step if epoch % display_step == 0: print ("Epoch:", '%04d' % (epoch+1), "cost=", \ "{:.9f}".format(sess.run(cost, feed_dict={X: train_X, Y:train_Y})), \ "W=", sess.run(W), "b=", sess.run(b)) print ("Optimization Finished!") print ("cost=", sess.run(cost, feed_dict={X: train_X, Y: train_Y}), \ "W=", sess.run(W), "b=", sess.run(b)) #Graphic display plt.plot(train_X, train_Y, 'ro', label='Original data') plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line') plt.legend() plt.show() from IPython.display import Image Image(filename='linearreg.png') ###Output _____no_output_____
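###Markdown As a quick cross-check of the gradient-descent fit above, the same line can be obtained in closed form with ordinary least squares; after enough epochs the TensorFlow W and b should be close to these values. This is only a sketch using the training arrays defined above. ###Code
# Closed-form least-squares fit of the same data, as a cross-check of the
# gradient-descent result above (expect W_ls ~ W and b_ls ~ b).
A = numpy.vstack([train_X, numpy.ones(len(train_X))]).T   # columns: [x, 1]
W_ls, b_ls = numpy.linalg.lstsq(A, train_Y, rcond=None)[0]
print("closed-form W =", W_ls, " b =", b_ls)
###Output _____no_output_____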
Data_Science_Specialization_IBM/Machine_Learning_with_Python_IBM/week2_regression/ML0101EN-Reg-NoneLinearRegression-py-v1.ipynb
###Markdown Non Linear Regression AnalysisEstimated time needed: **20** minutes ObjectivesAfter completing this lab you will be able to:- Differentiate between Linear and non-linear regression- Use Non-linear regression model in Python If the data shows a curvy trend, then linear regression will not produce very accurate results when compared to a non-linear regression because, as the name implies, linear regression presumes that the data is linear. Let's learn about non linear regressions and apply an example on python. In this notebook, we fit a non-linear model to the datapoints corrensponding to China's GDP from 1960 to 2014. Importing required libraries ###Code import numpy as np import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Though Linear regression is very good to solve many problems, it cannot be used for all datasets. First recall how linear regression, could model a dataset. It models a linear relation between a dependent variable y and independent variable x. It had a simple equation, of degree 1, for example y = $2x$ + 3. ###Code x = np.arange(-5.0, 5.0, 0.1) ##You can adjust the slope and intercept to verify the changes in the graph y = 2*(x) + 3 y_noise = 2 * np.random.normal(size=x.size) ydata = y + y_noise #plt.figure(figsize=(8,6)) plt.plot(x, ydata, 'bo') plt.plot(x,y, 'r') plt.ylabel('Dependent Variable') plt.xlabel('Independent Variable') plt.show() ###Output _____no_output_____ ###Markdown Non-linear regressions are a relationship between independent variables $x$ and a dependent variable $y$ which result in a non-linear function modeled data. Essentially any relationship that is not linear can be termed as non-linear, and is usually represented by the polynomial of $k$ degrees (maximum power of $x$). $$ \\ y = a x^3 + b x^2 + c x + d \\ $$Non-linear functions can have elements like exponentials, logarithms, fractions, and others. For example: $$ y = \\log(x)$$Or even, more complicated such as :$$ y = \\log(a x^3 + b x^2 + c x + d)$$ Let's take a look at a cubic function's graph. ###Code x = np.arange(-5.0, 5.0, 0.1) ##You can adjust the slope and intercept to verify the changes in the graph y = 1*(x**3) + 1*(x**2) + 1*x + 3 y_noise = 20 * np.random.normal(size=x.size) ydata = y + y_noise plt.plot(x, ydata, 'bo') plt.plot(x,y, 'r') plt.ylabel('Dependent Variable') plt.xlabel('Independent Variable') plt.show() ###Output _____no_output_____ ###Markdown As you can see, this function has $x^3$ and $x^2$ as independent variables. Also, the graphic of this function is not a straight line over the 2D plane. So this is a non-linear function. Some other types of non-linear functions are: Quadratic $$ Y = X^2 $$ ###Code x = np.arange(-5.0, 5.0, 0.1) ##You can adjust the slope and intercept to verify the changes in the graph y = np.power(x,2) y_noise = 2 * np.random.normal(size=x.size) ydata = y + y_noise plt.plot(x, ydata, 'bo') plt.plot(x,y, 'r') plt.ylabel('Dependent Variable') plt.xlabel('Independent Variable') plt.show() ###Output _____no_output_____ ###Markdown Exponential An exponential function with base c is defined by $$ Y = a + b c^X$$ where b ≠0, c > 0 , c ≠1, and x is any real number. The base, c, is constant and the exponent, x, is a variable. 
###Code X = np.arange(-5.0, 5.0, 0.1) ##You can adjust the slope and intercept to verify the changes in the graph Y= np.exp(X) plt.plot(X,Y) plt.ylabel('Dependent Variable') plt.xlabel('Independent Variable') plt.show() ###Output _____no_output_____ ###Markdown LogarithmicThe response $y$ is a results of applying logarithmic map from input $x$'s to output variable $y$. It is one of the simplest form of **log()**: i.e. $$ y = \\log(x)$$Please consider that instead of $x$, we can use $X$, which can be polynomial representation of the $x$'s. In general form it would be written as \\begin{equation}y = \\log(X)\\end{equation} ###Code X = np.arange(-5.0, 5.0, 0.1) Y = np.log(X) plt.plot(X,Y) plt.ylabel('Dependent Variable') plt.xlabel('Independent Variable') plt.show() ###Output /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages/ipykernel_launcher.py:3: RuntimeWarning: invalid value encountered in log This is separate from the ipykernel package so we can avoid doing imports until ###Markdown Sigmoidal/Logistic $$ Y = a + \frac{b}{1+ c^{(X-d)}}$$ ###Code X = np.arange(-5.0, 5.0, 0.1) Y = 1-4/(1+np.power(3, X-2)) plt.plot(X,Y) plt.ylabel('Dependent Variable') plt.xlabel('Independent Variable') plt.show() ###Output _____no_output_____ ###Markdown Non-Linear Regression example For an example, we're going to try and fit a non-linear model to the datapoints corresponding to China's GDP from 1960 to 2014. We download a dataset with two columns, the first, a year between 1960 and 2014, the second, China's corresponding annual gross domestic income in US dollars for that year. ###Code import numpy as np import pandas as pd #downloading dataset !wget -nv -O china_gdp.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv df = pd.read_csv("china_gdp.csv") df.head(10) ###Output 2020-10-31 11:16:02 URL:https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/china_gdp.csv [1218/1218] -> "china_gdp.csv" [1] ###Markdown **Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) Plotting the DatasetThis is what the datapoints look like. It kind of looks like an either logistic or exponential function. The growth starts off slow, then from 2005 on forward, the growth is very significant. And finally, it decelerate slightly in the 2010s. ###Code plt.figure(figsize=(8,5)) x_data, y_data = (df["Year"].values, df["Value"].values) plt.plot(x_data, y_data, 'ro') plt.ylabel('GDP') plt.xlabel('Year') plt.show() ###Output _____no_output_____ ###Markdown Choosing a modelFrom an initial look at the plot, we determine that the logistic function could be a good approximation,since it has the property of starting with a slow growth, increasing growth in the middle, and then decreasing again at the end; as illustrated below: ###Code X = np.arange(-5.0, 5.0, 0.1) Y = 1.0 / (1.0 + np.exp(-X)) plt.plot(X,Y) plt.ylabel('Dependent Variable') plt.xlabel('Independent Variable') plt.show() ###Output _____no_output_____ ###Markdown The formula for the logistic function is the following:$$ \\hat{Y} = \frac1{1+e^{\beta_1(X-\beta_2)}}$$$\\beta_1$: Controls the curve's steepness,$\\beta_2$: Slides the curve on the x-axis. 
Building The ModelNow, let's build our regression model and initialize its parameters. ###Code def sigmoid(x, Beta_1, Beta_2): y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2))) return y ###Output _____no_output_____ ###Markdown Lets look at a sample sigmoid line that might fit with the data: ###Code beta_1 = 0.10 beta_2 = 1990.0 #logistic function Y_pred = sigmoid(x_data, beta_1 , beta_2) #plot initial prediction against datapoints plt.plot(x_data, Y_pred*15000000000000.) plt.plot(x_data, y_data, 'ro') ###Output _____no_output_____ ###Markdown Our task here is to find the best parameters for our model. Lets first normalize our x and y: ###Code # Lets normalize our data xdata =x_data/max(x_data) ydata =y_data/max(y_data) ###Output _____no_output_____ ###Markdown How we find the best parameters for our fit line?we can use **curve_fit** which uses non-linear least squares to fit our sigmoid function, to data. Optimal values for the parameters so that the sum of the squared residuals of sigmoid(xdata, \*popt) - ydata is minimized.popt are our optimized parameters. ###Code from scipy.optimize import curve_fit popt, pcov = curve_fit(sigmoid, xdata, ydata) #print the final parameters print(" beta_1 = %f, beta_2 = %f" % (popt[0], popt[1])) ###Output beta_1 = 690.447527, beta_2 = 0.997207 ###Markdown Now we plot our resulting regression model. ###Code x = np.linspace(1960, 2015, 55) x = x/max(x) plt.figure(figsize=(8,5)) y = sigmoid(x, *popt) plt.plot(xdata, ydata, 'ro', label='data') plt.plot(x,y, linewidth=3.0, label='fit') plt.legend(loc='best') plt.ylabel('GDP') plt.xlabel('Year') plt.show() ###Output _____no_output_____ ###Markdown PracticeCan you calculate what is the accuracy of our model? ###Code # write your code here from scipy.optimize import curve_fit # split data into train/test msk = np.random.rand(len(df)) < 0.8 train_x = xdata[msk] test_x = xdata[~msk] train_y = ydata[msk] test_y = ydata[~msk] # build the model using train set popt, pcov = curve_fit(sigmoid, train_x, train_y) print('popt', popt) print('popt*', *popt) # predict using test set y_hat = sigmoid(test_x, *popt) # evaluation print("Mean absolute error: %.2f" % np.mean(np.absolute(y_hat - test_y))) print("Residual sum of squares (MSE): %.2f" % np.mean((y_hat - test_y) ** 2)) from sklearn.metrics import r2_score print("R2-score: %.2f" % r2_score(y_hat , test_y) ) ###Output popt [713.16616259 0.99716702] popt* 713.1661625892445 0.9971670189898799 Mean absolute error: 0.03 Residual sum of squares (MSE): 0.00 R2-score: 0.98
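###Markdown Note that `curve_fit` also returns the covariance matrix `pcov` of the fitted parameters, which gives a rough idea of how well beta_1 and beta_2 are determined. A minimal sketch, assuming the `popt` and `pcov` from the fit above are still in scope: ###Code
# Approximate standard errors and 95% intervals for the fitted parameters,
# taken from the covariance matrix returned by curve_fit above.
perr = np.sqrt(np.diag(pcov))
for name, value, err in zip(["beta_1", "beta_2"], popt, perr):
    print("%s = %f +/- %f (95%% CI: %f .. %f)"
          % (name, value, err, value - 1.96 * err, value + 1.96 * err))
###Output _____no_output_____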
365_DatascienceCNN.ipynb
###Markdown ###Code a = [] while(True): a.append(2) ###Output _____no_output_____
MidtermExam_Program1.ipynb
###Markdown ###Code def main(): class TemperatureConversion: def __init__(self, temp=1): self._temp = temp class CelsiusToFahrenheit(TemperatureConversion): def conversion(self): return (self._temp * 9) / 5 + 32 class CelsiusToKelvin(TemperatureConversion): def conversion(self): return self._temp + 273.15 class FahrenheitToCelsius(TemperatureConversion): def conversion(self): return (self._temp - 32) * 5 / 9 class KelvinToCelsius(TemperatureConversion): def conversion(self): return self._temp - 273.15 tempInCelsius = float(input("Enter the temperature in Celsius: ")) convert = CelsiusToKelvin(tempInCelsius) print(str(convert.conversion()) + " Kelvin") convert = CelsiusToFahrenheit(tempInCelsius) print(str(convert.conversion()) + " Fahrenheit") tempInCelsius = float(input("Enter the temperature in Fahrenheit: ")) convert = FahrenheitToCelsius(tempInCelsius) print(str(convert.conversion()) + " Celsius") tempInCelsius = float(input("Enter the temperature in Kelvin: ")) convert = KelvinToCelsius(tempInCelsius) print(str(convert.conversion()) + " Celsius") main() ###Output Enter the temperature in Celsius: 0 273.15 Kelvin 32.0 Fahrenheit Enter the temperature in Fahrenheit: 32 0.0 Celsius Enter the temperature in Kelvin: 273.15 0.0 Celsius
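###Markdown A few round-trip checks of the conversion formulas. The helper functions below are standalone illustrations (plain functions rather than the classes above) so the checks can run without interactive input. ###Code
# Standalone sanity checks mirroring the conversion formulas used above.
def c_to_f(c): return c * 9 / 5 + 32
def f_to_c(f): return (f - 32) * 5 / 9
def c_to_k(c): return c + 273.15
def k_to_c(k): return k - 273.15

assert c_to_f(100.0) == 212.0                    # boiling point of water
assert f_to_c(212.0) == 100.0
assert c_to_k(0.0) == 273.15                     # freezing point of water
assert abs(f_to_c(c_to_f(37.0)) - 37.0) < 1e-9   # round trip
assert abs(k_to_c(c_to_k(-40.0)) + 40.0) < 1e-9
print("all conversion checks passed")
###Output _____no_output_____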
Text_Classification_Practice.ipynb
###Markdown 第一种变化确定特征提取流程:分词->使用词袋模型提取特征->计算TF-IDF权重,其中词袋模型的参数为保留20000个特征,同时考虑unigram和bigram,最大文档频率为95%,最小文档频率为2。比较以下分类器的性能:* 朴素贝叶斯* 随机森林* 支持向量机* K-近邻算法 ###Code def word_cut(corpus, x): for text in corpus: word_list = jieba.lcut(text) x.append(' '.join(word_list)) train_corpus_cut = [] valid_corpus_cut = [] test_corpus_cut = [] start = time.time() word_cut(train_corpus, train_corpus_cut) word_cut(valid_corpus, valid_corpus_cut) word_cut(test_corpus, test_corpus_cut) end = time.time() print('耗时:' + str(end-start) + 's') ###Output Building prefix dict from the default dictionary ... Loading model from cache C:\Users\min\AppData\Local\Temp\jieba.cache Loading model cost 0.572 seconds. Prefix dict has been built succesfully. ###Markdown 生成词云 ###Code start = time.time() my_wordcloud = WordCloud(background_color="white", width=1920, height=1080, font_path="C:\\Windows\\Fonts\\STXINWEI.TTF").generate(' '.join(train_corpus_cut)) end = time.time() print('耗时:' + str(end-start) + 's') plt.imshow(my_wordcloud) plt.axis("off") plt.show() vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1,2), max_df=0.95, min_df=2) x_train = vectorizer.fit_transform(train_corpus_cut) x_valid = vectorizer.transform(valid_corpus_cut) def train(clf, x_train, train_labels, info='nothing'): start = time.time() try: clf.fit(x_train, train_labels) except: clf.fit(x_train.toarray(), train_labels) print(info + '训练结束') end=time.time() print('耗时:' + str(end-start) + 's') #朴素贝叶斯 NB_clf = GaussianNB() #随机森林 RF_clf = RandomForestClassifier(n_estimators=100) #支持向量机 SVM_clf = SVC(gamma='scale') #K-近邻算法 KNN_clf = KNeighborsClassifier(algorithm='kd_tree') clf_list = [NB_clf, RF_clf, SVM_clf, KNN_clf] infos = ['朴素贝叶斯', '随机森林', '支持向量机', 'K-近邻'] for i, clf in enumerate(clf_list): train(clf, x_train, train_labels, infos[i]) def test(clf, x_valid, valid_labels): start = time.time() try: y_valid = clf.predict(x_valid) except: y_valid = clf.predict(x_valid.toarray()) end = time.time() print(clf) print('正确率:' + str(sum(y_valid==valid_labels)/len(valid_labels))) print('耗时:' + str(end-start) + 's') for clf in clf_list: test(clf, x_valid, valid_labels) accs = [0.98125, 0.979, 0.991, 0.97725] plt.rcParams['font.sans-serif'] =['Microsoft YaHei'] plt.rcParams['axes.unicode_minus'] = False plt.barh(range(4), accs) plt.yticks(range(4), infos) plt.title('各分类器性能比较') plt.xlabel('正确率') plt.xlim([0.95, 1]) for i, acc in enumerate(accs): plt.text(acc+0.00001, i, '%s'%acc) plt.show() emb = TSNE(n_components=2).fit_transform(x_train.toarray()) plt.scatter(emb[:, 0], emb[:, 1], c=train_labels) plt.colorbar() plt.show() ###Output _____no_output_____ ###Markdown 第二种变化分类器固定为朴素贝叶斯,然后比较以下特征提取方法的性能:* 使用分词,gram范围为(1,1)* 使用分词,gram范围为(1,1),使用百度停用词表* 使用分词,gram范围为(1,2)* 使用分词,gram范围为(1,2),使用百度停用词表* 不使用分词,gram范围为(1,2)* 不使用分词,gram范围为(1,4)以上六种方案的最大特征数都为2W ###Code def get_accuracy(x_train, x_valid, train_labels, valid_labels): NB_clf = GaussianNB() NB_clf.fit(x_train.toarray(), train_labels) y_valid = NB_clf.predict(x_valid.toarray()) print('正确率:' + str(sum(y_valid==valid_labels)/len(valid_labels))) vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1,1), max_df=0.95, min_df=2) x_train = vectorizer.fit_transform(train_corpus_cut) x_valid = vectorizer.transform(valid_corpus_cut) get_accuracy(x_train, x_valid, train_labels, valid_labels) vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1,2), max_df=0.95, min_df=2, analyzer='char') x_train = vectorizer.fit_transform(train_corpus) x_valid = vectorizer.transform(valid_corpus) 
get_accuracy(x_train, x_valid, train_labels, valid_labels) vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1,4), max_df=0.95, min_df=2, analyzer='char') x_train = vectorizer.fit_transform(train_corpus) x_valid = vectorizer.transform(valid_corpus) get_accuracy(x_train, x_valid, train_labels, valid_labels) stop_words = [] with open(r'D:\textClassify\stopwords\百度停用词表.txt', encoding='utf-8') as f: for line in f: stop_words.append(line.replace('\n', '')) vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1,1), max_df=0.95, min_df=2, stop_words=stop_words) x_train = vectorizer.fit_transform(train_corpus_cut) x_valid = vectorizer.transform(valid_corpus_cut) get_accuracy(x_train, x_valid, train_labels, valid_labels) vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1,2), max_df=0.95, min_df=2, stop_words=stop_words) x_train = vectorizer.fit_transform(train_corpus_cut) x_valid = vectorizer.transform(valid_corpus_cut) get_accuracy(x_train, x_valid, train_labels, valid_labels) accs = [0.9745, 0.974, 0.98125, 0.98175, 0.9705, 0.97925] infos = ['分词-(1,1)', '分词-(1,1)-去停用词', '分词-(1,2)', '分词-(1,2)-去停用词', '不分词-(1,2)', '不分词-(1,4)'] plt.barh(range(6), accs) plt.yticks(range(6), infos) plt.title('各特征提取方法性能比较') plt.xlabel('正确率') plt.xlim([0.95, 1]) for i, acc in enumerate(accs): plt.text(acc+0.00001, i, '%s'%acc) plt.show() ###Output _____no_output_____ ###Markdown 使用深度学习方法 ###Code vocab_size = 20000 maxlen = 2000 embedding_dims = 128 batch_size = 64 filters = 250 kernel_size = 3 hidden_dims = 250 epochs = 100 num_class = 4 tokenizer = Tokenizer(num_words=vocab_size) tokenizer.fit_on_texts(train_corpus_cut) x_train = tokenizer.texts_to_sequences(train_corpus_cut) x_valid = tokenizer.texts_to_sequences(valid_corpus_cut) x_test = tokenizer.texts_to_sequences(test_corpus_cut) x_train = pad_sequences(x_train, maxlen=maxlen) x_valid = pad_sequences(x_valid, maxlen=maxlen) x_test = pad_sequences(x_test, maxlen=maxlen) y_train = to_categorical(train_labels) y_valid = to_categorical(valid_labels) y_test = to_categorical(test_labels) def get_model(): model = Sequential() model.add(Embedding(vocab_size, embedding_dims, input_length=maxlen)) model.add(Dropout(0.2)) model.add(Conv1D(filters, kernel_size, padding='valid', activation='relu', strides=1)) model.add(GlobalMaxPooling1D()) model.add(Dense(hidden_dims)) model.add(Dropout(0.2)) model.add(Activation('relu')) model.add(Dense(num_class)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print(model.summary(line_length=125)) return model def train_and_predict(model): ES = EarlyStopping(patience=3, verbose=1, restore_best_weights=True) start = time.time() model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, callbacks=[ES], validation_data=(x_valid, y_valid)) end = time.time() print('训练耗时:' + str(end-start) + 's') start = time.time() y_pred = model.predict(x_test) end = time.time() print('推断耗时:' + str(end-start) + 's') y_pred_cls = np.argmax(y_pred, 1) print('准确率:' + str(sum(y_pred_cls==test_labels)/len(test_labels))) model = get_model() train_and_predict(model) ###Output Train on 20000 samples, validate on 4000 samples Epoch 1/100 20000/20000 [==============================] - 14s 712us/step - loss: 0.0542 - acc: 0.9840 - val_loss: 0.0303 - val_acc: 0.9915 Epoch 2/100 20000/20000 [==============================] - 14s 700us/step - loss: 0.0023 - acc: 0.9996 - val_loss: 0.0264 - val_acc: 0.9938 Epoch 3/100 20000/20000 [==============================] - 
14s 701us/step - loss: 6.0481e-04 - acc: 1.0000 - val_loss: 0.0258 - val_acc: 0.9930 Epoch 4/100 20000/20000 [==============================] - 14s 700us/step - loss: 9.9974e-05 - acc: 1.0000 - val_loss: 0.0249 - val_acc: 0.9938 Epoch 5/100 20000/20000 [==============================] - 14s 702us/step - loss: 7.9571e-05 - acc: 1.0000 - val_loss: 0.0251 - val_acc: 0.9930 Epoch 6/100 20000/20000 [==============================] - 14s 707us/step - loss: 3.2500e-05 - acc: 1.0000 - val_loss: 0.0259 - val_acc: 0.9938 Epoch 7/100 20000/20000 [==============================] - 14s 698us/step - loss: 3.8517e-05 - acc: 1.0000 - val_loss: 0.0252 - val_acc: 0.9938 Restoring model weights from the end of the best epoch Epoch 00007: early stopping 耗时:98.40335655212402s 耗时:0.664334774017334s 准确率:0.99275 ###Markdown 使用字作为特征单元 ###Code vocab_size = 1000 kernel_size = 5 tokenizer = Tokenizer(num_words=vocab_size, char_level=True) tokenizer.fit_on_texts(train_corpus) x_train = tokenizer.texts_to_sequences(train_corpus) x_valid = tokenizer.texts_to_sequences(valid_corpus) x_test = tokenizer.texts_to_sequences(test_corpus) x_train = pad_sequences(x_train, maxlen=maxlen) x_valid = pad_sequences(x_valid, maxlen=maxlen) x_test = pad_sequences(x_test, maxlen=maxlen) model = get_model() train_and_predict(model) accs = [0.98125, 0.979, 0.991, 0.97725, 0.99275] infos = ['朴素贝叶斯', '随机森林', '支持向量机', 'K-近邻', '深度学习'] plt.barh(range(5), accs) plt.yticks(range(5), infos) plt.title('各模型性能比较') plt.xlabel('正确率') plt.xlim([0.95, 1]) for i, acc in enumerate(accs): plt.text(acc+0.00001, i, '%s'%acc) plt.show() times = [22.47 + 4.73, 85.51 + 0.68, 2803.49 + 459.04, 28.29 + 1911.89, 98.40 + 0.66] s_times = ['22.47+4.73', '85.51+0.68', '2803.49+459.04', '28.29+1911.89', '98.40+0.66'] infos = ['朴素贝叶斯', '随机森林', '支持向量机', 'K-近邻', '深度学习'] plt.barh(range(5), times) plt.yticks(range(5), infos) plt.title('各模型耗时比较') plt.xlabel('时间(s)') plt.xlim(0, 4500) for i, t in enumerate(times): plt.text(t+0.00001, i, '%s'%s_times[i]) plt.show() ###Output _____no_output_____
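###Markdown The classifier comparison above repeats the same fit/score steps by hand for every model. A compact alternative is to wrap each vectorizer/classifier pair in a scikit-learn `Pipeline` and score it with cross-validation. The sketch below uses a tiny placeholder corpus and label list; in this notebook you would pass `train_corpus_cut` and `train_labels` instead. `MultinomialNB` is used here (rather than the `GaussianNB` + `toarray()` approach above) because it accepts sparse TF-IDF input directly.
###Code
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

# Placeholder corpus/labels; substitute the jieba-cut training data from above.
docs = ["我 喜欢 这部 电影", "这个 产品 质量 很差", "服务 非常 周到", "完全 不能 推荐"]
labels = [1, 0, 1, 0]

for name, clf in [("MultinomialNB", MultinomialNB()), ("SVC", SVC(gamma="scale"))]:
    pipe = Pipeline([
        ("tfidf", TfidfVectorizer(max_features=20000, ngram_range=(1, 2), max_df=0.95)),
        ("clf", clf),
    ])
    scores = cross_val_score(pipe, docs, labels, cv=2)
    print(name, scores.mean())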
src/Deep_Context.ipynb
###Markdown News Data Extraction ###Code !pip install tweet-preprocessor !pip install google-api-python-client !pip install nltk import pandas as pd import preprocessor as p import networkx as nx import matplotlib.pyplot as plt import tweepy import json import requests import re import sys import urllib.parse from datetime import datetime from sklearn.feature_extraction.text import CountVectorizer from googleapiclient.discovery import build import nltk from nltk.corpus import stopwords from nltk.stem.wordnet import WordNetLemmatizer nltk.download('punkt') nltk.download('wordnet') nltk.download('stopwords') nltk.download('vader_lexicon') stop_words = set(stopwords.words('english')) ###Output [nltk_data] Downloading package punkt to /root/nltk_data... [nltk_data] Package punkt is already up-to-date! [nltk_data] Downloading package wordnet to /root/nltk_data... [nltk_data] Package wordnet is already up-to-date! [nltk_data] Downloading package stopwords to /root/nltk_data... [nltk_data] Package stopwords is already up-to-date! [nltk_data] Downloading package vader_lexicon to /root/nltk_data... ###Markdown Get News Data from News API and Google News ###Code NEWS_API_KEY = " " GNEWS_API_KEY = " " def decode_text(dct, api_data=list()): if "title" in dct: api_data.append(p.clean(dct["title"])) if "description" in dct: api_data.append(p.clean(dct["description"])) return api_data def get_news(query, api_source="newsapi", api_key=None): if not all([api_source, api_key]): return list() keywords = urllib.parse.quote(query) api_url = "https://newsapi.org" url = "{}/v2/everything?q={}&apiKey={}".format(api_url, keywords, api_key) if "gnews" in api_source.lower(): api_url = "https://gnews.io" url = "{}/api/v3/search?q={}&token={}".format(api_url, keywords, api_key) response = requests.get(url) newsApi_json = json.dumps(response.json(), sort_keys=True) return json.loads(newsApi_json, object_hook=decode_text) ###Output _____no_output_____ ###Markdown Get Twitter Data ###Code CONSUMER_KEY = " " CONSUMER_SECRET = " " TWITTER_TOKEN_KEY = " " TWITTER_TOKEN_SECRET = " " def get_twitter_context(topicName): auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) auth.set_access_token(TWITTER_TOKEN_KEY, TWITTER_TOKEN_SECRET) api = tweepy.API(auth) # The search term you want to find query = topicName language = "en" # Calling the user_timeline function with our parameters results = api.search(q=query, lang=language) corpus = [] # foreach through all tweets pulled for tweet in results: # print(tweet.user.screen_name, "Tweeted:", tweet.text) corpus.append(p.clean(tweet.text)) return corpus # print(get_twitter_context("kobe")) ###Output _____no_output_____ ###Markdown Google Search API ###Code class GoogleSearch(object): def __init__(self, api_key, cse_id): self.__api_key = api_key self.__cse_id = cse_id self.service = build("customsearch", "v1", developerKey=api_key) def search(self, search_term, **kwargs): self.__data = self.service.cse().list(q=search_term, cx=self.__cse_id, **kwargs).execute() def get_results(self): return self.__data def get_search_url(self): url_list = list() if "items" in self.__data: for item in self.__data['items']: url_list.append(item['link']) return url_list search_term = "coronavirus" GOOGLE_API_KEY = " " GOOGLE_CSE_ID = " " google_search = GoogleSearch(GOOGLE_API_KEY, GOOGLE_CSE_ID) google_search.search(search_term) ###Output _____no_output_____ ###Markdown Beatiful Soup ###Code import urllib.request from bs4 import BeautifulSoup from bs4.element import Comment url_list = 
google_search.get_search_url() def tag_visible(element): if element.parent.name in ['style', 'script', 'input', 'header','head', 'title', 'meta', '[document]']: return False if isinstance(element, Comment): return False return True def text_from_html(body): soup = BeautifulSoup(body, 'html.parser') texts = soup.findAll(text=True) visible_texts = filter(tag_visible, texts) return u" ".join(t.strip() for t in visible_texts) for url in url_list: page = urllib.request.urlopen(url).read() result = text_from_html(page) print(result) print("\n") ###Output _____no_output_____ ###Markdown **Data Clearning and Text Preprocessing** ###Code import re def cleaning(raw_news): import nltk # 1. Remove non-letters/Special Characters and Punctuations news = re.sub("[^a-zA-Z]", " ", raw_news) # 2. Convert to lower case. news = news.lower() # 3. Tokenize. news_words = nltk.word_tokenize( news) # 4. Convert the stopwords list to "set" data type. stops = set(nltk.corpus.stopwords.words("english")) # 5. Remove stop words. words = [w for w in news_words if not w in stops] # 6. Lemmentize wordnet_lem = [ WordNetLemmatizer().lemmatize(w) for w in words ] # 7. Stemming stems = [nltk.stem.SnowballStemmer('english').stem(w) for w in wordnet_lem ] # 8. Join the stemmed words back into one string separated by space, and return the result. return " ".join(stems) ###Output _____no_output_____ ###Markdown **Visulization of cleaned news content** ###Code from wordcloud import WordCloud, STOPWORDS import matplotlib.pyplot as plt import seaborn as sns from scipy import stats %matplotlib inline def cloud(data,backgroundcolor = 'white', width = 800, height = 600): wordcloud = WordCloud(stopwords = STOPWORDS, background_color = backgroundcolor, width = width, height = height).generate(data) plt.figure(figsize = (15, 10)) plt.imshow(wordcloud) plt.axis("off") plt.show() ###Output _____no_output_____ ###Markdown **Sentiment Analysis** ###Code import warnings import nltk.sentiment warnings.filterwarnings('ignore') senti = nltk.sentiment.vader.SentimentIntensityAnalyzer() def print_sentiment_scores(sentence): snt = senti.polarity_scores(sentence) print("{:-<40} \n{}".format(sentence, str(snt))) def get_vader_polarity(snt): if not snt: return None elif snt['neg'] > snt['pos'] and snt['neg'] > snt['neu']: return -1 elif snt['pos'] > snt['neg'] and snt['pos'] > snt['neu']: return 1 else: return 0 #Function to determine if a text is negative(-1) or postive (1) or neutral (0) def get_polarity_type(sentence): sentimentVector = [] snt = senti.polarity_scores(sentence) sentimentVector.append(get_vader_polarity(snt)) sentimentVector.append(snt['neg']) sentimentVector.append(snt['neu']) sentimentVector.append(snt['pos']) sentimentVector.append(snt['compound']) print(sentimentVector) return sentimentVector ###Output _____no_output_____ ###Markdown __Generate Memory Graph for Visualization__ ###Code # news_api_data = [] # keyword = "coronavirus" # my_api_key = "AIzaSyAULWtrSkRR-FLRSMfz5ycwFlrYHhCw1Vw" # my_cse_id = "014947934928168541572:hgmnooclf3g" # G=nx.Graph() # G.add_node(keyword) # corpus_twitter = get_twitter_context(keyword) # newApi = getNewsAPI(keyword) # gNews = getGNewsAPI(keyword) # corpus = corpus_twitter + newApi + gNews # top5_keyword_twitter = get_top_n_words(corpus,n=10) # for item in top5_keyword_twitter: # edge = (keyword, item[0]) # G.add_edge(*edge) # google_result_list = [] # google_keyword = item[0] # google_result = google_search(google_keyword,my_api_key,my_cse_id) # top5_keyword_google = 
get_top_n_words(google_result,n=10) # for result in top5_keyword_google: # edge = (item[0], result[0]) # G.add_edge(*edge) # nx.draw(G,with_labels=True) # plt.savefig("plot.png") # plt.show() ###Output _____no_output_____ ###Markdown Named-Entity Recognition (NER) ###Code import spacy # Load English tokenizer, tagger, parser, NER and word vectors ner = spacy.load("en_core_web_sm") # Working NER - By using the News Corpus directly, it properly identitifies individual entities and their type. def extract_entities(corpus): entities = list() for entry in corpus: filtered_corpus = "".join(entry) news_corpus_entities = ner(filtered_corpus) for entity in news_corpus_entities.ents: entities.append(entity.text) print(entity.text, entity.label_) return entities entity_list = extract_entities(corpus) print(entity_list) # Remove duplicates filtered_entity_list = list(set(entity_list)) print(filtered_entity_list) ''' # Not Working NER - By using Top Related Words, the NER is unable to identify separate entities. for word_string in top_related_words: filtered_string = "".join(word_string) entity_set = ner(filtered_string) for entity in entity_set.ents: print(entity.text, entity.label_) ''' ###Output _____no_output_____ ###Markdown **Based on the results above, it seems that NER only works when the input is a phrase or sentence. If the input is just a list of words, the NER does not properly recognize individual entities in the text.** LDA Topic Modeling ###Code from sklearn.decomposition import LatentDirichletAllocation as LDA def print_topics(model, count_vectorizer, n_top_words): extracted_words = list() words = count_vectorizer.get_feature_names() for topic_idx, topic in enumerate(model.components_): extracted_words.append(" ".join([words[i] for i in topic.argsort()[:-n_top_words - 1:-1]])) print("\nTopic #%d:" % topic_idx) print(" ".join([words[i] for i in topic.argsort()[:-n_top_words - 1:-1]])) return extracted_words number_topics = 10 number_words = 10 count_vectorizer = CountVectorizer(stop_words='english') # Fit and transform the processed titles #count_data = count_vectorizer.fit_transform(corpus) # Fit and transform the processed entities count_data = count_vectorizer.fit_transform(filtered_entity_list) lda = LDA(n_components=number_topics, n_jobs=-1) lda.fit(count_data) # Print the topics found by the LDA model print("Topics found via LDA:") top_related_words = print_topics(lda, count_vectorizer, number_words) print(top_related_words) ###Output ['americans fourty japan coronavirus china weeks annual chinese britons israel', 'delta platinum israeli feb wuhan maga coronavirus americans weeks annual', 'coronavirus initially infectious estimated novel westerdam israel tuesday americans chinese', 'francisco communist party san sunday hundreds cambodia california coronavirus americans', 'tom cotton taiwan barcelona weekend coronavirus americans annual china chinese', 'americans summit mobile reserve britons world delta coronavirus weeks china', 'xi jinping contagious highly annual chinese china weeks coronavirus americans', 'finance ministry kellyanne conway coronavirus americans weeks chinese annual china', 'week earlier 670 coronavirus americans weeks annual china chinese britons', 'world health organization malaysian washington virus coronavirus americans china weeks'] ###Markdown __Display LDA Topics__ ###Code ! 
pip install pyLDAvis from pyLDAvis import sklearn as sklearn_lda import pyLDAvis LDAvis_prepared = sklearn_lda.prepare(lda, count_data, count_vectorizer) pyLDAvis.display(LDAvis_prepared) # pyLDAvis.save_html(LDAvis_prepared, './ldavis_prepared_'+ str(number_topics) +'.html') ###Output /usr/local/lib/python3.6/dist-packages/pyLDAvis/_prepare.py:257: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version of pandas will change to not sort by default. To accept the future behavior, pass 'sort=False'. To retain the current behavior and silence the warning, pass 'sort=True'. return pd.concat([default_term_info] + list(topic_dfs)) ###Markdown Neo4j Graph Database Integration ###Code # pip install -U ipython pip install py2neo from py2neo import Graph, Node, Relationship #graph = Graph("bolt://ec2-100-27-23-215.compute-1.amazonaws.com:7687") graph = Graph("bolt://ec2-100-27-23-215.compute-1.amazonaws.com:7687", user = "kevin", password = "sjsucmpe295" ) graph.delete_all() news_api_data = [] keyword = "iowacaucus" my_api_key = "AIzaSyAULWtrSkRR-FLRSMfz5ycwFlrYHhCw1Vw" my_cse_id = "014947934928168541572:hgmnooclf3g" topic = Node("Keyword", name=keyword) graph.create(topic) corpus_twitter = get_twitter_context(keyword) newApi = getNewsAPI(keyword) gNews = getGNewsAPI(keyword) corpus = corpus_twitter + newApi + gNews top5_keyword_twitter = get_top_n_words(corpus,n=10) for item in top5_keyword_twitter: n = Node("Twitter", name=item[0]) r = Relationship(topic, "LINKS_TO", n) graph.create(n | r) google_result_list = [] google_keyword = item[0] google_result = google_search(google_keyword,my_api_key,my_cse_id) top5_keyword_google = get_top_n_words(google_result,n=10) for result in top5_keyword_google: res = Node("Google", name=result[0]) rel = Relationship(n, "LINKS_TO", res) graph.create(res) graph.create(rel) ###Output _____no_output_____
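###Markdown The memory-graph and Neo4j cells above call a `get_top_n_words` helper (and `getNewsAPI`/`getGNewsAPI` wrappers) that are not defined in the cells shown here. A minimal sketch of what `get_top_n_words` might look like, reusing the `CountVectorizer` already imported above, is:
###Code
from sklearn.feature_extraction.text import CountVectorizer

def get_top_n_words(corpus, n=10):
    """Return the n most frequent terms in the corpus as (word, count) pairs."""
    vec = CountVectorizer(stop_words='english')
    bag = vec.fit_transform(corpus)
    counts = bag.sum(axis=0).A1               # total count of each vocabulary term
    vocab = vec.get_feature_names()           # get_feature_names_out() on newer scikit-learn
    return sorted(zip(vocab, counts), key=lambda pair: pair[1], reverse=True)[:n]

print(get_top_n_words(["the virus spread fast", "virus cases rise fast"], n=3))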
Chapter 0 - Foundations of Python/User-defined Functions.ipynb
###Markdown In order to organize your code with some repeating requirements, calling functions can be quite efficiently. Other than those built-in function, you can also create your own user-defined functions. The basic syntax of Function is: def function_name(parameters): """docstring""" statement(s) return return_valueReturn statement is an option since the return value is not always necessary. Also the docstring, short remark for this function, in between triple double quotes is optional. Docstring is available to us as \__doc\__ attribute of the function.Multiple input is allowed and using list or tuple can implement various returns. ###Code def sum(X): """Rerurn the summary amount of input values""" total_amount=0 for x in X: total_amount+=x return total_amount test=[15,45,10,5,30,25] sum_test=sum(test) print(sum_test) print(sum.__doc__) #print(total_amount) #NameError: name 'total_amount' is not defined ###Output 130 Rerurn the summary amount of input values ###Markdown Nested functionA function can also be inside of another function, the syntax would be: def outer(outer_parameters): statement(s) def inner(inner_parameters): statement(s) return return_value ###Code def array_sum(Xs): def sum(X): total_amount=0 for x in X: total_amount+=x return total_amount return [sum(x) for x in Xs] print(array_sum(([1,2,3],[4,5,6],[7,8,9]))) ###Output [6, 15, 24] ###Markdown Using a function as the return value ###Code def first_n_sum(n): def sum(X): total_amount=0 for i in range(n): total_amount+=X[i] return total_amount return sum test_data=(1,2,3,4,5,6) first_three=first_n_sum(3) first_five=first_n_sum(5) print((first_three(test_data),first_five(test_data))) ###Output (6, 15) ###Markdown global and nonlocalKeyword *global* can let you modify the variable outside of the function. Once a global variable is created, it can be changed in a local scope; similiarly, *nonlocal* allows modifying in outer function. Remember the search sequence is local scope->enclosing function->global global: ###Code n=10 def outer(): global n n=1 def inner(): n=2 print(n) inner() print(n) outer() print(n) ###Output 2 1 1 ###Markdown nonlocal ###Code n=10 def outer(): n=1 def inner(): nonlocal n n=2 print(n) inner() print(n) outer() print(n) ###Output 2 2 10 ###Markdown Default argumentsJust assign the default value to the argument with = ###Code def first_n_sum(n=4): def sum(X): total_amount=0 for i in range(n): total_amount+=X[i] return total_amount return sum test_data=(1,2,3,4,5,6) default=first_n_sum() first_four=first_n_sum(4) print((default(test_data),first_four(test_data))) ###Output (10, 10) ###Markdown Flexible argumentsIn the beginning, the case of function sum(), even though the argument X can be a list, the input of this function is single argument. There is a way to pass a variable number of arguments to a function: put asterisk (\*) or double asterisk (\*\*) in front of the argument. The single asterisk form is used to pass a non-keyworded, variable-length argument list, and the double asterisk form is used to pass a keyworded, variable-length argument list. ###Code def sum(*X): """Rerurn the summary amount of input values""" total_amount=0 for x in X: total_amount+=x return total_amount #test=[15,45,10,5,30,25] #sum_test=sum(test) sum_test=sum(15,45,10,5,30,25) #it actually accepts six parameters print(sum_test) #130 ###Output 130
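###Markdown The double-asterisk form mentioned above (a keyworded, variable-length argument list) is not demonstrated, so here is a small example in the same style:
###Code
def describe(name, **details):
    """Print a name followed by any keyword arguments supplied."""
    print(name)
    for key, value in details.items():
        print(' ', key, '=', value)

describe('Alice', age=30, city='Sydney')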
noValueCode.ipynb
###Markdown 使用Gan训练cifra10,测试使用,由于BigGan遇到了处理不了的问题,回来试一下Gan ###Code # conf latent_dim = 28 height = 28 width = 28 channels = 1 generator_input = keras.Input(shape=(latent_dim,)) # Generate part x = layers.Dense(128 * 14 * 14)(generator_input) x = layers.LeakyReLU()(x) x = layers.Reshape((14, 14, 128))(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2DTranspose(256, 4, strides=2, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) # use tanh as the activation function in Generator part x = layers.Conv2D(channels, 7, activation='tanh', padding='same')(x) generator = keras.models.Model(generator_input, x) generator.summary() discriminator_input = layers.Input(shape=(height, width, channels)) x = layers.Conv2D(128, 3)(discriminator_input) x = layers.LeakyReLU()(x) x = layers.Conv2D(128, 4, strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(128, 4, strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(128, 4, strides=2)(x) x = layers.LeakyReLU()(x) x = layers.Flatten()(x) # Dropout for preventing Discriminator part controlling the whole network!!!! x = layers.Dropout(0.8)(x) # Classification x = layers.Dense(1, activation='sigmoid')(x) discriminator = keras.models.Model(discriminator_input, x) discriminator.summary() # set optimizer -> learning rate decay and gradient clipping discriminator_optimizer = keras.optimizers.RMSprop(lr=0.00001, clipvalue=1.0, decay=1e-8) discriminator.compile(optimizer=discriminator_optimizer, loss='binary_crossentropy') # set weights of Discriminator part to non-trainable for Generator part discriminator.trainable = False gan_input = keras.Input(shape=(latent_dim,)) gan_output = discriminator(generator(gan_input)) gan = keras.models.Model(gan_input, gan_output) gan_optimizer = keras.optimizers.RMSprop(lr=0.0004, clipvalue=1.0, decay=1e-8) gan.compile(optimizer=gan_optimizer, loss='binary_crossentropy') # set steps and parameters iterations = 4500 batch_size = 20 # start training start = 0 for step in range(iterations): # train Discriminator part # initial input data/latent vectors random_latent_vectors = np.random.normal(size=(batch_size, latent_dim)) # generate faked pic generated_images = generator.predict(random_latent_vectors) # mix faked images with real images stop = start + batch_size real_images = train_data[start: stop] combined_images = np.concatenate([generated_images, real_images]) labels = np.concatenate([np.ones((batch_size, 1)), np.zeros((batch_size, 1))]) # mix noise labels += 0.05 * np.random.random(labels.shape) # train the Discriminator part d_loss = discriminator.train_on_batch(combined_images, labels) # train GAN part # initial input data/latent vectors random_latent_vectors = np.random.normal(size=(batch_size, latent_dim)) # set labels for faked images misleading_targets = np.zeros((batch_size, 1)) # train GAN part and freeze discriminator a_loss = gan.train_on_batch(random_latent_vectors, misleading_targets) # judge when to stop start += batch_size if start > len(train_data) - batch_size: start = 0 # output log if step % 100 == 0: print('discriminator loss at step %s: %s' % (step, d_loss)) print('adversarial loss at step %s: %s' % (step, a_loss)) # generate test data(#40) random_latent_vectors = np.random.normal(size=(40, latent_dim)) # decode input data to faked images generated_images = generator.predict(random_latent_vectors) # print the shape of faked 
images print(np.array(generated_images[2]).shape) # plot for i in range(generated_images.shape[0]): img = image.array_to_img(generated_images[i] * 255., scale=False) plt.figure() plt.imshow(img,cmap='gray') plt.show() ###Output _____no_output_____ ###Markdown 关于sneaker数据集的各种功能函数,基于keras,绝大部分作废 ###Code seed = 42 os.environ['PYTHONHASHSEED']=str(seed) os.environ['TF_DETERMINISTIC_OPS'] = '1' os.environ['TF_CUDNN_DETERMINISTIC'] = '1' os.environ['HOROVOD_FUSION_THRESHOLD']='0' random.seed(seed) np.random.seed(seed) tf.random.set_random_seed(seed) tf.set_random_seed(seed) # conf channel = 3 height = 128 #300 width = 128 #400 class_num = 2 # 4 #norm_size = 32#参数 batch_size = 64 epochs = 200 train_dir = './drive/MyDrive/daydayup/dataset/sneaker_nonsneaker/sneaker_nonsneaker/training' # ../data/dataset/train validation_dir = './drive/MyDrive/daydayup/dataset/sneaker_nonsneaker/sneaker_nonsneaker/testing' # ../data/dataset/val # train_dir = './drive/MyDrive/daydayup/dataset/filteredDataset/training' # validation_dir = './drive/MyDrive/daydayup/dataset/filteredDataset/testing' save_tl_dir = "./drive/MyDrive/daydayup/Morpho/predict/TLCheckpoint" save_ft_dir = "./drive/MyDrive/daydayup/Morpho/predict/FTCheckpoint" save_Direct_dir = "./drive/MyDrive/daydayup/Morpho/predict/DirectCheckpoint" totalTrain = len(list(paths.list_images(train_dir))) totalVal = len(list(paths.list_images(validation_dir))) print(totalTrain) print(totalVal) source_train_dir_positive = os.path.join(train_dir, 'positive') source_train_dir_negative = os.path.join(train_dir, 'negative') source_validation_dir_positive = os.path.join(validation_dir, 'positive') source_validation_dir_negative = os.path.join(validation_dir, 'negative') import os from PIL import Image from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True # train_dir_positive = os.path.join(train_dir, '/1') # train_dir_negative = os.path.join(train_dir, '/0') # validation_dir_positive = os.path.join(validation_dir, '/1') # validation_dir_negative = os.path.join(validation_dir, '/0') def pilConvertJPG(path): for a, _, c in os.walk(path): for n in c: # print(n) if '.jpg' in n or '.png' in n or '.jpeg' in n or '.JPEG' in n: img = Image.open(os.path.join(a, n)) rgb_im = img.convert('RGB') error_img_path = os.path.join(a,n) os.remove(error_img_path) n = ''.join(filter(lambda n: ord(n) < 256, n)) jpg_img_path = os.path.splitext(os.path.join(a, n).replace('\\', '/'))[0] jpg_img_path += '.jpg' # print(jpg_img_path) rgb_im.save(jpg_img_path) else: print("error:", n) pilConvertJPG("./drive/MyDrive/daydayup/Morpho/BigGan/dataset/groundTruth") pilConvertJPG(source_train_dir_positive) pilConvertJPG(source_train_dir_negative) pilConvertJPG(source_validation_dir_positive) pilConvertJPG(source_validation_dir_negative) def dataprocess(train_dir, validation_dir,height, width, batch_size): train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( train_dir, target_size=(height, width), batch_size= batch_size, class_mode='categorical') validation_generator = test_datagen.flow_from_directory( validation_dir, target_size=(height, width), batch_size= batch_size, class_mode='categorical') return train_generator, validation_generator train_generator, validation_generator = dataprocess(train_dir, validation_dir, height, width, batch_size) # 
generator使用方法,先利用生成器生成数据然后训练 for i, j in train_generator: print(i.shape) # 编写网络 class Generator: def neural(latent_dim): input_shape = (latent_dim,) inputs = Input(shape= input_shape) # conv_base = VGG16(include_top=False, weights='imagenet', input_shape=input_shape) # x = conv_base.output # UpSample = MaxPooling2D(pool_size=(9, 9), strides=(1, 1),padding = 'same', name='MaxPooling2D')(inputs) # UpSample = Dropout(0.5)(UpSample) # UpSample = Conv2D(256,(1,1))(UpSample) # UpSample = BatchNormalization()(UpSample) # UpSample = Activation('relu')(UpSample) # UpSample = Dropout(0.5)(UpSample) # UpSample = Conv2D(64,(1,1))(UpSample) # UpSample = BatchNormalization()(UpSample) # UpSample = Activation('relu')(UpSample) # UpSample = Dropout(0.5)(UpSample) # UpSample = Flatten(name='flatten')(UpSample) # UpSample = Dense(classes)(UpSample) # UpSample = BatchNormalization()(UpSample) # predictions = Activation('softmax')(UpSample) model = Model(inputs=inputs, outputs=predictions) return model generator_input = keras.Input(shape=(latent_dim,)) # Generate part x = layers.Dense(128 * 14 * 14)(generator_input) x = layers.LeakyReLU()(x) x = layers.Reshape((14, 14, 128))(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2DTranspose(256, 4, strides=2, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) x = layers.Conv2D(256, 5, padding='same')(x) x = layers.LeakyReLU()(x) # use tanh as the activation function in Generator part x = layers.Conv2D(channels, 7, activation='tanh', padding='same')(x) generator = keras.models.Model(generator_input, x) generator.summary() ## 废案,想自己写generate函数来生成图片,但是遇见了无法处理的问题,主要还是读取那一块的问题,tf自带的读取方法没有理解,不过好在keras的读取方式也可以用 # for i in os.listdir(train_dir): # print(i) # 这个函数用于返回符合,可以使用正则路径,*表示任意字符 # path_list = tf.data.Dataset.list_files(train_path + "*.jpg") # 定义一个读取图片的函数 def read_image(dirPath, batchSize, k, classNum = 2): ''' :dirPath: 数据集读取路径 :batchSize: 获得的数据数量 :k: 记录历史获取次数 :yield: 该步图片张量列表与图片标签列表 ''' historyCheck = np.zeros(classNum) ratioNum = [] splitRatio = np.random.dirichlet(np.ones(classNum),size=1) for i in range(classNum): ratioNum.append(int(splitRatio[0][i] * batchSize)) for className in os.listdir(dirPath): content = os.path.join(dirPath, str(className)) data = [] # 图片聊表 labels = [] # 图片标签列表 path_list = tf.data.Dataset.list_files(content + "*.jpg") # 根据文件路径列表依次读取 for i in path_list: image_temp = tf.io.read_file(i) # tesnsorflow的io读取文件 image_temp = tf.image.decode_jpeg(image_temp) # 根据图片的格式进行编码转化为张量,这里图片是jpg格式 data.append(image_temp) # 图片加入到数据集 labels.append(str(className)) # 获取文件名加入到标签,这里要张量i转化为字符串 for index, item in enumerate(ratioNum): historyCheck[index] = historyCheck[index] + item yield np.array(data), np.array(labels) # 读取训练图片 train_images, train_labels = read_image(train_dir) ###Output WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/random_seed.py:58: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where
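###Markdown The abandoned `read_image` draft above batches files and labels by hand. A minimal sketch of the usual `tf.data` pattern for a `class_name/*.jpg` folder layout is shown below. It assumes TensorFlow 2.x eager execution; the class names, image size and batch size are placeholders to adapt to the sneaker dataset.
###Code
import os
import tensorflow as tf

class_names = ['negative', 'positive']        # sub-folder names under the data directory

def load_example(file_path):
    # The label comes from the parent folder name, the image from the file itself.
    parts = tf.strings.split(file_path, os.path.sep)
    label = tf.argmax(tf.cast(parts[-2] == class_names, tf.int32))
    image = tf.io.read_file(file_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, (128, 128)) / 255.0
    return image, label

def make_dataset(root_dir, batch_size=64):
    files = tf.data.Dataset.list_files(os.path.join(root_dir, '*/*.jpg'), shuffle=True)
    return files.map(load_example).batch(batch_size).prefetch(1)

# train_ds = make_dataset(train_dir)   # yields (image_batch, label_batch) pairs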
Amazon Product Review/Amazon_product_sentiment_analysis.ipynb
###Markdown ###Code import tensorflow as tf num_gpus_available = len(tf.config.experimental.list_physical_devices('GPU')) print("Num GPUs Available: ", num_gpus_available) assert num_gpus_available > 0 !pip install transformers from transformers import DistilBertTokenizerFast from transformers import TFDistilBertForSequenceClassification import pandas as pd import numpy as np import nltk import re nltk.download('stopwords') from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer import tensorflow_datasets as tfds ds = tfds.load('amazon_us_reviews/Mobile_Electronics_v1_00', split='train', shuffle_files=True) assert isinstance(ds, tf.data.Dataset) print(ds) df = tfds.as_dataframe(ds) # saving the dataframe df.to_csv('file1.csv') df.head() def sentimentConversion(rating): if rating <=2: return 0 elif rating == 3: return 1 else: return 2 df["Sentiment"] = df["data/star_rating"].apply(sentimentConversion) #df['Sentiment'] = df['Sentiment'].map({'positive':1, 'negative':0}) df['short_review'] =df['data/review_body'].str.decode("utf-8") df = df[["short_review", "Sentiment"]] df # Dropping last n rows using drop n = 74975 df.drop(df.tail(n).index, inplace = True) index = df.index number_of_rows = len(index) print(number_of_rows) df.tail() df.head() reviews = df['short_review'].values.tolist() labels = df['Sentiment'].tolist() print(reviews[:2]) print(labels[:2]) from sklearn.model_selection import train_test_split training_sentences, validation_sentences, training_labels, validation_labels = train_test_split(reviews, labels, test_size=.2) tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') tokenizer([training_sentences[0]], truncation=True, padding=True, max_length=128) train_encodings = tokenizer(training_sentences, truncation=True, padding=True) val_encodings = tokenizer(validation_sentences, truncation=True, padding=True) train_dataset = tf.data.Dataset.from_tensor_slices(( dict(train_encodings), training_labels )) val_dataset = tf.data.Dataset.from_tensor_slices(( dict(val_encodings), validation_labels )) model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased',num_labels=3) optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5, epsilon=1e-08) model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=['accuracy']) model.fit(train_dataset.shuffle(100).batch(16), epochs=2, batch_size=16, validation_data=val_dataset.shuffle(100).batch(16)) model.save_pretrained("./sentiment") loaded_model = TFDistilBertForSequenceClassification.from_pretrained("./sentiment") test_sentence = "This is a really good product. I love it" predict_input = tokenizer.encode(test_sentence, truncation=True, padding=True, return_tensors="tf") tf_output = loaded_model.predict(predict_input)[0] tf_prediction = tf.nn.softmax(tf_output, axis=1) labels = ['Negative','Neutral','Positive'] label = tf.argmax(tf_prediction, axis=1) label = label.numpy() print(labels[label[0]]) ###Output _____no_output_____
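###Markdown The inference cell above scores one sentence at a time. A small sketch that scores a list of reviews in one call, reusing `loaded_model` and `tokenizer` from above (the two example reviews are made up):
###Code
import numpy as np

label_names = ['Negative', 'Neutral', 'Positive']

def predict_sentiments(texts, model, tokenizer):
    enc = tokenizer(texts, truncation=True, padding=True, max_length=128, return_tensors="tf")
    # As in the single-sentence cell, the first element of the prediction output holds the logits.
    logits = model.predict(dict(enc))[0]
    return [label_names[i] for i in np.argmax(logits, axis=-1)]

print(predict_sentiments(["Battery died after a week.", "Works exactly as described, love it."],
                         loaded_model, tokenizer))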
docs/io.ipynb
###Markdown Input and outputCurrently, the only supported approach for loading and saving ensembles in `medusa` is via [pickle](https://docs.python.org/3/library/pickle.html). `pickle` is the Python module that serializes and de-serializes Python objects (i.e. converts to/from a binary representation). This is an intentional design choice--as `medusa` matures, we will identify a feasible route for standardization through an extension to the Systems Biology Markup Language (SBML), which is the *de facto* standard for sharing genome-scale metabolic network reconstructions.To load an ensemble, use the `load` function from the `pickle` module: ###Code import medusa from pickle import load with open("../medusa/test/data/Staphylococcus_aureus_ensemble.pickle", 'rb') as infile: ensemble = load(infile) ###Output _____no_output_____ ###Markdown To save an ensemble, you can pickle it with: ###Code save_dir = ("../medusa/test/data/Staphylococcus_aureus_repickled.pickle") ensemble.to_pickle(save_dir) ###Output _____no_output_____ ###Markdown Reading/Writing a 🌈 ###Code from chromatic import * ###Output _____no_output_____ ###Markdown Reading FilesOne key goal of `chromatic` is to make it easy to load spectroscopic light curves from a variety of different file formats, so that the outputs from multiple different pipelines can be standardized into objects that can be direcly compared to one another. We hope to provide an straightforward way to check one analysis vs another as quickly as possible. Download Example InputsIf you want to test out any of these readers, you'll need data files in each format to test on. You can download some example datasets from [this link](https://www.dropbox.com/s/es5drnp6ufkz8wv/example-datasets.zip?dl=0). Simply extract that `.zip` file into the directory from which you'll be running this notebook. `chromatic` rainbow files (`*.rainbow.npy`)The `chromatic` toolkit saves files in its own default format, which can then be shared and loaded back in. These files directly encode the core dictionaries in binary files, so they load and save quickly. They have the extension `.rainbow.npy`. These files can be written (see below) from any `Rainbow` object. ###Code rainbow_chromatic = Rainbow('example-datasets/chromatic/simulated.rainbow.npy') ###Output _____no_output_____ ###Markdown STScI `jwst` pipeline outputs (`x1dints.fits`)The `jwst` pipeline developed at the Space Telescope Science Institute will produce extract 1D stellar spectra for time-series observations with the James Webb Space Telescope. Details about the pipeline itself are available [here](https://jwst-pipeline.readthedocs.io/en/latest/). These files typically end with the `_x1dints.fits` suffix. Each file contains a number of individual "integrations" (= time points). Because the datasets can get large, sometimes a particular observation might be split into multiple segments, each with its own file. As such, the reader for these files is designed to handle either a single file or a path with a `*` in it that points to a group of files from an observation that's been split into segments. ###Code rainbow_stsci = Rainbow('example-datasets/stsci/*_x1dints.fits') ###Output _____no_output_____ ###Markdown The `Rainbow` reader will try to guess the format of the file from the filepath. If that doesn't work for some reason, in this case you could explictly feed in the keyword `format='x1dints'` after the filepath, to force it to use the `from_x1dints` reader needed for these files. 
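###Markdown For example, forcing the format explicitly instead of relying on filename guessing would look like:
###Code
# Same files as above, but with the reader format stated explicitly.
rainbow_stsci = Rainbow('example-datasets/stsci/*_x1dints.fits', format='x1dints')
###Markdown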
`eureka` pipeline outputs (`S3_*_Table_Save.txt`)The `eureka` pipeline is one of many community tools being designed to extract spectra from JWST data. Details about the pipeline itself are available [here](https://github.com/kevin218/Eureka). These files typically have names that look something like `S3_*_Table_Save.txt`, and they contain fluxes as a function of wavelength and time, stored as an astropy `ecsv` table. ###Code rainbow_eureka = Rainbow('example-datasets/eureka/S3_wasp43b_Table_Save.txt') ###Output _____no_output_____ ###Markdown Again, you can force the reader to use this format by including a `format='eureka'` keyword. Writing Files `chromatic` rainbow files (`*.rainbow.npy`)The default file format for saving files encodes the core dictionaries in binary files, using the extension `.rainbow.npy`. This is a file that can be read directly back into `chromatic`. (Indeed, the commands below created the file that we read above.) ###Code simulated = SimulatedRainbow().inject_transit() simulated.save('example-datasets/chromatic/simulated.rainbow.npy') ###Output _____no_output_____
.ipynb_checkpoints/pbc_spider_codebook-checkpoint.ipynb
###Markdown People's Bank of China: a Web ScrapperChanghao Li | 2021.12``GOAL`` of this programme: extract data, especially fines issued towards payment companies. Due to the structure obsticles, pure-automatically extracting those information is difficult.Therefore, we adopt a semi-structured method towards our goal. 0. Limitation of extracting data from a PDF format fileIt's difficult to read data, especially those written in Chinese character, from a PDF file. For instance, we try to extract relevant information from a sample PDF by **camelot**: ###Code # -*- coding: utf-8 -*- from camelot import read_pdf import re def parse_pdf_camelot(link): tables = read_pdf(link, pages = '0', flavor = 'stream', table_area = ['']) # stream会默认整页均为表格 print(tables) print(tables[0].data) print() #print(re.findall(r'\w+\s元|\w+\s万元', info)) parse_pdf_camelot('2021122718272373606.pdf') ###Output <TableList n=1> [['备注']] ###Markdown Cannot work...Let me try another way to read-in pdf file: use **PDFMiner3K** package. ###Code ### PDFMiner reading online PDF files from urllib.request import urlopen from pdfminer.pdfinterp import PDFResourceManager from pdfminer.pdfpage import PDFPage from pdfminer.converter import TextConverter from pdfminer.layout import LAParams from io import StringIO from io import open def readPDF(pdfFile): rsrcmgr = PDFResourceManager() retstr = StringIO() laparams = LAParams() device = TextConverter(rsrcmgr, retstr, laparams = laparams) PDFPage.get_pages(rsrcmgr, device, pdfFile) device.close() content = retstr.getvalue() retstr.close() return content pdfFile = urlopen('http://nanning.pbc.gov.cn/nanning/133346/133364/133371/4432873/2021122718272373606.pdf') outputString = readPDF(pdfFile) print('The program is running...') print(outputString) pdfFile.close() ###Output The program is running... ###Markdown Nothing happened... Now let's look at reading local files, and see whether it works or not... ###Code ### PDFMiner reading downloaded PDF files #from urllib.request import urlopen from pdfminer.pdfinterp import PDFResourceManager from pdfminer.pdfpage import PDFPage from pdfminer.converter import TextConverter from pdfminer.layout import LAParams from io import StringIO from io import open def readPDF(pdfFile): rsrcmgr = PDFResourceManager() retstr = StringIO() laparams = LAParams() device = TextConverter(rsrcmgr, retstr, laparams = laparams) PDFPage.get_pages(rsrcmgr, device, pdfFile) device.close() content = retstr.getvalue() retstr.close() return content outputString = readPDF('2021122718272373606.pdf') print('The program is running...') print(outputString) pdfFile.close() ###Output The program is running... ###Markdown Nope. Nothing happened. Hopelessness.The solution: skip the difficult part. This us not an PhD project. Waiting for Dr Josiah Poon, Dr Caren Han and their USYD NLP Group's breakthrough. Bless!In the main part I will only focus on extracting fine url from different PBC branch website. 1. PBC... so many branches!There are 35 PBC websites... 35! Each with different websites! The structure of the websites are different, the format of the information is also different... This makes web scraping extremely complex.For instance, PBC Shenzhen branch issue fine information in Excel format (.xls), PBC Xi'an branch use Word (.doc) instead, PBC Nanning branch use PDF files (.pdf), and PBC Nanjing branch issues pure words (HTML)! 
What a diversity...全国央行分支机构各自拥有其独立的网站【网站结构不同】、各个网站也单独发布罚单【格式不同】,这让爬虫变得异常复杂。不同央行分行的行政处罚罚单格式不尽相同————例如,深圳市中心支行的罚单信息为Excel格式(.xls)、西安分行公布的附件为Word格式(.doc)、南宁中心支行的则为PDF格式(.pdf);更有甚者,南京分行的罚单信息竟然为网页纯文字...... 1.1 Guangzhou ###Code # -*- coding = utf-8 -*- ### GUANGZHOU BRANCH ''' INPUT: - name: name of the city - year: year - month: month OUTPUT: - a csv file (e.g. 'guangzhou-2021-12.csv') ''' from datetime import datetime, date from urllib import request, parse from bs4 import BeautifulSoup import time import pandas as pd from selenium import webdriver from selenium.webdriver.common.keys import Keys from fake_useragent import UserAgent import csv import re import webbrowser ua = UserAgent() name = 'guangzhou' year = '2021' month = '12' def gzSpider(link): driver = webdriver.Chrome() time.sleep(1) driver.get(link) req = driver.page_source # print(req) soup = BeautifulSoup(req, 'lxml') # print(soup.prettify()) fram = soup.find("td", class_ = "content_right column") # print(fram.prettify()) mylist = [] finallist = [] count = 0 datelist = [] linklist = [] for item in fram.find_all("table", limit=1): # print(item) for temp in item.find_all("td", limit=1): ### FILTER ALL TIME OUT for inner in temp.find_all("td", width="100", class_="hei12jj", limit=10): # print(inner) d = datetime.strptime(inner.text, '%Y-%m-%d') ### INPUT DESIRED TIME HERE! if ((d > datetime(2021, 12, 1)) & (d < datetime(2021, 12, 31))): print(d) datelist.append(d) count += 1 l = temp.select('a[href]', limit=count) for k in range(0,len(l)): print("http://guangzhou.pbc.gov.cn" + (l[k]['href'])) w = "http://guangzhou.pbc.gov.cn" + (l[k]['href']) linklist.append(w) txt = '{n}-{y}-{m}.csv' f = open(txt.format(n = name, y = year, m = month), 'w') writer = csv.writer(f) writer.writerow(['发布日期', '罚单链接']) for i in range(0, count): dlist = [] dlist.append(datelist[i].date().strftime("%Y-%m-%d")) dlist.append(linklist[i]) print(dlist) writer.writerow(dlist) f.close() #webbrowser.open('') ### INPUT GZ OFFICIAL WEBSITE INSIDE gzSpider('http://guangzhou.pbc.gov.cn/guangzhou/129142/129159/129166/index.html') ###Output 2021-12-27 00:00:00 2021-12-27 00:00:00 2021-12-24 00:00:00 2021-12-23 00:00:00 http://guangzhou.pbc.gov.cn/guangzhou/129142/129159/129166/4433798/index.html http://guangzhou.pbc.gov.cn/guangzhou/129142/129159/129166/4433789/index.html http://guangzhou.pbc.gov.cn/guangzhou/129142/129159/129166/4433778/index.html http://guangzhou.pbc.gov.cn/guangzhou/129142/129159/129166/4433773/index.html ['2021-12-27', 'http://guangzhou.pbc.gov.cn/guangzhou/129142/129159/129166/4433798/index.html'] ['2021-12-27', 'http://guangzhou.pbc.gov.cn/guangzhou/129142/129159/129166/4433789/index.html'] ['2021-12-24', 'http://guangzhou.pbc.gov.cn/guangzhou/129142/129159/129166/4433778/index.html'] ['2021-12-23', 'http://guangzhou.pbc.gov.cn/guangzhou/129142/129159/129166/4433773/index.html'] ###Markdown 1.2 Nanjing ###Code # -*- coding = utf-8 -*- ''' INPUT: - name: name of the city - year: year - month: month OUTPUT: - a csv file (e.g. 
'guangzhou-2021-12.csv') ''' from datetime import datetime, date from urllib import request, parse from bs4 import BeautifulSoup import time import pandas as pd from selenium import webdriver from selenium.webdriver.common.keys import Keys from fake_useragent import UserAgent import csv import re import webbrowser ua = UserAgent() name = 'nanjing' year = '2021' month = '12' def njSpider(link): driver = webdriver.Chrome() time.sleep(1) driver.get(link) req = driver.page_source # print(req) soup = BeautifulSoup(req, 'lxml') # print(soup.prettify()) fram = soup.find("td", class_ = "content_right column") # print(fram.prettify()) mylist = [] finallist = [] count = 0 datelist = [] linklist = [] for item in fram.find_all("table", limit=1): # print(item) for temp in item.find_all("td", limit=1): ### FILTER ALL TIME OUT for inner in temp.find_all("td", width="100", class_="hei12jj", limit=10): # print(inner) d = datetime.strptime(inner.text, '%Y-%m-%d') ### INPUT DESIRED TIME HERE! if ((d > datetime(2021, 12, 1)) & (d < datetime(2021, 12, 31))): print(d) datelist.append(d) count += 1 else: return # delete this if exists! l = temp.select('a[href]', limit=count) for k in range(0,len(l)): print("http://nanjing.pbc.gov.cn" + (l[k]['href'])) w = "http://nanjing.pbc.gov.cn" + (l[k]['href']) linklist.append(w) txt = '{n}-{y}-{m}.csv' f = open(txt.format(n = name, y = year, m = month), 'w') writer = csv.writer(f) writer.writerow(['发布日期', '罚单链接']) for i in range(0, count): dlist = [] dlist.append(datelist[i].date().strftime("%Y-%m-%d")) dlist.append(linklist[i]) print(dlist) writer.writerow(dlist) f.close() #webbrowser.open('') ### INPUT OFFICIAL WEBSITE INSIDE njSpider('http://nanjing.pbc.gov.cn/nanjing/117542/117560/117567/index.html') ###Output _____no_output_____ ###Markdown Nanjing did not issue fines during December, 2021. So it returned. 1.3 Jinan ###Code # -*- coding = utf-8 -*- ''' INPUT: - name: name of the city - year: year - month: month OUTPUT: - a csv file (e.g. 'guangzhou-2021-12.csv') ''' from datetime import datetime, date from urllib import request, parse from bs4 import BeautifulSoup import time import pandas as pd from selenium import webdriver from selenium.webdriver.common.keys import Keys from fake_useragent import UserAgent import csv import re import webbrowser ua = UserAgent() name = 'jinan' year = '2021' month = '12' def jnSpider(link): driver = webdriver.Chrome() time.sleep(1) driver.get(link) req = driver.page_source # print(req) soup = BeautifulSoup(req, 'lxml') # print(soup.prettify()) fram = soup.find("td", class_ = "content_right column") # print(fram.prettify()) mylist = [] finallist = [] count = 0 datelist = [] linklist = [] for item in fram.find_all("table", limit=1): # print(item) for temp in item.find_all("td", limit=1): ### FILTER ALL TIME OUT for inner in temp.find_all("td", width="100", class_="hei12jj", limit=10): # print(inner) d = datetime.strptime(inner.text, '%Y-%m-%d') ### INPUT DESIRED TIME HERE! 
if ((d > datetime(2021, 12, 1)) & (d < datetime(2021, 12, 31))): print(d) datelist.append(d) count += 1 l = temp.select('a[href]', limit=count) for k in range(0,len(l)): print("http://jinan.pbc.gov.cn" + (l[k]['href'])) w = "http://jinan.pbc.gov.cn" + (l[k]['href']) linklist.append(w) txt = '{n}-{y}-{m}.csv' f = open(txt.format(n = name, y = year, m = month), 'w') writer = csv.writer(f) writer.writerow(['发布日期', '罚单链接']) for i in range(0, count): dlist = [] dlist.append(datelist[i].date().strftime("%Y-%m-%d")) dlist.append(linklist[i]) print(dlist) writer.writerow(dlist) f.close() #webbrowser.open('') ### INPUT OFFICIAL WEBSITE INSIDE jnSpider('http://jinan.pbc.gov.cn/jinan/120967/120985/120994/index.html') ###Output 2021-12-27 00:00:00 2021-12-27 00:00:00 2021-12-27 00:00:00 2021-12-22 00:00:00 2021-12-21 00:00:00 2021-12-20 00:00:00 2021-12-17 00:00:00 2021-12-16 00:00:00 2021-12-07 00:00:00 2021-12-02 00:00:00 http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4431771/index.html http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4431386/index.html http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4430701/index.html http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4427013/index.html http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4423986/index.html http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4423126/index.html http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4418095/index.html http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4416957/index.html http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4408893/index.html http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4405290/index.html ['2021-12-27', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4431771/index.html'] ['2021-12-27', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4431386/index.html'] ['2021-12-27', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4430701/index.html'] ['2021-12-22', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4427013/index.html'] ['2021-12-21', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4423986/index.html'] ['2021-12-20', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4423126/index.html'] ['2021-12-17', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4418095/index.html'] ['2021-12-16', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4416957/index.html'] ['2021-12-07', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4408893/index.html'] ['2021-12-02', 'http://jinan.pbc.gov.cn/jinan/120967/120985/120994/4405290/index.html']
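###Markdown gzSpider, njSpider and jnSpider above are near-identical copies with only the branch name, base URL and listing page changed. Below is a sketch of a single parameterized spider that keeps the same selenium/BeautifulSoup selectors and CSV layout, and turns the date window into arguments so the hard-coded "INPUT DESIRED TIME HERE" edits are no longer needed:
###Code
from datetime import datetime
import csv
import time
from bs4 import BeautifulSoup
from selenium import webdriver

def pbc_spider(city, base_url, listing_page, start, end):
    """Generic version of gzSpider/njSpider/jnSpider: same page structure, different branch."""
    driver = webdriver.Chrome()
    time.sleep(1)
    driver.get(base_url + listing_page)
    soup = BeautifulSoup(driver.page_source, 'lxml')
    driver.quit()

    cell = soup.find("td", class_="content_right column").find("table").find("td")
    dates = []
    for td in cell.find_all("td", width="100", class_="hei12jj"):
        d = datetime.strptime(td.text.strip(), '%Y-%m-%d')
        if start <= d <= end:                 # inclusive date window
            dates.append(d)
    links = [base_url + a['href'] for a in cell.select('a[href]', limit=len(dates))]

    with open('{}-{}-{:02d}.csv'.format(city, start.year, start.month), 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['发布日期', '罚单链接'])
        for d, link in zip(dates, links):
            writer.writerow([d.strftime('%Y-%m-%d'), link])

pbc_spider('jinan', 'http://jinan.pbc.gov.cn',
           '/jinan/120967/120985/120994/index.html',
           datetime(2021, 12, 1), datetime(2021, 12, 31))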
.ipynb_ceckpoints/Predicting_bike_sharing_data-checkpoint.ipynb
###Markdown Your first neural networkIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. ###Code %matplotlib inline %load_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Load and prepare the dataA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! ###Code data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() ###Output _____no_output_____ ###Markdown Checking out the dataThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. ###Code rides[:24*10].plot(x='dteday', y='cnt') ###Output _____no_output_____ ###Markdown Dummy variablesHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`. ###Code dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() ###Output _____no_output_____ ###Markdown Scaling target variablesTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.The scaling factors are saved so we can go backwards when we use the network for predictions. ###Code quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std ###Output _____no_output_____ ###Markdown Splitting the data into training, testing, and validation setsWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. 
###Code # Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] ###Output _____no_output_____ ###Markdown We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). ###Code # Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] ###Output _____no_output_____ ###Markdown Time to build the networkBelow you'll build your network. We've built out the structure. You'll implement both the forward pass and backwards pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.Below, you have these tasks:1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.2. Implement the forward pass in the `train` method.3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.4. Implement the forward pass in the `run` method. ###Code ############# # In the my_answers.py file, fill out the TODO sections as specified ############# from my_answers import NeuralNetwork def MSE(y, Y): return np.mean((y-Y)**2) ###Output _____no_output_____ ###Markdown Unit testsRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project. 
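###Markdown A note before the unit tests: the sigmoid asked for in the TODOs above is the standard $\sigma(x) = 1 / (1 + e^{-x})$ (the `test_activation` test below checks exactly this form), and since the output activation is $f(x) = x$, the slope asked about in the hint is simply 1. A minimal sketch, to be adapted inside `my_answers.py`:
###Code
import numpy as np

def sigmoid(x):
    # Hidden-layer activation; this is what self.activation_function should compute.
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid, written in terms of its own output.
    s = sigmoid(x)
    return s * (1 - s)

# The output layer uses f(x) = x, whose derivative is 1, so no extra function is needed there.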
###Code import unittest inputs = np.array([[0.5, -0.2, 0.1]]) targets = np.array([[0.4]]) test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]]) test_w_h_o = np.array([[0.3], [-0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328], [-0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, -0.20185996], [0.39775194, 0.50074398], [-0.29887597, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) ###Output ..... ---------------------------------------------------------------------- Ran 5 tests in 0.178s OK ###Markdown Training the networkHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterationsThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, this process can have sharply diminishing returns and can waste computational resources if you use too many iterations. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. The ideal number of iterations would be a level that stops shortly after the validation loss is no longer decreasing. Choose the learning rateThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. Normally a good choice to start at is 0.1; however, if you effectively divide the learning rate by n_records, try starting out with a learning rate of 1. 
In either case, if the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodesIn a model where all the weights are optimized, the more hidden nodes you have, the more accurate the predictions of the model will be. (A fully optimized model could have weights of zero, after all.) However, the more hidden nodes you have, the harder it will be to optimize the weights of the model, and the more likely it will be that suboptimal weights will lead to overfitting. With overfitting, the model will memorize the training data instead of learning the true pattern, and won't generalize well to unseen data. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. You'll generally find that the best number of hidden nodes to use ends up being between the number of input and output nodes. ###Code import sys #################### ### Set the hyperparameters in you myanswers.py file ### #################### from my_answers import iterations, learning_rate, hidden_nodes, output_nodes N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) X, y = train_features.iloc[batch].values, train_targets.iloc[batch]['cnt'] network.train(X, y) # Printing out the training progress train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values) val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values) sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) sys.stdout.flush() losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() _ = plt.ylim() ###Output _____no_output_____ ###Markdown Check out your predictionsHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. ###Code fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features).T*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.iloc[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) ###Output _____no_output_____
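###Markdown As an optional numeric companion to the plot above (not part of the original project), the test-set error can also be reported in the original ride-count units, reusing the `network`, `test_features`, `test_targets`, `scaled_features`, and `MSE` objects defined earlier: ###Code
# Rescale predictions and targets back to ride counts before comparing
mean, std = scaled_features['cnt']
test_predictions = (network.run(test_features).T * std + mean)[0]
test_actual = (test_targets['cnt'] * std + mean).values

print("Test MSE:", MSE(test_predictions, test_actual))
###Output _____no_output_____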
radar-workflow/Gauge_V_Radar.ipynb
###Markdown Creating a Plot that Shows Radar-Derived Rainfall and Compares it to Rain Gauge Data This code takes .csv files of rain gauge data and radar-estimated hourly precipitation rates and plots them. ###Code import warnings warnings.filterwarnings('ignore') from pylab import * import pyart, boto3, tempfile, os, shutil, datetime, matplotlib import numpy as np import pandas as pd import pylab as pl import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt from matplotlib import animation import netCDF4 from datetime import datetime ###Output ## You are using the Python ARM Radar Toolkit (Py-ART), an open source ## library for working with weather radar data. Py-ART is partly ## supported by the U.S. Department of Energy as part of the Atmospheric ## Radiation Measurement (ARM) Climate Research Facility, an Office of ## Science user facility. ## ## If you use this software to prepare a publication, please cite: ## ## JJ Helmus and SM Collis, JORS 2016, doi: 10.5334/jors.119 ###Markdown Read in the files. "df" needed to be appended because the two days were in separate .csv files. "df" represents the gauge files through the entire code, and "hp" represents the radar-derived hourly precipitation. ###Code # Read in the files. hp = pd.read_csv("/home/amedendorp/Downloads/WaterYear2013_hourlyprecip.csv", skiprows=0, na_values = ['no info', '.']) # These .csv files are for two separate days, so they need to be appended together. df = pd.read_csv("KLOT_20130417_RainAmounts_at_CookCountyGauges.csv") df2 = pd.read_csv("KLOT_20130418_RainAmounts_at_CookCountyGauges.csv") df = df.append(df2) dates = [pd.Timestamp('2013-04-17'), pd.Timestamp('2013-04-18')] ts = pd.Series(np.random.randn(2), dates) type(ts.index) ###Output _____no_output_____ ###Markdown renaming the DatetimeIndexes: ###Code df_new = df hp_new = hp df_new.index = pd.to_datetime(df['Datetime']) hp_new.index = pd.to_datetime(hp['Date/Time']) ###Output _____no_output_____ ###Markdown Creating another new index, dropping the unnecessary columns: ###Code df_new = df_new.drop(labels=['Datetime'], axis=1) hp_new = hp_new.drop(labels=['Date/Time'], axis=1) df_new.index = pd.to_datetime(df_new.index) hp_new.index = pd.to_datetime(hp_new.index) ###Output _____no_output_____ ###Markdown Converting both of the indexes to floats, since the current items inside the indexes are strings and cannot be processed that way: ###Code df_float = df_new.convert_objects(convert_numeric=True) hp_float = hp_new.convert_objects(convert_numeric=True) ###Output _____no_output_____ ###Markdown Resample the rain gauge indexes while taking hourly sums (since the rain rate data is already in hour increments), and then convert mm into inches. ###Code df_mean = df_float.resample('1H', how='sum') df_new = df_mean/24.5 ###Output _____no_output_____ ###Markdown This next block of code fixes the five-hour data "shift" of the rain gauge data due to it being in UTC, not local time like the radar-estimated rain rate data is. Now both of the datasets are on the same time scale, so they can be plotted together. 
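###Markdown The next cell applies a fixed -5 hour offset. As an alternative sketch (not used in this workflow), pandas can make the index timezone-aware and convert it explicitly, which also handles daylight saving; `'America/Chicago'` is an assumption here because the gauges are in Cook County: ###Code
# Illustrative alternative to a fixed offset: timezone-aware conversion on a copy,
# so the actual workflow below is unaffected.
localized = df_new.copy()
localized.index = localized.index.tz_localize('UTC').tz_convert('America/Chicago')
print(localized.index[:3])
###Output _____no_output_____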
###Code df_new.index = df_new.index + pd.DateOffset(hours=-5) ###Output _____no_output_____ ###Markdown Creating arrays based on the DatetimeIndex: ###Code lines_epoch_array = np.asanyarray(df_new.index) lines_time_array = pd.to_datetime(lines_epoch_array, unit='s') lines_time_array hp_lines_epoch_array = np.asanyarray(hp.index) hp_lines_time_array = pd.to_datetime(hp_lines_epoch_array, unit='s') hp_lines_time_array df_new['time'] = lines_time_array hp_float['time'] = hp_lines_time_array ###Output _____no_output_____ ###Markdown Using ".iloc" to find the correct time period for the radar rain rate index (using the gauge dataset as reference for the timeframe): ###Code hp_new = hp_float.iloc[4746:4799] ###Output _____no_output_____ ###Markdown Plot the figure. ###Code fig = plt.figure(figsize=(15,5)) ax1 = fig.add_subplot(111) plt.title("Rain Gauge Data Versus Hourly Precip Values from Radar") plt.ylabel("inches") df_new.plot(x='time', y="G10", label="Radar" + " G10", color='blue', ax=ax1) hp_new.plot(x='time', y="G10", label="Gauge" + " G10", color='red', ax=ax1 ) ###Output _____no_output_____ ###Markdown These next few blocks of code create a new array, removing the 'time' column so that we can loop through both the datasets and plot them. ###Code col1_names = list(df_new.columns.values) col2_names = list(hp_new.columns.values) common = np.intersect1d(col1_names, col2_names) index = np.argwhere(common=='time') col = np.delete(common, index) ###Output _____no_output_____ ###Markdown Here is the looping code, which plots the individual gauge and radar-estimated rain rate data on one plot. ###Code for i in col: fig = plt.figure(figsize=(16, 8)) ax1 = fig.add_subplot(111) df_new.plot(x='time', y=i, label="Radar " + i, color='blue', ax=ax1, legend=True) hp_new.plot(x='time', y=i, label="Gauge " + i, color='red', ax=ax1, legend=True) plt.ylabel("inches") plt.title("Rain Gauge Totals Versus Rainfall-Estimated Rain Rate") # plt.savefig( '/home/amedendorp/Desktop/SAVUER/RadarVGauge/' + i) # plt.show() plt.close() ###Output _____no_output_____ ###Markdown This next block of code plots the data from all of the sites. One plots the rain rate, the other plots the rain gauge data. We need to include the extra argument in col2_names or else the code will be attempting to plot time versus time, which will result in an error. 
###Code col_names = list(df_mean.columns.values) col2_names = list(hp_new.columns.drop('time')) fig = plt.figure(figsize=(16, 8)) ax = df_new.plot(x='time', y="G1", color='red') for i in col_names: df_new.plot(x='time', y=i, ax=ax) ax.get_legend().remove() plt.ylabel("inches") plt.title("Radar-Estimated Rain Rate") #plt.savefig('/home/amedendorp/Desktop/SAVUER/RadarVGauge/Radar-Estimated_Rain_Rate', bbox_inches='tight') fig2 = plt.figure(figsize=(16, 8)) ax = df_new.plot(x='time', y="G1", color='red') for i in col2_names: hp_new.plot(x='time', y=i, ax=ax) ax.get_legend().remove() plt.ylabel("inches") plt.title("Rain Gauge Totals") #plt.savefig('/home/amedendorp/Desktop/SAVUER/RadarVGauge/Rain_gauge_Totals', bbox_inches='tight') plt.show() ###Output _____no_output_____ ###Markdown Next, plot both of the above plots together on the same figure: ###Code legend_elements = [Line2D([0], [0], color='blue', linewidth=1), Line2D([0], [0], color='red', linewidth=1)] fig, ax = plt.subplots(figsize=(12, 7)) for i in col2_names: hp_new.plot(x='time', y=i, ax=ax, color='red', label='radar', linewidth=.5) for i in col_names: df_new.plot(x='time', y=i, ax=ax, color='blue', label='rain gauge', linewidth=.5) ax.legend(legend_elements, ['radar', 'rain gauge']) plt.ylabel("rain amount (in)") plt.title("Rain Gauge Totals Versus Radar-Derived Rain Rate") plt.savefig('/home/amedendorp/Desktop/SAVUER/RadarVGauge/Rain_gauge_V_Radar', bbox_inches='tight') plt.show() ###Output _____no_output_____ ###Markdown This next block of code plots the average of both the rain gauge data and the radar-derived rain rate ###Code from matplotlib.lines import Line2D from matplotlib.patches import Patch legend_elements = [Line2D([0], [0], color='c', linewidth=2), Line2D([0], [0], color='m', linewidth=2)] hp_mean=hp_new.mean(axis=1) df_mean=df_new.mean(axis=1) fig, ax = plt.subplots(figsize=(12, 7)) df_mean.plot(color='c', linewidth=2, ax=ax) hp_mean.plot(color='m', linewidth=2, ax=ax) ax.legend(legend_elements, ['radar average', 'rain gauge average']) plt.ylabel("rain amount (in)") plt.title("Rain Gauge Average Versus Radar-Derived Rain Rate Average") plt.savefig('/home/amedendorp/Desktop/SAVUER/RadarVGauge/Rain_gauge_V_Radar_Avg', bbox_inches='tight') plt.plot() ###Output _____no_output_____ ###Markdown And now, combine the averages and the original lines on the same figure. ###Code legend_elements = [Line2D([0], [0], color='blue', linewidth=1), Line2D([0], [0], color='r', linewidth=1), Line2D([0], [0], color='k', linewidth=2), Line2D([0], [0], color='purple', linewidth=2)] hp_mean=hp_new.mean(axis=1) df_mean=df_new.mean(axis=1) fig, ax = plt.subplots(figsize=(12, 7)) df_mean.plot(color='k', linewidth=2, ax=ax) hp_mean.plot(color='purple', linewidth=2, ax=ax) plt.plot() for i in col2_names: hp_new.plot(x='time', y=i, ax=ax, color='red', label='radar', linewidth=.5, alpha=.6) for i in col_names: df_new.plot(x='time', y=i, ax=ax, color='blue', label='rain gauge', linewidth=.5, alpha=.6) ax.legend(legend_elements, ['radar', 'rain gauge', 'radar average', 'rain gauge average']) plt.ylabel("rain amount (in)") plt.title("Rain Gauge Totals Versus Radar-Derived Rain Rate") plt.savefig('/home/amedendorp/Desktop/SAVUER/RadarVGauge/Rain_gauge_V_Radar_Final', bbox_inches='tight') plt.show() ###Output _____no_output_____
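###Markdown As a small optional addition to the plots, the agreement between the two hourly averages can also be summarized with a single number, reusing the `df_mean` and `hp_mean` series computed above: ###Code
# Pearson correlation between the radar-derived and gauge-measured hourly averages.
# pandas aligns the two series on their timestamps; the result is NaN if they
# share no common timestamps.
print("Radar vs. gauge correlation:", df_mean.corr(hp_mean))
###Output _____no_output_____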
notebooks/Intro_to_jeepr_with_gprMax_data.ipynb
###Markdown Intro to `jeepr` with `gprMax` data`jeepr` is a set of utilities for handling GPR data, especially `gprMax` models and synthetics, and real data from USRadar instruments. ###Code import numpy as np import matplotlib.pyplot as plt % matplotlib inline import jeepr jeepr.__version__ ###Output _____no_output_____ ###Markdown Make `Scan` from a `gprMax` simulation `.out` file ###Code from jeepr import Scan g = Scan.from_gprmax('../tests/test_2D_merged.out') g.__dict__ g.plot() t0 = np.sqrt(2) / float(g.freq) h = g.crop(t=t0) h.plot() h.shape h.log ###Output _____no_output_____ ###Markdown Note, however, that the `t0` of the section has been reset to 0 ns. ###Code h.t0 ###Output _____no_output_____ ###Markdown Let's look at a spectrum; it looks quite different from real data. ###Code f, p = g.get_spectrum() plt.plot(f, p) ###Output _____no_output_____ ###Markdown Make `Model` from `gprMax` VTI file ###Code from jeepr import Model m = Model.from_gprMax('../tests/test_2D.in') m.plot() m.__dict__ ground = m.rx['position'][0] n = m.crop(z=ground) n.plot() ###Output _____no_output_____ ###Markdown Plot `Model` and `Scan` together in time domain ###Code n_time, _ = n.to_time(dt=5e-11) n_time.plot() fig = plt.figure(figsize=(16, 9)) ax0 = fig.add_subplot(111) ax0 = h.plot(ax=ax0) ax0 = n_time.plot(ax=ax0, alpha=0.5) plt.show() ###Output _____no_output_____
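###Markdown One optional extra, reusing the `f, p = g.get_spectrum()` output from earlier: plotting the spectrum on a decibel scale can make the usable bandwidth easier to judge. This is only a matplotlib sketch; the peak normalisation and the use of 20 * log10 (which assumes `p` is an amplitude rather than a power spectrum) are assumptions: ###Code
import numpy as np
import matplotlib.pyplot as plt

f, p = g.get_spectrum()

# Normalize to the peak so the maximum sits at 0 dB; the small epsilon avoids log(0)
p = np.asarray(p, dtype=float)
p_db = 20 * np.log10(p / p.max() + 1e-12)

plt.plot(f, p_db)
plt.xlabel('frequency')
plt.ylabel('relative amplitude [dB]')
plt.show()
###Output _____no_output_____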
License Plate Detection.ipynb
###Markdown Exploring images with pandas ###Code # Importing intial packages import numpy as np import pandas as pd import os import cv2 import imutils from matplotlib import pyplot as plt from skimage.filters import threshold_local os.getcwd() # Check content of current dir for dirname, _, filenames in os.walk('.'): print(dirname) # Load and display first image in '.\License Plates\images' with Matplotlib # Load as pixel array test_image = cv2.imread('.\License Plates\images\*.jpg') # Display pixel array as image plt.imshow(test_image) plt.show() # Since Jupyter layers image planes in BGR instead of RGB # it needs to be converted to RGB # This should also be made into a method test_image_rgb = cv2.cvtColor(test_image, cv2.COLOR_BGR2RGB) plt.imshow(test_image_rgb) plt.show() ###Output _____no_output_____ ###Markdown Try to isolate the licensplate into a new image ###Code # This "cleanup" should be in its own method # since it needs to be done in all images # Convert to grey scale - good for analysing gray_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY) gray_image_rgb = cv2.cvtColor(gray_image, cv2.COLOR_BGR2RGB) plt.imshow(gray_image_rgb) plt.show() # Blur to reduce noise - good for analysing contours gray_image_blurred = cv2.bilateralFilter(gray_image, 11, 17, 17) gray_image_blurred_rgb = cv2.cvtColor(gray_image_blurred, cv2.COLOR_BGR2RGB) plt.imshow(gray_image_blurred_rgb) plt.show() # Edge detection /w Canny Edge Method # OpenCv has it build in so we'll use that # This is how it works: # https://docs.opencv.org/master/da/d22/tutorial_py_canny.html test_image_edged_30_200_L1 = cv2.Canny(gray_image_blurred, 30, 200) img_to_show = cv2.cvtColor(test_image_edged_30_200_L1, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() test_image_edged_100_200_L1 = cv2.Canny(gray_image_blurred, 100, 200) img_to_show = cv2.cvtColor(test_image_edged_100_200_L1, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() test_image_edged_150_200_L1 = cv2.Canny(gray_image_blurred, 150, 200) img_to_show = cv2.cvtColor(test_image_edged_150_200_L1, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() ###Output _____no_output_____ ###Markdown Since images will be of varrying quality canny edge detection will we done with a rather low threshold for the hysteresis procedure, to ensure high realiability. (30, 200) ###Code # Look for contours contours = cv2.findContours(test_image_edged_30_200_L1, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) # Sort detected contours from big to small contours = imutils.grab_contours(contours) contours = sorted(contours, key = cv2.contourArea, reverse = True)[:10] screenCnt = None # Search for rectangular contour with a closed figure. 
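# Note: cv2.arcLength returns the contour perimeter, and cv2.approxPolyDP simplifies
# the contour with the Douglas-Peucker algorithm; the epsilon of 0.018 * perimeter
# controls how aggressively points are merged. A simplified contour with exactly
# four points is taken as the license plate candidate.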
for contour in contours: peri = cv2.arcLength(contour, True) approx = cv2.approxPolyDP(contour, 0.018 * peri, True) if len(approx) == 4: screenCnt = approx break # Mask everything except the license plate mask = np.zeros(test_image_edged_30_200_L1.shape, np.uint8) new_image = cv2.drawContours(mask, [screenCnt], 0 , 255, -1, ) new_image = cv2.bitwise_and(gray_image_blurred, gray_image_blurred, mask=mask) img_to_show = cv2.cvtColor(new_image, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() # Create new cropped image only with the licensplate (x, y) = np.where(mask == 255) (topx, topy) = (np.min(x), np.min(y)) (bottomx, bottomy) = (np.max(x), np.max(y)) cropped = test_image[topx:bottomx+1, topy:bottomy+1] img_to_show = cv2.cvtColor(cropped, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() # extract the Value component from the HSV color space and apply adaptive thresholding # to reveal the characters on the license plate V = cv2.split(cv2.cvtColor(cropped, cv2.COLOR_BGR2HSV))[2] T = cv2.threshold(V, 180, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] thresh = (V > T).astype("uint8") * 255 thresh = cv2.bitwise_not(thresh) # resize the license plate region to a canonical size cropped = imutils.resize(cropped, width=400) thresh = imutils.resize(thresh, width=400) img_to_show = cv2.cvtColor(T, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() ## contours sorting method def sort_contours(contours, reverse=False): i = 0 bounding_boxes = [cv2.boundingRect(contour) for contour in contours] (contours, bounding_boxes) = zip(*sorted(zip(contours, bounding_boxes), key=lambda b: b[1][i], reverse=reverse)) return contours contours, _ = cv2.findContours(T, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) width, height = T.shape dimension = (height, width) cropped = cv2.resize(cropped, dimension ,interpolation = cv2.INTER_AREA) test_roi = cropped.copy() crop_characters = [] digit_w, digit_h = 30, 60 for contour in sort_contours(contours): (x, y, w, h) = cv2.boundingRect(contour) aspect_ratio = w / float(h) solidity = cv2.contourArea(contour) / float(w * h) height_ratio = h / float(cropped.shape[0]) keep_aspect_ratio = aspect_ratio < 1.0 keep_solidity = solidity > 0.15 keep_height = height_ratio > 0.2 and height_ratio < 0.95 if keep_aspect_ratio and keep_solidity and keep_height: test_roi = cv2.rectangle(test_roi, (x, y), (x + w, y + h), (0, 255, 0), 1) curr_num = T[y:y+h,x:x+w] curr_num = cv2.resize(curr_num, dsize=(digit_w, digit_h)) _, curr_num = cv2.threshold(curr_num, 220, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) crop_characters.append(curr_num) print("Detect {} letters...".format(len(crop_characters))) img_to_show = cv2.cvtColor(test_roi, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() for i in range(len(crop_characters)): img_to_show = cv2.cvtColor(crop_characters[i], cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() ###Output _____no_output_____ ###Markdown Image processing for our datasetSiden ovenstående billeder ikke passer særligt godt til vores datasæt, der er brugt til at træne modellen, skal dataet omformateres en smule ###Code # First try to resize without changing any proportions img_to_show = cv2.cvtColor(crop_characters[3], cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() print('Resize to 28x28') resized_char = cv2.resize(crop_characters[3], dsize=(28, 28)) img_to_show = cv2.cvtColor(resized_char, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() ###Output _____no_output_____ ###Markdown Resultatet ser lovende ud, men lad os se om det ikke kan gøres bedre 
###Code new_char = cv2.copyMakeBorder(crop_characters[3], 4, 4, 19, 19, cv2.BORDER_CONSTANT, None, [0, 0, 0]) img_to_show = cv2.cvtColor(new_char, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() resized_char = cv2.resize(new_char, dsize=(28, 28)) img_to_show = cv2.cvtColor(resized_char, cv2.COLOR_BGR2RGB) plt.imshow(img_to_show) plt.show() ###Output _____no_output_____
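###Markdown To feed every segmented character to a classifier trained on 28x28 images, the pad-then-resize step above can be applied to the whole `crop_characters` list and stacked into one batch. A minimal sketch (the helper name and the [0, 1] scaling with a trailing channel axis are illustrative assumptions): ###Code
import numpy as np
import cv2

def to_model_input(char_img, pad_y=4, pad_x=19, size=28):
    # Pad the 30x60 character crop toward square proportions, as above,
    # then resize to the assumed 28x28 classifier input size.
    padded = cv2.copyMakeBorder(char_img, pad_y, pad_y, pad_x, pad_x,
                                cv2.BORDER_CONSTANT, None, [0, 0, 0])
    resized = cv2.resize(padded, dsize=(size, size))
    # Scale pixel values to [0, 1] and add a channel axis
    return resized.astype("float32")[..., np.newaxis] / 255.0

# Shape (number_of_characters, 28, 28, 1)
batch = np.stack([to_model_input(c) for c in crop_characters])
print(batch.shape)
###Output _____no_output_____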
Oluwatoyin Fashua.ipynb
###Markdown Importing libraries ###Code import numpy as np from numpy import array as ary from numpy import mean import warnings warnings.simplefilter('ignore', FutureWarning) dir(np.array) ###Output _____no_output_____ ###Markdown Task from file 8 1. Write a NumPy program to test whether none of the elements of a given array is zero. ###Code arr1 = np.array([2,50,0,6,10,0,21,35,18]) for i,e in enumerate(arr1): if e==0: print('Zero value found is at Index',i) else: print(i," is not zero") ###Output 0 is not zero 1 is not zero Zero value found is at Index 2 3 is not zero 4 is not zero Zero value found is at Index 5 6 is not zero 7 is not zero 8 is not zero ###Markdown 2. Write a NumPy program to test whether any of the elements of a given array is non-zero. ###Code arr2 = np.array([12,0,1,0,2,3,6,9,0,1,0]) for i,e in enumerate(arr2): if e != 0: print(e, 'is a Non-Zero element') else: print(i, 'Index a Zero ') ###Output 12 is a Non-Zero element 1 Index a Zero 1 is a Non-Zero element 3 Index a Zero 2 is a Non-Zero element 3 is a Non-Zero element 6 is a Non-Zero element 9 is a Non-Zero element 8 Index a Zero 1 is a Non-Zero element 10 Index a Zero ###Markdown Task from file 9 1. Write a NumPy program to test a given array element-wise for finiteness (not infinity or not a Number). ###Code arr3 = np.array([1, 0, np.nan, np.inf]) print("Original array") print(arr3) print("Test a given array element-wise for finiteness :") print(np.isfinite(arr3)) ###Output Original array [ 1. 0. nan inf] Test a given array element-wise for finiteness : [ True True False False] ###Markdown 2. Write a NumPy program to test element-wise for positive or negative infinity. ###Code arr4 = np.array([1, 0, np.nan, np.inf]) print("Original array") print(arr4) print("Test element-wise for positive or negative infinity:") print(np.isinf(arr4)) ###Output Original array [ 1. 0. nan inf] Test element-wise for positive or negative infinity: [False False False True] ###Markdown Tasks from file 10 1. Write a NumPy program to test element-wise for NaN of a given array. ###Code arr5 = np.array([1, 0, np.nan, np.inf]) print("Original array") print(arr5) print("Test element-wise for NaN:") print(np.isnan(arr5)) ###Output Original array [ 1. 0. nan inf] Test element-wise for NaN: [False False True False] ###Markdown 2. Write a NumPy program to test element-wise for complex number, real number of a given array. Also test whether a given number is a scalar type or not. ###Code arr6 = np.array([1+1j, 1+0j, 4.5, 3, 2, 2j]) print("Original array") print(arr6) print("Checking for complex number:") print(np.iscomplex(arr6)) print("Checking for real number:") print(np.isreal(arr6)) print("Checking for scalar type:") print(np.isscalar(3.1)) print(np.isscalar([3.1])) ###Output Original array [1. +1.j 1. +0.j 4.5+0.j 3. +0.j 2. +0.j 0. +2.j] Checking for complex number: [ True False False False False True] Checking for real number: [False True True True True False] Checking for scalar type: True False ###Markdown Task from File 11 1. Write a NumPy program to test whether two arrays are element-wise equal within a tolerance. 
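###Markdown For reference before the solution in the next cell: `np.allclose` treats a pair of elements as equal when |a - b| <= atol + rtol * |b|, with defaults rtol=1e-05 and atol=1e-08, and returns a single boolean for the whole array. A tiny sketch of that rule applied by hand: ###Code
import numpy as np

a = np.array([1e10, 1e-7])
b = np.array([1.00001e10, 1e-8])

# Element-wise version of the tolerance rule used by np.allclose / np.isclose
manual = np.abs(a - b) <= (1e-08 + 1e-05 * np.abs(b))
print(manual, np.isclose(a, b))
###Output _____no_output_____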
###Code print("Test if two arrays are element-wise equal within a tolerance:") print(np.allclose([1e10,1e-7], [1.00001e10,1e-8])) print(np.allclose([1e10,1e-8], [1.00001e10,1e-9])) print(np.allclose([1e10,1e-8], [1.0001e10,1e-9])) print(np.allclose([1.0, np.nan], [1.0, np.nan])) print(np.allclose([1.0, np.nan], [1.0, np.nan], equal_nan=True)) ###Output Test if two arrays are element-wise equal within a tolerance: False True False False True ###Markdown 2. Write a NumPy program to create an element-wise comparison (greater, greater_equal, less and less_equal) of two given arrays. ###Code k = np.array([3, 5]) l = np.array([2, 5]) print("Original numbers:") print(k) print(l) print("Comparison - greater") print(np.greater(k, l)) print("Comparison - greater_equal") print(np.greater_equal(k, l)) print("Comparison - less") print(np.less(k, l)) print("Comparison - less_equal") print(np.less_equal(k, l)) ###Output Original numbers: [3 5] [2 5] Comparison - greater [ True False] Comparison - greater_equal [ True True] Comparison - less [False False] Comparison - less_equal [False True] ###Markdown Task from File 12 1. Write a NumPy program to create an element-wise comparison (equal, equal within a tolerance) of two given arrays ###Code k = np.array([72, 79, 85, 90, 150, -135, 120, -10, 60, 100]) m = np.array([72, 79, 85, 90, 150, -135, 120, -10, 60, 100.000001]) print("Original numbers:") print(k) print(m) print("Comparison - equal:") print(np.equal(k, m)) print("Comparison - equal within a tolerance:") print(np.allclose(k, m)) ###Output Original numbers: [ 72 79 85 90 150 -135 120 -10 60 100] [ 72. 79. 85. 90. 150. -135. 120. -10. 60. 100.000001] Comparison - equal: [ True True True True True True True True True False] Comparison - equal within a tolerance: True ###Markdown 2. Write a NumPy program to create an array with the values 1, 7, 13, 105 and determine the size of the memory occupied by the array. ###Code Arr9 = np.array([1, 7, 13, 105]) print("Original array:") print(Arr9) print("Size of the memory occupied by the said array:") print("%d bytes" % (Arr9.size * Arr9.itemsize)) ###Output Original array: [ 1 7 13 105] Size of the memory occupied by the said array: 16 bytes
2 intro-to-pytorch/.ipynb_checkpoints/Part 6 - Saving and Loading Models-checkpoint.ipynb
###Markdown Saving and Loading ModelsIn this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms import helper import fc_model # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) ###Output _____no_output_____ ###Markdown Here we can see one of the images. ###Code image, label = next(iter(trainloader)) helper.imshow(image[0,:]); ###Output _____no_output_____ ###Markdown Train a networkTo make things more concise here, I moved the model architecture and training code from the last part to a file called `fc_model`. Importing this, we can easily create a fully-connected network with `fc_model.Network`, and train the network using `fc_model.train`. I'll use this model (once it's trained) to demonstrate how we can save and load models. ###Code # Create the network, define the criterion and optimizer model = fc_model.Network(784, 10, [512, 256, 128]) criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2) ###Output Epoch: 1/2.. Training Loss: 1.706.. Test Loss: 0.951.. Test Accuracy: 0.663 Epoch: 1/2.. Training Loss: 1.024.. Test Loss: 0.768.. Test Accuracy: 0.715 Epoch: 1/2.. Training Loss: 0.863.. Test Loss: 0.687.. Test Accuracy: 0.736 Epoch: 1/2.. Training Loss: 0.789.. Test Loss: 0.642.. Test Accuracy: 0.758 Epoch: 1/2.. Training Loss: 0.784.. Test Loss: 0.663.. Test Accuracy: 0.752 Epoch: 1/2.. Training Loss: 0.739.. Test Loss: 0.593.. Test Accuracy: 0.784 Epoch: 1/2.. Training Loss: 0.706.. Test Loss: 0.567.. Test Accuracy: 0.784 Epoch: 1/2.. Training Loss: 0.692.. Test Loss: 0.566.. Test Accuracy: 0.784 Epoch: 1/2.. Training Loss: 0.649.. Test Loss: 0.558.. Test Accuracy: 0.790 Epoch: 1/2.. Training Loss: 0.646.. Test Loss: 0.544.. Test Accuracy: 0.795 Epoch: 1/2.. Training Loss: 0.633.. Test Loss: 0.525.. Test Accuracy: 0.805 Epoch: 1/2.. Training Loss: 0.617.. Test Loss: 0.547.. Test Accuracy: 0.802 Epoch: 1/2.. Training Loss: 0.614.. Test Loss: 0.519.. Test Accuracy: 0.809 Epoch: 1/2.. Training Loss: 0.649.. Test Loss: 0.512.. Test Accuracy: 0.812 Epoch: 1/2.. Training Loss: 0.604.. Test Loss: 0.507.. Test Accuracy: 0.807 Epoch: 1/2.. Training Loss: 0.572.. Test Loss: 0.508.. Test Accuracy: 0.815 Epoch: 1/2.. Training Loss: 0.620.. Test Loss: 0.514.. Test Accuracy: 0.806 Epoch: 1/2.. Training Loss: 0.553.. Test Loss: 0.493.. Test Accuracy: 0.821 Epoch: 1/2.. Training Loss: 0.558.. Test Loss: 0.483.. Test Accuracy: 0.822 Epoch: 1/2.. Training Loss: 0.579.. Test Loss: 0.509.. Test Accuracy: 0.815 Epoch: 1/2.. Training Loss: 0.559.. Test Loss: 0.494.. 
Test Accuracy: 0.823 Epoch: 1/2.. Training Loss: 0.566.. Test Loss: 0.486.. Test Accuracy: 0.827 Epoch: 1/2.. Training Loss: 0.548.. Test Loss: 0.482.. Test Accuracy: 0.820 Epoch: 2/2.. Training Loss: 0.542.. Test Loss: 0.483.. Test Accuracy: 0.828 Epoch: 2/2.. Training Loss: 0.569.. Test Loss: 0.481.. Test Accuracy: 0.828 Epoch: 2/2.. Training Loss: 0.544.. Test Loss: 0.472.. Test Accuracy: 0.824 Epoch: 2/2.. Training Loss: 0.548.. Test Loss: 0.485.. Test Accuracy: 0.823 Epoch: 2/2.. Training Loss: 0.549.. Test Loss: 0.465.. Test Accuracy: 0.833 Epoch: 2/2.. Training Loss: 0.540.. Test Loss: 0.467.. Test Accuracy: 0.834 Epoch: 2/2.. Training Loss: 0.538.. Test Loss: 0.460.. Test Accuracy: 0.830 Epoch: 2/2.. Training Loss: 0.573.. Test Loss: 0.467.. Test Accuracy: 0.829 Epoch: 2/2.. Training Loss: 0.528.. Test Loss: 0.464.. Test Accuracy: 0.831 Epoch: 2/2.. Training Loss: 0.542.. Test Loss: 0.461.. Test Accuracy: 0.831 Epoch: 2/2.. Training Loss: 0.489.. Test Loss: 0.476.. Test Accuracy: 0.824 Epoch: 2/2.. Training Loss: 0.524.. Test Loss: 0.460.. Test Accuracy: 0.835 Epoch: 2/2.. Training Loss: 0.548.. Test Loss: 0.457.. Test Accuracy: 0.834 Epoch: 2/2.. Training Loss: 0.556.. Test Loss: 0.456.. Test Accuracy: 0.833 Epoch: 2/2.. Training Loss: 0.531.. Test Loss: 0.451.. Test Accuracy: 0.833 Epoch: 2/2.. Training Loss: 0.532.. Test Loss: 0.455.. Test Accuracy: 0.834 Epoch: 2/2.. Training Loss: 0.515.. Test Loss: 0.439.. Test Accuracy: 0.836 Epoch: 2/2.. Training Loss: 0.499.. Test Loss: 0.446.. Test Accuracy: 0.839 Epoch: 2/2.. Training Loss: 0.498.. Test Loss: 0.446.. Test Accuracy: 0.836 Epoch: 2/2.. Training Loss: 0.535.. Test Loss: 0.451.. Test Accuracy: 0.835 Epoch: 2/2.. Training Loss: 0.518.. Test Loss: 0.439.. Test Accuracy: 0.839 Epoch: 2/2.. Training Loss: 0.497.. Test Loss: 0.456.. Test Accuracy: 0.836 Epoch: 2/2.. Training Loss: 0.521.. Test Loss: 0.438.. Test Accuracy: 0.843 ###Markdown Saving and loading networksAs you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.The parameters for PyTorch networks are stored in a model's `state_dict`. We can see the state dict contains the weight and bias matrices for each of our layers. ###Code print("Our model: \n\n", model, '\n') print("The state dict keys: \n\n", model.state_dict().keys()) ###Output Our model: Network( (hidden_layers): ModuleList( (0): Linear(in_features=784, out_features=512, bias=True) (1): Linear(in_features=512, out_features=256, bias=True) (2): Linear(in_features=256, out_features=128, bias=True) ) (output): Linear(in_features=128, out_features=10, bias=True) (dropout): Dropout(p=0.5) ) The state dict keys: odict_keys(['hidden_layers.0.weight', 'hidden_layers.0.bias', 'hidden_layers.1.weight', 'hidden_layers.1.bias', 'hidden_layers.2.weight', 'hidden_layers.2.bias', 'output.weight', 'output.bias']) ###Markdown The simplest thing to do is simply save the state dict with `torch.save`. For example, we can save it to a file `'checkpoint.pth'`. ###Code torch.save(model.state_dict(), 'checkpoint.pth') ###Output _____no_output_____ ###Markdown Then we can load the state dict with `torch.load`. 
###Code state_dict = torch.load('checkpoint.pth') print(state_dict.keys()) ###Output odict_keys(['hidden_layers.0.weight', 'hidden_layers.0.bias', 'hidden_layers.1.weight', 'hidden_layers.1.bias', 'hidden_layers.2.weight', 'hidden_layers.2.bias', 'output.weight', 'output.bias']) ###Markdown And to load the state dict in to the network, you do `model.load_state_dict(state_dict)`. ###Code model.load_state_dict(state_dict) ###Output _____no_output_____ ###Markdown Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails. ###Code # Try this model = fc_model.Network(784, 10, [400, 200, 100]) # This will throw an error because the tensor sizes are wrong! model.load_state_dict(state_dict) ###Output _____no_output_____ ###Markdown This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to compeletely rebuild the model. ###Code checkpoint = {'input_size': 784, 'output_size': 10, 'hidden_layers': [each.out_features for each in model.hidden_layers], 'state_dict': model.state_dict()} torch.save(checkpoint, 'checkpoint.pth') ###Output _____no_output_____ ###Markdown Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints. ###Code def load_checkpoint(filepath): checkpoint = torch.load(filepath) model = fc_model.Network(checkpoint['input_size'], checkpoint['output_size'], checkpoint['hidden_layers']) model.load_state_dict(checkpoint['state_dict']) return model model = load_checkpoint('checkpoint.pth') print(model) ###Output Network( (hidden_layers): ModuleList( (0): Linear(in_features=784, out_features=400, bias=True) (1): Linear(in_features=400, out_features=200, bias=True) (2): Linear(in_features=200, out_features=100, bias=True) ) (output): Linear(in_features=100, out_features=10, bias=True) (dropout): Dropout(p=0.5) )
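###Markdown A typical next step after rebuilding the model (a usage sketch, not part of the original notebook): switch to evaluation mode so dropout is disabled, and run inference without tracking gradients: ###Code
model = load_checkpoint('checkpoint.pth')
model.eval()

# Classify one batch of test images; the network outputs log-probabilities
# because it was trained with NLLLoss.
images, labels = next(iter(testloader))
with torch.no_grad():
    log_ps = model(images.view(images.shape[0], -1))
ps = torch.exp(log_ps)
print(ps.shape)  # (batch_size, 10) class probabilities
###Output _____no_output_____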
examples/tutorials/translations/português/Parte 04 - Aprendizado Federado por meio de Agregador Confiável.ipynb
###Markdown Parte 4: Aprendizado Federado usando Média de Modelos**Recapitulando:** Na Parte 2 deste tutorial, nós treinamos um modelo usando uma versão bem simples do Aprendizado Federado. Isso exigia que cada proprietário dos dados confiasse no proprietário do modelo para poder ver seus gradientes.**Descrição:** Neste tutorial, mostraremos como usar as ferramentas avançadas de agregação da Parte 3 para permitir que os pesos sejam agregados por um \"worker seguro\" confiável antes que o modelo resultante final seja enviado de volta ao proprietário do modelo (nesse caso, nós).Dessa maneira, somente o *worker* seguro pode ver de quem são os pesos. Talvez possamos saber quais partes do modelo foram alteradas, mas NÃO saberemos qual *worker* (bob ou alice) fez tal alteração, o que cria uma camada de privacidade.Autores: - Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask) - Jason Mancuso - Twitter: [@jvmancuso](https://twitter.com/jvmancuso) Tradução:- Jeferson Silva - Twitter: [@jefersonnpn](https://twitter.com/jefersonnpn) ###Code import torch import syft as sy import copy hook = sy.TorchHook(torch) from torch import nn, optim ###Output _____no_output_____ ###Markdown Passo 1: Criar Proprietários de DadosPrimeiro, vamos criar dois proprietários de dados (Bob e Alice), cada um com alguns dados. Também vamos inicializar uma máquina segura chamada "secure_worker". Na prática, pode ser um hardware seguro (como o SGX da Intel) ou simplesmente um intermediário confiável. ###Code # criando alguns workers bob = sy.VirtualWorker(hook, id="bob") alice = sy.VirtualWorker(hook, id="alice") secure_worker = sy.VirtualWorker(hook, id="secure_worker") # Nosso Dataset de exemplo data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]], requires_grad=True) target = torch.tensor([[0],[0],[1],[1.]], requires_grad=True) # obtenha apontadores para os dados de treinamento de cada worker # enviando alguns dados de treinamento para bob e alice bobs_data = data[0:2].send(bob) bobs_target = target[0:2].send(bob) alices_data = data[2:].send(alice) alices_target = target[2:].send(alice) ###Output _____no_output_____ ###Markdown Passo 2: Criar Nosso ModeloNeste exemplo, vamos treinar com um modelo linear simples. Podemos inicializá-lo, normalmente, usando o construtor `nn.Linear` do PyTorch. ###Code # Inicialize o modelo model = nn.Linear(2,1) ###Output _____no_output_____ ###Markdown Passo 3: Envie uma Cópia do Modelo para Alice e BobEm seguida, precisamos enviar uma cópia do modelo atual para Alice e Bob para que eles possam executar as etapas de aprendizado em seus próprios conjuntos de dados. ###Code bobs_model = model.copy().send(bob) alices_model = model.copy().send(alice) bobs_opt = optim.SGD(params=bobs_model.parameters(),lr=0.1) alices_opt = optim.SGD(params=alices_model.parameters(),lr=0.1) ###Output _____no_output_____ ###Markdown Passo 4: Treine os Modelos de Alice e Bob (em paralelo)De forma convencional, na aprendizagem federada via *Secure Averaging* (Média Segura), cada proprietário dos dados primeiro treina seu modelo em várias iterações localmente antes que seja feito o cálculo da média dos modelos. 
###Code for i in range(10): # Treina o Modelo de Bob bobs_opt.zero_grad() bobs_pred = bobs_model(bobs_data) bobs_loss = ((bobs_pred - bobs_target)**2).sum() bobs_loss.backward() bobs_opt.step() bobs_loss = bobs_loss.get().data # Treina o Modelo de Alice alices_opt.zero_grad() alices_pred = alices_model(alices_data) alices_loss = ((alices_pred - alices_target)**2).sum() alices_loss.backward() alices_opt.step() alices_loss = alices_loss.get().data print("Bob:" + str(bobs_loss) + " Alice:" + str(alices_loss)) ###Output _____no_output_____ ###Markdown Passo 5: Enviar Ambos os Modelos Atualizados para um Worker SeguroAgora que cada proprietário de dados possui um modelo parcialmente treinado, é hora de calcular a média entre eles de maneira segura. Conseguimos isso instruindo Alice e Bob a enviar seus modelos para o servidor seguro (confiável).Observe que esse uso de nossa API significa que cada modelo é enviado DIRETAMENTE ao `secure_worker`. Mas, nunca vemos isso. ###Code alices_model.move(secure_worker) bobs_model.move(secure_worker) ###Output _____no_output_____ ###Markdown Passo 6: Calcular a Média dos Modelos Finalmente, o último passo é calcular a média dos modelos treinados por Bob e Alice e, em seguida, usá-lo para definir os valores dos pesos do nosso "modelo" global. ###Code with torch.no_grad(): model.weight.set_(((alices_model.weight.data + bobs_model.weight.data) / 2).get()) model.bias.set_(((alices_model.bias.data + bobs_model.bias.data) / 2).get()) ###Output _____no_output_____ ###Markdown Apenas Repita!E agora só precisamos repetir esse processo várias vezes! ###Code iterations = 10 worker_iters = 5 for a_iter in range(iterations): bobs_model = model.copy().send(bob) alices_model = model.copy().send(alice) bobs_opt = optim.SGD(params=bobs_model.parameters(),lr=0.1) alices_opt = optim.SGD(params=alices_model.parameters(),lr=0.1) for wi in range(worker_iters): # Treina o Modelo de Bob bobs_opt.zero_grad() bobs_pred = bobs_model(bobs_data) bobs_loss = ((bobs_pred - bobs_target)**2).sum() bobs_loss.backward() bobs_opt.step() bobs_loss = bobs_loss.get().data # Treina o Modelo de Alice alices_opt.zero_grad() alices_pred = alices_model(alices_data) alices_loss = ((alices_pred - alices_target)**2).sum() alices_loss.backward() alices_opt.step() alices_loss = alices_loss.get().data alices_model.move(secure_worker) bobs_model.move(secure_worker) with torch.no_grad(): model.weight.set_(((alices_model.weight.data + bobs_model.weight.data) / 2).get()) model.bias.set_(((alices_model.bias.data + bobs_model.bias.data) / 2).get()) print("Bob:" + str(bobs_loss) + " Alice:" + str(alices_loss)) ###Output _____no_output_____ ###Markdown Por fim, queremos garantir que nosso modelo resultante tenha aprendido corretamente, para que possamos avaliá-lo em um conjunto de dados de teste. Neste problema simples, usamos os dados originais, mas, na prática, queremos usar novos dados para entender o quão generalizado o modelo é para exemplos não vistos. ###Code preds = model(data) loss = ((preds - target) ** 2).sum() print(preds) print(target) print(loss.data) ###Output _____no_output_____
site/ja/tutorials/generative/deepdream.ipynb
###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown DeepDream TensorFlow.org で実行 Google Colab で実行 GitHubでソースを表示 ノートブックをダウンロード このチュートリアルには、Alexander Mordvintsev によるこちらの[ブログ記事](https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html)で説明された DeepDream の最小限の実装が含まれます。DeepDream はニューラルネットワークが学習したパターンを視覚化する実験です。子供が雲を見てなんらかの形に解釈しようとするのと同様に、DeepDream は過解釈を行って、画像に見いだせるパターンの精度を強化します。ネットワークを通じて画像を転送し、特定のレイヤーのアクティベーションに関して画像の勾配を計算することで行われています。画像は、これらのアクティベーションを変更しながら、ネットワークに見られるパターンを強化して、夢の中のようなイメージを作り出します。このプロセスは、[InceptionNet](https://arxiv.org/pdf/1409.4842.pdf) と、[映画](https://en.wikipedia.org/wiki/Inception)「インセプション」の因んで、「インセプショニズム」と呼ばれています。では、ニューラルネットワークに「夢を見させて」、画像に見いだすシュールなパターンを強化する方法を実演することにしましょう。![Dogception](https://www.tensorflow.org/tutorials/generative/images/dogception.png) ###Code import tensorflow as tf import numpy as np import matplotlib as mpl import IPython.display as display import PIL.Image from tensorflow.keras.preprocessing import image ###Output _____no_output_____ ###Markdown ドリーム化する画像を選択する このチュートリアルでは、[ラブラドール](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg)の画像を使用しましょう。 ###Code url = 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg' # Download an image and read it into a NumPy array. def download(url, max_dim=None): name = url.split('/')[-1] image_path = tf.keras.utils.get_file(name, origin=url) img = PIL.Image.open(image_path) if max_dim: img.thumbnail((max_dim, max_dim)) return np.array(img) # Normalize an image def deprocess(img): img = 255*(img + 1.0)/2.0 return tf.cast(img, tf.uint8) # Display an image def show(img): display.display(PIL.Image.fromarray(np.array(img))) # Downsizing the image makes it easier to work with. 
original_img = download(url, max_dim=500) show(original_img) display.display(display.HTML('Image cc-by: <a "href=https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>')) ###Output _____no_output_____ ###Markdown 特徴抽出モデルを準備する 事前トレーニング済みの画像分類モデルをダウンロードして準備します。もともと DeepDream で使用されたモデルに似た [InceptionV3](https://keras.io/applications/inceptionv3) を使用します。任意の[事前トレーニング済みのモデル](https://keras.io/applications/models-for-image-classification-with-weights-trained-on-imagenet)を使用することができますが、レイヤー名を変更する場合は、以下のように調整する必要があります。 ###Code base_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet') ###Output _____no_output_____ ###Markdown DeepDream の考え方は、レイヤーを選択して、画像がレイヤーを徐々に「刺激する」ように「損失」を最大化することです。追加する特徴量の複雑さは、あなたが選択するレイヤーによって異なり、レイヤーが低ければストロークや単純なパターンを生成し、レイヤーが深くなるほど、画像または画像全体の特徴がより洗練されることになります。 InceptionV3 アーキテクチャは非常に大型です(モデルアーキテクチャのグラフについては、TensorFlow の [research リポジトリ](https://github.com/tensorflow/models/tree/master/research/inception)をご覧ください)。DeepDream では、対象のレイヤーは畳み込みが連結されている場所です。こういったレイヤーは InceptionV3 には 11 個あり、'mixed0' から 'mixed10' の名前が付けられています。異なるレイヤーを使用すると、異なった夢のような画像が生成されます。レイヤーが深くなるほどより高度な特徴(目や顔など)に対応し、浅いほどよりシンプルな特徴(エッジ、形状、テクスチャなど)に対応します。以下で選択するレイヤーを自由に試してみてください。ただし、レイヤーが深くなるほど(インデックスが高いレイヤー)、勾配の計算がより深くなるため、トレーニングに時間がかかることに注意してください。 ###Code # Maximize the activations of these layers names = ['mixed3', 'mixed5'] layers = [base_model.get_layer(name).output for name in names] # Create the feature extraction model dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers) ###Output _____no_output_____ ###Markdown 損失を計算する損失は、選択されたレイヤーのアクティベーションの和です。損失はレイヤーごとに正規化されるため、より大きなレイヤーからの貢献は小さなレイヤーを上回らないようになっています。通常、損失は、勾配降下法で最小化する量ですが、DeepDream では、勾配上昇法によってこの損失を最大化します。 ###Code def calc_loss(img, model): # Pass forward the image through the model to retrieve the activations. # Converts the image into a batch of size 1. img_batch = tf.expand_dims(img, axis=0) layer_activations = model(img_batch) if len(layer_activations) == 1: layer_activations = [layer_activations] losses = [] for act in layer_activations: loss = tf.math.reduce_mean(act) losses.append(loss) return tf.reduce_sum(losses) ###Output _____no_output_____ ###Markdown 勾配上昇法選択したレイヤーの損失を計算したら、後は画像に関して勾配を計算し、それを元の画像に追加するだけです。画像に勾配を追加すると、ネットワークが見るパターンの精度が上がります。各ステップで、ネットワークの特定のレイヤーのアクティベーションを徐々に刺激する画像を作成することになります。これを行うメソッドは、パフォーマンスを得られるように `tf.function` でラッピングされます。`input_signature` を使用するため、さまざまな画像サイズまたは `steps`/`step_size` 値で関数が再トレースされないようになっています。詳細は、[具象関数ガイド](../../guide/concrete_function.ipynb)をご覧ください。 ###Code class DeepDream(tf.Module): def __init__(self, model): self.model = model @tf.function( input_signature=( tf.TensorSpec(shape=[None,None,3], dtype=tf.float32), tf.TensorSpec(shape=[], dtype=tf.int32), tf.TensorSpec(shape=[], dtype=tf.float32),) ) def __call__(self, img, steps, step_size): print("Tracing") loss = tf.constant(0.0) for n in tf.range(steps): with tf.GradientTape() as tape: # This needs gradients relative to `img` # `GradientTape` only watches `tf.Variable`s by default tape.watch(img) loss = calc_loss(img, self.model) # Calculate the gradient of the loss with respect to the pixels of the input image. gradients = tape.gradient(loss, img) # Normalize the gradients. gradients /= tf.math.reduce_std(gradients) + 1e-8 # In gradient ascent, the "loss" is maximized so that the input image increasingly "excites" the layers. # You can update the image by directly adding the gradients (because they're the same shape!) 
img = img + gradients*step_size img = tf.clip_by_value(img, -1, 1) return loss, img deepdream = DeepDream(dream_model) ###Output _____no_output_____ ###Markdown メインのループ ###Code def run_deep_dream_simple(img, steps=100, step_size=0.01): # Convert from uint8 to the range expected by the model. img = tf.keras.applications.inception_v3.preprocess_input(img) img = tf.convert_to_tensor(img) step_size = tf.convert_to_tensor(step_size) steps_remaining = steps step = 0 while steps_remaining: if steps_remaining>100: run_steps = tf.constant(100) else: run_steps = tf.constant(steps_remaining) steps_remaining -= run_steps step += run_steps loss, img = deepdream(img, run_steps, tf.constant(step_size)) display.clear_output(wait=True) show(deprocess(img)) print ("Step {}, loss {}".format(step, loss)) result = deprocess(img) display.clear_output(wait=True) show(result) return result dream_img = run_deep_dream_simple(img=original_img, steps=100, step_size=0.01) ###Output _____no_output_____ ###Markdown オクターブを実行するここまでで非常に素晴らしいものではありますが、この最初の試行にはいくつかの問題があります。1. 出力にノイズがある(`tf.image.total_variation` 損失で解消可能)。2. 画像解像度が低い。3. パターンが同じ粒度で発生しているように見える。上記のすべての問題を解決するには、1 つのアプローチとして、異なるスケールで勾配上昇法を適用することが挙げられます。こうすれば、より小さなスケールで生成されたパターンをより高いスケールのパターンに統合して、追加の詳細で満たすことができます。これを行うには、上述の勾配上昇法を実行してから、画像のサイズを増加し(これをオクターブと呼びます)、このプロセスを複数のオクターブで繰り返します。 ###Code import time start = time.time() OCTAVE_SCALE = 1.30 img = tf.constant(np.array(original_img)) base_shape = tf.shape(img)[:-1] float_base_shape = tf.cast(base_shape, tf.float32) for n in range(-2, 3): new_shape = tf.cast(float_base_shape*(OCTAVE_SCALE**n), tf.int32) img = tf.image.resize(img, new_shape).numpy() img = run_deep_dream_simple(img=img, steps=50, step_size=0.01) display.clear_output(wait=True) img = tf.image.resize(img, base_shape) img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8) show(img) end = time.time() end-start ###Output _____no_output_____ ###Markdown オプション: タイルでスケールアップする画像サイズが大きくなるにつれ、勾配計算の実行に必要な時間とメモリ量も高まるということに注意する必要があります。上記のオクターブ実装は、非常に大きな画像や多数のオクターブでは機能しません。この問題を回避するには、画像をタイルに分割して、各タイルに対して勾配を計算することができます。それぞれのタイル計算を行う前に画像にランダムシフトを適用すると、タイルの継ぎ目が現れなくなります。ランダムシフトの実装から始めましょう。 ###Code def random_roll(img, maxroll): # Randomly shift the image to avoid tiled boundaries. shift = tf.random.uniform(shape=[2], minval=-maxroll, maxval=maxroll, dtype=tf.int32) img_rolled = tf.roll(img, shift=shift, axis=[0,1]) return shift, img_rolled shift, img_rolled = random_roll(np.array(original_img), 512) show(img_rolled) ###Output _____no_output_____ ###Markdown 以下は、前に定義した `deepdream` 関数のタイルバージョンです。 ###Code class TiledGradients(tf.Module): def __init__(self, model): self.model = model @tf.function( input_signature=( tf.TensorSpec(shape=[None,None,3], dtype=tf.float32), tf.TensorSpec(shape=[], dtype=tf.int32),) ) def __call__(self, img, tile_size=512): shift, img_rolled = random_roll(img, tile_size) # Initialize the image gradients to zero. gradients = tf.zeros_like(img_rolled) # Skip the last tile, unless there's only one tile. xs = tf.range(0, img_rolled.shape[0], tile_size)[:-1] if not tf.cast(len(xs), bool): xs = tf.constant([0]) ys = tf.range(0, img_rolled.shape[1], tile_size)[:-1] if not tf.cast(len(ys), bool): ys = tf.constant([0]) for x in xs: for y in ys: # Calculate the gradients for this tile. with tf.GradientTape() as tape: # This needs gradients relative to `img_rolled`. # `GradientTape` only watches `tf.Variable`s by default. tape.watch(img_rolled) # Extract a tile out of the image. 
img_tile = img_rolled[x:x+tile_size, y:y+tile_size] loss = calc_loss(img_tile, self.model) # Update the image gradients for this tile. gradients = gradients + tape.gradient(loss, img_rolled) # Undo the random shift applied to the image and its gradients. gradients = tf.roll(gradients, shift=-shift, axis=[0,1]) # Normalize the gradients. gradients /= tf.math.reduce_std(gradients) + 1e-8 return gradients get_tiled_gradients = TiledGradients(dream_model) ###Output _____no_output_____ ###Markdown これを合わせると、スケーラブルなオクターブ対応の DeepDream 実装が得られます。 ###Code def run_deep_dream_with_octaves(img, steps_per_octave=100, step_size=0.01, octaves=range(-2,3), octave_scale=1.3): base_shape = tf.shape(img) img = tf.keras.preprocessing.image.img_to_array(img) img = tf.keras.applications.inception_v3.preprocess_input(img) initial_shape = img.shape[:-1] img = tf.image.resize(img, initial_shape) for octave in octaves: # Scale the image based on the octave new_size = tf.cast(tf.convert_to_tensor(base_shape[:-1]), tf.float32)*(octave_scale**octave) img = tf.image.resize(img, tf.cast(new_size, tf.int32)) for step in range(steps_per_octave): gradients = get_tiled_gradients(img) img = img + gradients*step_size img = tf.clip_by_value(img, -1, 1) if step % 10 == 0: display.clear_output(wait=True) show(deprocess(img)) print ("Octave {}, Step {}".format(octave, step)) result = deprocess(img) return result img = run_deep_dream_with_octaves(img=original_img, step_size=0.01) display.clear_output(wait=True) img = tf.image.resize(img, base_shape) img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8) show(img) ###Output _____no_output_____
パターンが同じ粒度で発生しているように見える。上記のすべての問題を解決するには、1 つのアプローチとして、異なるスケールで勾配上昇法を適用することが挙げられます。こうすれば、より小さなスケールで生成されたパターンをより高いスケールのパターンに統合して、追加の詳細で満たすことができます。これを行うには、上述の勾配上昇法を実行してから、画像のサイズを増加し(これをオクターブと呼びます)、このプロセスを複数のオクターブで繰り返します。 ###Code import time start = time.time() OCTAVE_SCALE = 1.30 img = tf.constant(np.array(original_img)) base_shape = tf.shape(img)[:-1] float_base_shape = tf.cast(base_shape, tf.float32) for n in range(-2, 3): new_shape = tf.cast(float_base_shape*(OCTAVE_SCALE**n), tf.int32) img = tf.image.resize(img, new_shape).numpy() img = run_deep_dream_simple(img=img, steps=50, step_size=0.01) display.clear_output(wait=True) img = tf.image.resize(img, base_shape) img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8) show(img) end = time.time() end-start ###Output _____no_output_____ ###Markdown オプション: タイルでスケールアップする画像サイズが大きくなるにつれ、勾配計算の実行に必要な時間とメモリ量も高まるということに注意する必要があります。上記のオクターブ実装は、非常に大きな画像や多数のオクターブでは機能しません。この問題を回避するには、画像をタイルに分割して、各タイルに対して勾配を計算することができます。それぞれのタイル計算を行う前に画像にランダムシフトを適用すると、タイルの継ぎ目が現れなくなります。ランダムシフトの実装から始めましょう。 ###Code def random_roll(img, maxroll): # Randomly shift the image to avoid tiled boundaries. shift = tf.random.uniform(shape=[2], minval=-maxroll, maxval=maxroll, dtype=tf.int32) img_rolled = tf.roll(img, shift=shift, axis=[0,1]) return shift, img_rolled shift, img_rolled = random_roll(np.array(original_img), 512) show(img_rolled) ###Output _____no_output_____ ###Markdown 以下は、前に定義した `deepdream` 関数のタイルバージョンです。 ###Code class TiledGradients(tf.Module): def __init__(self, model): self.model = model @tf.function( input_signature=( tf.TensorSpec(shape=[None,None,3], dtype=tf.float32), tf.TensorSpec(shape=[], dtype=tf.int32),) ) def __call__(self, img, tile_size=512): shift, img_rolled = random_roll(img, tile_size) # Initialize the image gradients to zero. gradients = tf.zeros_like(img_rolled) # Skip the last tile, unless there's only one tile. xs = tf.range(0, img_rolled.shape[0], tile_size)[:-1] if not tf.cast(len(xs), bool): xs = tf.constant([0]) ys = tf.range(0, img_rolled.shape[1], tile_size)[:-1] if not tf.cast(len(ys), bool): ys = tf.constant([0]) for x in xs: for y in ys: # Calculate the gradients for this tile. with tf.GradientTape() as tape: # This needs gradients relative to `img_rolled`. # `GradientTape` only watches `tf.Variable`s by default. tape.watch(img_rolled) # Extract a tile out of the image. img_tile = img_rolled[x:x+tile_size, y:y+tile_size] loss = calc_loss(img_tile, self.model) # Update the image gradients for this tile. gradients = gradients + tape.gradient(loss, img_rolled) # Undo the random shift applied to the image and its gradients. gradients = tf.roll(gradients, shift=-shift, axis=[0,1]) # Normalize the gradients. 
gradients /= tf.math.reduce_std(gradients) + 1e-8 return gradients get_tiled_gradients = TiledGradients(dream_model) ###Output _____no_output_____ ###Markdown これを合わせると、スケーラブルなオクターブ対応の DeepDream 実装が得られます。 ###Code def run_deep_dream_with_octaves(img, steps_per_octave=100, step_size=0.01, octaves=range(-2,3), octave_scale=1.3): base_shape = tf.shape(img) img = tf.keras.preprocessing.image.img_to_array(img) img = tf.keras.applications.inception_v3.preprocess_input(img) initial_shape = img.shape[:-1] img = tf.image.resize(img, initial_shape) for octave in octaves: # Scale the image based on the octave new_size = tf.cast(tf.convert_to_tensor(base_shape[:-1]), tf.float32)*(octave_scale**octave) img = tf.image.resize(img, tf.cast(new_size, tf.int32)) for step in range(steps_per_octave): gradients = get_tiled_gradients(img) img = img + gradients*step_size img = tf.clip_by_value(img, -1, 1) if step % 10 == 0: display.clear_output(wait=True) show(deprocess(img)) print ("Octave {}, Step {}".format(octave, step)) result = deprocess(img) return result img = run_deep_dream_with_octaves(img=original_img, step_size=0.01) display.clear_output(wait=True) img = tf.image.resize(img, base_shape) img = tf.image.convert_image_dtype(img/255.0, dtype=tf.uint8) show(img) ###Output _____no_output_____
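###Markdown The list of first-attempt problems earlier in this notebook notes that the noisy output can be reduced with a `tf.image.total_variation` loss, but the tutorial itself never implements it. Below is a minimal, optional sketch of that idea (not part of the original tutorial): a variant of `calc_loss` that subtracts a total-variation penalty so that maximizing the loss also discourages high-frequency noise. The function name `calc_loss_with_tv` and the weight `tv_weight=1e-4` are illustrative choices, not values from the tutorial. ###Code # Optional sketch, not part of the original tutorial.
# Adds a total-variation penalty to the DeepDream loss to reduce noise.
# `tv_weight` is an illustrative value and may need tuning.
def calc_loss_with_tv(img, model, tv_weight=1e-4):
  img_batch = tf.expand_dims(img, axis=0)
  layer_activations = model(img_batch)
  if len(layer_activations) == 1:
    layer_activations = [layer_activations]

  losses = [tf.math.reduce_mean(act) for act in layer_activations]

  # Subtract the TV term because the gradient-ascent loop *maximizes* this loss.
  tv = tf.reduce_sum(tf.image.total_variation(img_batch))
  return tf.reduce_sum(losses) - tv_weight * tv ###Output _____no_output_____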
examples/reference/indicators/LoadingSpinner.ipynb
###Markdown The ``LoadingSpinner`` is a boolean indicator providing a visual representation of the loading status. If the `value` is set to `True` the spinner will rotate while setting it to `False` will disable the rotating segment. Parameters:For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).* **``bgcolor``** (str): The color of spinner background segment, either 'light' or 'dark'* **``color``** (str): The color of the spinning segment, one of 'primary', 'secondary', 'success', 'info', 'warn', 'danger', 'light', 'dark'* **``value``** (boolean): Whether the indicator is spinning or not.___ The `LoadingSpinner` can be instantiated in a spinning or idle state: ###Code idle = LoadingSpinner(value=False, width=100, height=100) loading = LoadingSpinner(value=True, width=100, height=100) pn.Row(idle, loading) ###Output _____no_output_____ ###Markdown The `LoadingSpinner` indicator also supports a range of spinner colors and backgrounds: ###Code grid = pn.GridBox('', 'light', 'dark', ncols=3) for color in LoadingSpinner.param.color.objects: dark = LoadingSpinner(width=50, height=50, value=True, color=color, bgcolor='dark') light = LoadingSpinner(width=50, height=50, value=True, color=color, bgcolor='light') grid.extend((color, light, dark)) grid ###Output _____no_output_____ ###Markdown The ``LoadingSpinner`` is a boolean indicator providing a visual representation of the loading status. If the `value` is set to `True` the spinner will rotate while setting it to `False` will disable the rotating segment. 
Parameters:For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).* **``bgcolor``** (str): The color of spinner background segment, either 'light' or 'dark'* **``color``** (str): The color of the spinning segment, one of 'primary', 'secondary', 'success', 'info', 'warn', 'danger', 'light', 'dark'* **``value``** (boolean): Whether the indicator is spinning or not.___ The `LoadingSpinner` can be instantiated in a spinning or idle state: ###Code idle = pn.indicators.LoadingSpinner(value=False, width=100, height=100) loading = pn.indicators.LoadingSpinner(value=True, width=100, height=100) pn.Row(idle, loading) ###Output _____no_output_____ ###Markdown The `LoadingSpinner` indicator also supports a range of spinner colors and backgrounds: ###Code grid = pn.GridBox('', 'light', 'dark', ncols=3) for color in pn.indicators.LoadingSpinner.param.color.objects: dark = pn.indicators.LoadingSpinner(width=50, height=50, value=True, color=color, bgcolor='dark') light = pn.indicators.LoadingSpinner(width=50, height=50, value=True, color=color, bgcolor='light') grid.extend((color, light, dark)) grid ###Output _____no_output_____
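###Markdown As a small usage sketch (not part of the reference page above), the `value` parameter can be toggled from a callback to show activity around a long-running task. This assumes Panel is imported as `pn`, as in the examples above; the button and the two-second `time.sleep` are stand-ins for real work. ###Code import time
import panel as pn

# Sketch: spin the indicator only while the (simulated) task is running.
spinner = pn.indicators.LoadingSpinner(value=False, width=50, height=50, color='primary')

def run_task(event=None):
    spinner.value = True    # start spinning while the work runs
    time.sleep(2)           # placeholder for a real computation
    spinner.value = False   # stop spinning when done

button = pn.widgets.Button(name='Run task')
button.on_click(run_task)
pn.Row(button, spinner) ###Output _____no_output_____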
examples/compare_perks.ipynb
###Markdown Perk Comparison AnalysisThis looks at a pair of decks after a common perk choice and breaksdown the changes in probabilities and attacks. ###Code import pandas as pd from gloomhaven.deck import GloomhavenDeck from gloomhaven.render import render_tables, format_table_to_hmtl deck1 = GloomhavenDeck() deck2 = deck1.copy() ###Output _____no_output_____ ###Markdown Apply Perks ###Code # remove 2 "-1"s assert deck1.remove_card("-1") assert deck1.remove_card("-1") # remove "-2" and add "0" assert deck2.remove_card("-2") assert deck2.add_card("0") from collections import Counter import dictdiffer diffs = dictdiffer.diff( Counter(deck1.card_list), Counter(deck2.card_list) ) print("="*3, "changes", "="*3) for diff in diffs: if diff[0] == "change": print(f"{diff[1]:>2}: {diff[2][0]:>2} => {diff[2][1]:>2}") elif diff[0] == "remove": for d in diff[2]: print(f"{d[0]:>2}: {d[1]:>2} => {0:>2}") elif diff[0] == "add": for d in diff[2]: print(f"{d[0]:>2}: {0:>2} => {d[1]:>2}") attack_data = {} base_attacks = [1, 2, 3, 4, 5] samp_size = 100_000 def attacks(samp_size, attack): for _ in range(samp_size): yield attack attack_data1, attack_data2 = {}, {} for val in base_attacks: moves = deck1.simulate(attacks(samp_size, val)) attack_data1[f"base_attack_{val}"] = [dmg for dmg, _ in moves] moves = deck2.simulate(attacks(samp_size, val)) attack_data2[f"base_attack_{val}"] = [dmg for dmg, _ in moves] attack_data1 = pd.DataFrame(attack_data1) attack_data2 = pd.DataFrame(attack_data2) def get_counts(srs: pd.Series): srs = srs.value_counts() for attack_val in range(srs.index.max()+1): if attack_val not in srs.index: srs[attack_val] = 0 srs = srs / samp_size srs.sort_index() return srs pdf_1 = attack_data1.apply(get_counts, axis=0) pdf_2 = attack_data2.apply(get_counts, axis=0) def mode(x): return x.value_counts().index[0] summ_1 = attack_data1.agg(["min", "median", "max", "mean", "std", mode]) summ_2 = attack_data2.agg(["min", "median", "max", "mean", "std", mode]) def _color_red_or_green(val): if val < 0: return f"background-color: #f07067" elif val > 0: return f"background-color: #79ed85" diff_table = (summ_1.fillna(0).round(2) - summ_2.fillna(0).round(3)) with open("../assets/perk_comparison.html", "w") as f: f.write(render_tables([ ( '"Remove -1x2" - "Remove -2, Add 0"', format_table_to_hmtl(diff_table, _color_red_or_green) ), ( '"Remove -1x2" Summary Stats', format_table_to_hmtl(summ_1) ), ( '"Remove -2, Add 0" Summary Stats', format_table_to_hmtl(summ_2) ) ])) ###Output _____no_output_____
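###Markdown As an optional extra check (not part of the original comparison), the simulated means above carry Monte Carlo noise, so it can help to attach a rough normal-approximation 95% confidence interval to each mean before reading too much into small differences in the diff table. This sketch reuses the `attack_data1` and `attack_data2` frames built above; the helper name `mean_ci_table` is ours. ###Code import numpy as np
import pandas as pd

# Rough sketch: normal-approximation 95% CI for each simulated mean damage.
def mean_ci_table(df, z=1.96):
    means = df.mean()
    half = z * df.std(ddof=1) / np.sqrt(len(df))
    return pd.concat([means - half, means, means + half], axis=1, keys=['lo', 'mean', 'hi'])

print(mean_ci_table(attack_data1).round(3))
print(mean_ci_table(attack_data2).round(3)) ###Output _____no_output_____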
assets/notebooks/analysis_workflow_fasttext.ipynb
###Markdown Data PipelineInitial data analysis pipeline including a naive sentiment analysis using TextBlob. ###Code import re import spacy import pandas as pd import numpy as np from pathlib import Path from string import punctuation import matplotlib.pyplot as plt %matplotlib inline import calmap # for making GitHub-style calendar plots of time-series # Plot using Pandas datatime objects from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() rc_fonts = {'figure.figsize': (15, 8), 'axes.labelsize': 18, 'xtick.labelsize': 18, 'ytick.labelsize': 18, 'legend.fontsize': 20, } plt.rcParams.update(rc_fonts) plt.style.use('ggplot') # Scikit-learn for TF-IDF and similarity detection from sklearn.metrics.pairwise import cosine_similarity from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.manifold import MDS ###Output _____no_output_____ ###Markdown Use ```spaCy``` for tokenization and sentence segmentation ###Code import spacy from spacy import displacy # Load spaCy language model (blank model to which we add pipeline components) sentencizer = spacy.blank('en') sentencizer.add_pipe(sentencizer.create_pipe('sentencizer')) ###Output _____no_output_____ ###Markdown Specify named entity of interest ###Code name = "United Airlines" ###Output _____no_output_____ ###Markdown Write data: BooleanSpecify if we want to write the output data to csv or not. ###Code write_ = True datafile = 'all_the_news_v2.csv' datapath = Path('../') / 'data' / datafile colnames = ['title', 'author', 'date', 'content', 'year', 'month', 'publication', 'length'] news = pd.read_csv(datapath, usecols=colnames, parse_dates=['date']) news['author'] = news['author'].str.strip() news.head() news = news.dropna(subset=['date', 'title']) news.shape[0] news['date'].describe() ###Output _____no_output_____ ###Markdown Filter articles based on name matchIn this section we only select those news articles that contain part of or all of the name we input as ```name```. ###Code def check_name(content, name): flag = False if name in content: flag = True return flag def filter_df(df): df['match'] = df['content'].apply(lambda x: check_name(x, name)) df_relevant = df.loc[df['match'].eq(True)] return df_relevant.drop(['match'], axis=1) news_relevant = filter_df(news) print(news_relevant.shape[0]) news_relevant.head() ###Output 255 ###Markdown Perform sentence segmentationStore the sentences in each news articles as a list of sentences, from which we can easily extract per-sentence sentiment. ###Code def get_relevant(text, name): doc = sentencizer(text) relevant = [] for sent in doc.sents: for n in name.split(): if n in sent.text: clean = sent.text.replace("\n", " ").replace("\xa0", " ") # Strip bad characters at the start of sentences clean = clean.strip("[\'").strip("\']").strip('\"').strip("\'\"") clean = clean.strip(",\'").strip("\',").strip('\"').strip("\'\"").strip() relevant.append(clean) # Remove duplicates relevant = list(dict.fromkeys(relevant)) return relevant news_relevant['relevant'] = news_relevant['content'].apply(lambda x: get_relevant(x, name)) for i in news_relevant['relevant'][:5]: print(i, '\n--') ###Output ["The Atlantic)', '-- The United Airlines leggings controversy that took social media by storm. 
("] -- ["American Airlines has agreed to pay $200 million for a stake in China Southern Airlines, one of the country\\'s three major state-owned carriers, and expand commercial cooperation.',", "China Southern Airlines said in an announcement Tuesday through the Hong Kong stock exchange that the purchase will represent 2.76% of its shares.',", "Chinese spending on air travel rose 10% in 2015, compared with 1.7% in the United States, according to the International Air Travel Assn.',", 'Two years ago, Delta Air Lines paid $450 million for 3.55% of China Eastern Airlines.', "The third major U.S. carrier, United Airlines, has a partnership with Air China, the third major Chinese government-owned airline.',", "Regulators in both China and the United States are reluctant to allow large foreign ownership stakes or management control of their airlines.',", 'The partnership with American Airlines "is expected to provide continuous impetus for the company\\\'s long-term growth," China Southern\\\'s announcement said.\',', 'June 16, 2017) (Sign up for our free video newsletter here "" target="_blank">"">Studios pushing earlier movie rentals amid growing pressures\', \'Jim Gianopulos is tasked with turning around struggling Paramount Pictures\', \'Bond king Bill Gross agrees to settlement in lawsuit against Pimco, ending nasty dispute\', \'UPDATES:\', \'9:40 p.m.: This article was updated with background information on China Southern Airlines.\','] -- ["United Airlines has taken a heap of criticism from celebrities and other air travelers over its decision last week to bar two teenage girls from boarding a flight from Denver because they were wearing leggings.',", "Florida-based Spirit Airlines on Tuesday posted an ad declaring “Let them wear leggings,” along with a one-day offer of 75% off on flights to specific destinations, on Tuesdays and Wednesdays only.',"] -- ['Delta, American and United airlines claim the right to eject fliers for smelling bad.', "Airlines grant themselves all sorts of power over who can be removed from a flight, embedded in the fine print that passengers agree to when they click “confirm” to purchase an airline ticket.',", "But in light of the incident this week in which a United Airlines passenger was dragged off a sold-out flight, to the shock of his fellow passengers and millions watching it on video, passenger advocates say airlines’ seating policies may now get closer scrutiny.',", "Here's United Airlines' latest PR nightmare. (", 'April 11, 2017)", "Here\'s United Airlines\' latest PR nightmare. (', "On Sunday, United booted Dr. David Dao from a Chicago plane scheduled to fly to Louisville, Ky. 
A video recording of the incident that went viral showed law enforcement officials confronting Dao and then dragging him down the aisle as passengers looked on in disbelief.',", 'United said that it was seeking to bump four passengers from the flight in order to accommodate a group of airline employees who needed to travel to Louisville.', "Though United officials have described Dao as disruptive and belligerent, the company issued an apology Tuesday.', '“", "No one should ever be mistreated this way,” United Chief Executive Oscar Munoz said in a statement.',", "The airline and banking industries may seem to be about as different as chalk and cheese, but United Airlines and Wells Fargo have been shown to share a common bond: toxic corporate cultures that can be blamed on the men at the top, their chief executives.',", "Wells Fargo’s John Stumpf is gone, having...', 'The airline and banking industries may seem to be about as different as chalk and cheese, but United Airlines and Wells Fargo have been shown to share a common bond: toxic corporate cultures that can be blamed on the men at the top, their chief executives.',", "But United has denied reports that the flight was overbooked, saying that it was sold out.',", "Lawmakers from both parties, including leaders of the Senate Commerce, Science and Transportation Committee, have sent United a letter demanding that the airline explain its actions and requested an investigation by the Transportation Department.',", "The United incident comes at a time when the number of bumped passengers is declining across the industry, according to a recent report from the Bureau of Transportation Statistics.',", 'Airlines posted an involuntary bumping rate of 62 per 1 million passengers in 2016, down from 73 per 1 million fliers in 2015.', 'United’s latest run of bad publicity began last month when it stopped two teenage girls from boarding a flight because they were wearing leggings.', "But the company later defended the decision, saying that because the girls were flying as “pass travelers” — meaning they were traveling using an employee pass — they were obligated to adhere to United’s dress code.',", "Late in March, United Airlines took heat for barring two teenage girls from boarding a plane because they were wearing leggings, which violated a dress code policy mandated for family and friends of employees.\\xa0On Sunday, April 9, a viral video showed a man being dragged out of his seat and off...', 'Late in March, United Airlines took heat for barring two teenage girls from boarding a plane because they were wearing leggings, which violated a dress code policy mandated for family and friends of employees.\\xa0On Sunday, April 9, a viral video showed a man being dragged out of his seat and off...', 'Airlines often prioritize customers when it comes to involuntary bumping.", 'United states in its carriage contract that minors and people with disabilities are the last to be denied boarding.', "[email protected]', '@DavidNgLAT', 'ALSO', 'United\\'s CEO turns contrite as fallout spreads from passenger mistreatment', 'David Dao, United passenger who was dragged from plane, says he\\'s still in the hospital', 'United passenger threatened with handcuffs to make room for \\'higher-priority\\' traveler"] -- ["As always, competition drives pricing and competition is alive and well as evidenced by the fact that inflation-adjusted airfares have fallen 26% since 2000,” said Vaughn Jennings, a spokesman for Airlines for America, the trade group for the nation’s 
carriers.',", 'The positive airfare news comes as United Airlines tries to overcome widespread criticism over an incident two weeks ago when a passenger was dragged from his seat.', 'American Airlines has also come under scrutiny because of a more recent incident, also caught on video, showing a male flight attendant and a male passenger nearly coming to blows.'] -- ###Markdown Sentiment scoring using FastText ###Code import fastText # Load trained fastText model model_path = './fasttext_models/pretrained_yelp_review_full.ftz' ###Output _____no_output_____ ###Markdown Preprocess text and tokenize as per FastText requirements ###Code def fasttext_tokenize(string): string = string.lower() string = re.sub(r"([.!?,'/()])", r" \1 ", string) return string # reviews = [ # "This restaurant literally changed my life. This is the best food I've ever eaten!", # "I hate this place so much. They were mean to me.", # "I don't know. It was ok, I guess. Not really sure what to say.", # ] # preprocessed_reviews = list(map(strip_formatting, reviews)) # # Load classifier # classifier = fastText.load_model(model_path) # # Get fastText to classify each review with the model # labels, probabilities = classifier.predict(preprocessed_reviews, 1) # stars = [int(l[0][-1]) - 3 for l in labels] # stars classifier = fastText.load_model(model_path) def get_score_fasttext(text_list): # Calculate polarity for each sentence preprocessed = list(map(fasttext_tokenize, text_list)) labels, probabilities = classifier.predict(preprocessed, 1) sentiment_list = [(int(l[0][-1]) - 3)/2 for l in labels if l] score = np.mean(sentiment_list) deviation = np.std(sentiment_list) return score, deviation news_relevant['score'], news_relevant['deviation'] = zip(*news_relevant['relevant'].map(get_score_fasttext)) news_relevant.head(5) ###Output _____no_output_____ ###Markdown Lemmatize relevant sentences for comparisonThis is to remove duplicates. 
###Code add_removed_words = {n for n in name.split()} # Include specific words to be removed stopwords = sentencizer.Defaults.stop_words stopwords = stopwords.union(add_removed_words) # Tokenize and lemmatize text def lemmatize(text): doc = sentencizer(text) tokens = [str(tok.lemma_).lower() for tok in doc if tok.text not in stopwords \ and tok.text not in punctuation] return tokens news_relevant['lemmas'] = news_relevant['relevant'].str.join(' ').apply(lemmatize).str.join(' ') news_relevant[['relevant', 'lemmas']].head() ###Output _____no_output_____ ###Markdown Drop duplicates ###Code news_relevant = news_relevant.drop_duplicates(subset=['lemmas']) news_relevant.shape[0] ###Output _____no_output_____ ###Markdown Positive sentiment group ###Code pos = news_relevant[news_relevant['score'] > 0.0].sort_values(by=['score'], ascending=False).reset_index(drop=True) print("Found {} overall positive articles for {}".format(pos.shape[0], name)) pos.head(3) ###Output Found 40 overall positive articles for United Airlines ###Markdown Write positive results ###Code if write_: out_filename = '_'.join(name.split()).lower() + '_pos.csv' out_path = Path('./') / "results/fasttext" / out_filename pos.sort_values(by='publication')[['publication', 'title', 'date', 'relevant', 'score', 'deviation']] \ .to_csv(out_path, index=False, header=True) ###Output _____no_output_____ ###Markdown Negative sentiment group ###Code neg = news_relevant[news_relevant['score'] < 0.0].sort_values(by=['score']).reset_index(drop=True) print("Found {} overall negative articles for {}".format(neg.shape[0], name)) neg.head(3) ###Output Found 190 overall negative articles for United Airlines ###Markdown Write negative results ###Code if write_: out_filename = '_'.join(name.split()).lower() + '_neg.csv' out_path = Path('./') / "results/fasttext" / out_filename neg.sort_values(by='publication')[['publication', 'title', 'date', 'relevant', 'score', 'deviation']] \ .to_csv(out_path, index=False, header=True) mixed = news_relevant[news_relevant['score'] == 0.0].reset_index(drop=True) print("Found {} overall mixed articles for {}".format(mixed.shape[0], name)) mixed.head(3) ###Output Found 15 overall mixed articles for United Airlines ###Markdown Highlight relevant named entities using ```spaCy```Optional step to observe the key named entities in the positive/negative or mixed sentiment articles. 
###Code from IPython.display import Markdown, display options = {'ents': ['PERSON', 'ORG', 'GPE', 'EVENT'], 'colors': {'PERSON': '#9fafe5', 'ORG': '#d59b9b', 'GPE':'#81cba6'}} def printmd(string): display(Markdown(string)) def display_entities(nlp, df, max_entries=5): # Set relevant named entities that we want to extract for idx, sent in enumerate(df['relevant'].str.join(' ')[:max_entries]): doc = nlp(sent) printmd('**{}**'.format(df['title'][idx])) displacy.render(doc, style='ent', jupyter=True, options=options) print('\n') def vis(pos, neg, mixed, spacy_lang='en_core_web_md'): nlp = spacy.load(spacy_lang) # Visualize positive and negativ groups using markdown printmd('<font color=green>**Positive**</font>') display_entities(nlp, pos) printmd('<font color=red>**Negative**</font>') display_entities(nlp, neg) printmd('<font color=yellow>**Mixed**</font>') display_entities(nlp, mixed) # vis(pos, neg, mixed, spacy_lang='en_core_web_md') ###Output _____no_output_____ ###Markdown Visualization Plot sentiment score and magnitude versus time of publishing of the articleIn this section, sentiment "score" is the median of all polarity values (positive or negative) obtained per-sentence of the article from TextBlob. Sentiment "magnitude" is the standard deviation of sentiment among the per-sentence polarity values. ###Code news_avg_score = news_relevant.groupby('date')['score'].mean() news_avg_dev = news_relevant.groupby('date')['deviation'].mean() ###Output _____no_output_____ ###Markdown Get article count per day ###Code news_count = news_relevant.groupby(['date']).count()['title'] ###Output _____no_output_____ ###Markdown Get peak polar article per day (min negative or max positive score) ###Code news_relevant['abs'] = news_relevant['score'].abs() news_relevant[['date', 'score', 'abs']].head(3) news_relevant[(news_relevant['date'] > '2016-08-14') & (news_relevant['date'] < '2016-08-17')].head(10) news_peak_polar = news_relevant.groupby('date').max()[['title', 'publication', 'relevant']] # Extract just the first 3 relevant sentences from the article and convert to single string news_peak_polar['relevant'] = news_peak_polar['relevant'].apply(lambda x: x[:3]).str.join(' ') print(news_peak_polar.shape[0]) ###Output 133 ###Markdown Combine scores, magnitudes and article counts per day ###Code scores = pd.concat((news_avg_score, news_avg_dev, news_count), axis=1).sort_values(by=['date']) scores.columns = ['mean_score', 'mean_dev', 'count'] scores.head() ###Output _____no_output_____ ###Markdown Concatenate scores/counts DataFrame with most polar news content for that day ###Code data = pd.concat((news_peak_polar, scores), axis=1).sort_index() data.head() ###Output _____no_output_____ ###Markdown Reindex data to show daily scoresSince we have really sparse data (news articles about the target are not written every day, we reindex the time series and fill missing values with zeros. 
###Code idx = pd.date_range('1/1/2014', '7/5/2017') daily = data.reindex(idx, fill_value=0.0) fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(15, 8)) ax1.fill_between(daily.index, daily['mean_score'], step='mid', color='black', alpha=0.6, linewidth=4); ax1.set_ylabel('Mean Score', fontdict={'fontsize': 18, 'fontweight': 'bold'}); ax1.set_title('Sentiment scores and deviations with time for "{}"'.format(name), fontdict={'fontsize': 18, 'fontweight': 'bold'}); ax2.fill_between(daily.index, daily['mean_dev'], step='mid', color='black', alpha=0.6, linewidth=4); ax2.set_ylabel('Mean Deviation', fontdict={'fontsize': 18, 'fontweight': 'bold'}); ax2.set_xlabel('Date', fontdict={'fontsize': 18, 'fontweight': 'bold'}); # Initiate a second y-axis with a shared x-axis for the article counts ax2_2 = ax2.twinx(); ax2_2.plot(daily.index, daily['count'], 'r--', alpha=0.6, linewidth=2); ax2_2.grid(False); ax2_2.set_ylabel('Article Count', fontdict={'fontsize': 18, 'fontweight': 'bold'}); plt.tight_layout() # plt.savefig("{}_scores".format('_'.join(name.split()).lower())) ###Output _____no_output_____ ###Markdown Make calendar plot to show periods of activity ###Code fig, axes = calmap.calendarplot(daily['mean_score'], vmin = -1.0, vmax=1.0, daylabels='MTWTFSS', dayticks=[0, 2, 4, 6], fig_kws=dict(figsize=(12.5, 9)), linewidth=1, fillcolor='lightgrey', cmap='coolwarm_r', ); fig.suptitle("Calendar map of aggregated sentiment for {}".format(name), fontsize=18); fig.tight_layout(rect=[0, 0.03, 1, 0.95]) if write_: out_filename = '_'.join(name.split()).lower() plt.savefig('calmap_{}.png'.format(out_filename)) ###Output /home/pprao/.local/lib/python3.6/site-packages/matplotlib/font_manager.py:1241: UserWarning: findfont: Font family ['Arial'] not found. Falling back to DejaVu Sans. (prop.get_family(), self.defaultFamily[fontext])) ###Markdown Get counts of positive and negative mentions based on Publication ###Code grouped = news_relevant.groupby('publication').apply(lambda x: x['score'] >= 0.0) grouped = grouped.groupby('publication').value_counts().to_frame() grouped = grouped.unstack().fillna(0.0) grouped.columns = ['Negative', 'Positive'] grouped = grouped.sort_values(by='Negative') grouped ###Output _____no_output_____ ###Markdown Plot article breakdown ###Code grouped.plot(kind='barh', figsize=(12, 8)); plt.title('Count of number of articles with Positive/Negative Sentiment for {}'.format(name)); plt.ylabel(''); # plt.savefig("{}_breakdown".format('_'.join(name.split()).lower())) ###Output _____no_output_____ ###Markdown Output results to CSV ###Code if write_: out_filename = '_'.join(name.split()).lower() + '_breakdown.csv' out_path = Path('./') / "results/fasttext" / out_filename grouped.to_csv(out_path, header=True) if write_: data_filename = '_'.join(name.split()).lower() + '_data.csv' data_path = Path('./') / "results/fasttext" / data_filename daily[~daily['relevant'].eq(0)].to_csv(data_path, header=True) ###Output _____no_output_____ ###Markdown Visualize Cosine Similarity DistancesTo see how similar or different each article is based on publication, we can compute the cosine distances between articles to generate a "distance matrix" and then visualize these distances in two-dimensional space. Calculate TF-IDF for document similarityWe first define the term frequency-inverse document frequency to vectorize the text for each article into parameters, and generate a ```tf-idf``` matrix. 
Once we compute the ```tf-idf``` matrix, we can find a "distance matrix" that stores how similar or how different two documents are. ###Code # Define vectorizer parameters tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000, min_df=0.2) # Get TF-IDF matrix tfidf_matrix = tfidf_vectorizer.fit_transform(news_relevant['lemmas'] ) #fit the vectorizer to synopses print(tfidf_matrix.shape) # Display some key terms terms = tfidf_vectorizer.get_feature_names() print(terms) # Get cosine distance matrix dist = 1 - cosine_similarity(tfidf_matrix) ###Output _____no_output_____ ###Markdown Multidimensional Scaling (MDS)The computed distances are in multi-dimensional in nature. To visualize the similarity, we "embed" the cosine distances (from the distance matrix) to a two-dimensional space, which we can then plot to see how the articles compare with each other in terms of their content. ###Code embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=37) dist_transformed = embedding.fit_transform(dist) print(dist_transformed.shape) xs, ys = dist_transformed[:, 0], dist_transformed[:, 1] ###Output (245, 2) ###Markdown Generate an MDS DataFrame for plottingWe combine the x-y distances from the MDS calculation with the original publication labels to see how different the articles are from each other, colored by publication. ###Code compare = pd.DataFrame(dict(label=news_relevant['publication'], x=xs, y=ys)) compare.head() L = news_relevant['publication'].nunique() print("Found {} unique categories for publications".format(L)) groups = compare.groupby('label').agg({'label': 'count', 'x': 'mean', 'y': 'mean'}) groups.columns = ['count', 'x', 'y'] groups = groups.sort_values(by='count') groups ###Output _____no_output_____ ###Markdown Visualize similarities as embedded cosine distances ###Code fig, ax = plt.subplots(figsize=(12, 9)) colors = [i for i in range(len(groups.index))] ax.scatter(groups['x'], groups['y'], c=colors, s=groups['count']*100, linewidths=1.5, alpha=0.5, edgecolors='k', cmap=plt.cm.gist_rainbow, ); for i, txt in enumerate(groups.index): ax.annotate(txt, (groups['x'][i], groups['y'][i]), fontsize=18, alpha=0.7); ax.set_xticklabels(['']); ax.set_yticklabels(['']); plt.tight_layout() ###Output _____no_output_____
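###Markdown As a small follow-up sketch (not part of the original workflow), the article-level cosine distances in `dist` can also be averaged per pair of publications, giving a rough publication-by-publication dissimilarity table to complement the MDS plot above. This assumes `dist` and `news_relevant` are still in scope from the cells above. ###Code # Rough sketch: average pairwise cosine distance between publications.
labels = news_relevant['publication'].values
pubs = news_relevant['publication'].unique()

avg_dist = pd.DataFrame(index=pubs, columns=pubs, dtype=float)
for a in pubs:
    for b in pubs:
        block = dist[np.ix_(labels == a, labels == b)]
        avg_dist.loc[a, b] = block.mean()

# Note: the diagonal entries include each article's zero distance to itself.
avg_dist.round(3) ###Output _____no_output_____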
cmse802-s20/0223-NN--pre-class-assignment.ipynb
###Markdown In order to successfully complete this assignment you must do the required reading, watch the provided videos and complete all instructions. The embedded Google form must be entirely filled out and submitted on or before **11:59pm on Sunday February 23**. Students must come to class the next day prepared to discuss the material covered in this assignment. answer Pre-Class Assignment: Artificial Neural NetworksThis entire Artificial Neural Networks module is from Neural Networks Demystified by @stephencwelch. We have streamlined the content to better fit the format of the class. However, if you have questions or are just curious I highly recommend downloading everything from the following git repository. It is a great reference to have: git clone https://github.com/stephencwelch/Neural-Networks-Demystified Goals for today's pre-class assignment 1. [The architecture of Artificial Neural Networks](The_architecture_of_Artificial_Neural_Networks)2. [Data flow: forward propagation](forward_propagation)3. [Exploring A Neural Network](Exploring_A_Neural_Network)4. [Assignment wrap-up](Assignment_wrap-up) ----- 1. The architecture of Artificial Neural Networks Watch the following video: ###Code from IPython.display import YouTubeVideo YouTubeVideo('bxe2T-V8XRs',width=640,height=360) ###Output _____no_output_____ ###Markdown We will use the data from the video above:$$X = \left[\begin{matrix} 3 & 5 \\ 5 & 1 \\ 10 & 2 \end{matrix}\right] \hspace{1cm} , \hspace{1cm}y = \left[ \begin{matrix} 75 \\ 82 \\ 93 \end{matrix}\right] $$ Step 1: Inicialize your inputs&9989; **DO THIS:** Create two numpy arrays to store the values of the variables $X$ and $y$, as well as their normalized counterparts $X_{norm}$ and $y_{norm}$. Call these python variables ```X```, ```X_norm```, ```y```, and ```y_norm``` ###Code # put your code here ###Output _____no_output_____ ###Markdown ----- 2. Data flow: forward propagation Data in a neural network flows via a process called **forward propagation**. Watch the following video: ###Code from IPython.display import YouTubeVideo YouTubeVideo('UJwK6jAStmg',width=640,height=360, align='Center') ###Output _____no_output_____ ###Markdown &9989; **QUESTION:** How many input layers, hidden layers and output layers are there in the neural network shown in the video? Modify the following variables to have their correct value. ###Code # Put your answer here inputLayerSize = 0 outputLayerSize = 0 hiddenLayerSize = 0 ###Output _____no_output_____ ###Markdown Step 2: Initialize random weights&9989; **DO THIS:** Randomly Initialize two numpy arrays ```W1``` and ```W2```, of the right dimensions, to store the weights (zero-one) in the synapses between input layer --> hidden layer, and hidden layer --> output layer. ###Code # your code here: ###Output _____no_output_____ ###Markdown Step 3: Multipuly the normalized input matrix by $W^{(1)}$$$Z^{(2)} = X W^{(1)} $$ Here is the code using the numpy dot matrix. If you get an error you may have initilized the size of your variables incorrectly. Make sure the second dimention of ```X_norm``` matches the first dimention of ```W1```: ###Code Z2 = np.dot(X_norm, W1) Z2 ###Output _____no_output_____ ###Markdown &9989; **DO THIS:** Implement and test the sigmoid function $$a(z) = \frac{1}{1 + e^{-z}} $$ The implemented sigmoid function should take as input a numpy array and return a numpy array of the same dimension, with the function $f$ applied to each entry. 
###Code # your code here: def sigmoid(z): # apply sigmoid activation function return ###Output _____no_output_____ ###Markdown Test your sigmoid funciton using the following testing code: ###Code testInput = np.arange(-6,6,0.01) plt.plot(testInput, sigmoid(testInput), linewidth= 2) plt.grid(1) ###Output _____no_output_____ ###Markdown Step 4: Apply the sigmodal funciton to $Z^{(2)}$$$a^{(2)} = f({Z^{(2)}})$$ Here is the code to apply the sigmod function to $Z^{(2)}$ and display the results ###Code a2 = sigmoid(Z2) a2 ###Output _____no_output_____ ###Markdown Step 5: multiply $A^{(2)}$ by $W^{(2)}$ to get $Z^{(3)}$$$Z^{(3)} = A^{(2)} W^{(2)} $$ ###Code Z3 = np.dot(a2, W2) Z3 ###Output _____no_output_____ ###Markdown Step 6: Apply the sigmod function&9989; **DO THIS:** Apply the sigmod function again to $Z^{(3)}$ to produce $\hat{y}$$$\hat{y} = f({Z^{(3)}})$$ ###Code # your code here: yHat = 0 ###Output _____no_output_____ ###Markdown Final Comparison&9989; **DO THIS:** Now compare the estimation output ($\hat{y}$) to the actual output ```y_norm```. ###Code y_norm yHat ###Output _____no_output_____ ###Markdown Of course the results from forward propagation suck; no surprises here, the weights have not been properly chosen. That's what training a network does: the goal is to find a combination of weights so that the result of forward propagation fits the intended output data as best as possible. We will be covering this topic in class. ---- 3. Exploring A Neural NetworkPlease go to the following website : http://playground.tensorflow.org/There, you'll have the opportunity to play with an actual neural network (e.g., choosing its architecture and the type of activation function) for classification purpose. ---- 4. Assignment wrap-upPlease fill out the form that appears when you run the code below. **You must completely fill this out in order to receive credit for the assignment!**[Direct Link to Google Form](https://cmse.msu.edu/cmse802-pc-survey)If you have trouble with the embedded form, please make sure you log on with your MSU google account at [googleapps.msu.edu](https://googleapps.msu.edu) and then click on the direct link above. &9989; **Assignment-Specific QUESTION:** There is no Assignment specific question for this notebook. You can just say "none". Put your answer to the above question here &9989; **QUESTION:** Summarize what you did in this assignment. Put your answer to the above question here &9989; **QUESTION:** What questions do you have, if any, about any of the topics discussed in this assignment after working through the jupyter notebook? Put your answer to the above question here &9989; **QUESTION:** How well do you feel this assignment helped you to achieve a better understanding of the above mentioned topic(s)? Put your answer to the above question here &9989; **QUESTION:** What was the **most** challenging part of this assignment for you? Put your answer to the above question here &9989; **QUESTION:** What was the **least** challenging part of this assignment for you? Put your answer to the above question here &9989; **QUESTION:** What kind of additional questions or support, if any, do you feel you need to have a better understanding of the content in this assignment? Put your answer to the above question here &9989; **QUESTION:** Do you have any further questions or comments about this material, or anything else that's going on in class? Put your answer to the above question here &9989; **QUESTION:** Approximately how long did this pre-class assignment take? 
Put your answer to the above question here ###Code from IPython.display import HTML HTML( """ <iframe src="https://cmse.msu.edu/cmse802-pc-survey?embedded=true" width="100%" height="1200px" frameborder="0" marginheight="0" marginwidth="0"> Loading... </iframe> """ ) ###Output _____no_output_____
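###Markdown As a recap (an optional sketch, not part of the graded assignment), the forward-propagation steps from Section 2 can be collected into a single helper. This assumes you have filled in the `sigmoid` function and created `X_norm`, `W1`, and `W2` in the steps above. ###Code import numpy as np

# Optional recap sketch of the full forward pass from Section 2.
def forward(X, W1, W2):
    Z2 = np.dot(X, W1)    # input layer -> hidden layer
    a2 = sigmoid(Z2)      # hidden layer activations
    Z3 = np.dot(a2, W2)   # hidden layer -> output layer
    yHat = sigmoid(Z3)    # network estimate
    return yHat

forward(X_norm, W1, W2) ###Output _____no_output_____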
dataScience/aula01.ipynb
###Markdown **Data Science Week**- Minerando Dados Getting to know the dataset Mount the drive ###Code from google.colab import drive drive.mount('/content/drive') ###Output _____no_output_____ ###Markdown Importing the basic libraries ###Code import pandas as pd import seaborn as sns import numpy as np import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Loading the Dataset ###Code # load the boston dataset from sklearn.datasets import load_boston boston = load_boston() # dataset description print (boston.DESCR) # create a pandas dataframe data = pd.DataFrame(boston.data, columns=boston.feature_names) # print the first 5 rows of the dataset data.head() # write the file to disk data.to_csv('data.csv') ###Output _____no_output_____ ###Markdown Getting to know the dataset columns **`CRIM`**: Per capita crime rate by region.**`ZN`**: Proportion of residential land zoned for lots over 25,000 square feet.**`INDUS`**: Proportion of non-retail business acres per region.**`CHAS`**: Charles River dummy variable (= 1 if the tract bounds the river; 0 otherwise)**`NOX`**: Nitric oxide concentration (parts per 10 million)**`RM`**: Average number of rooms among the houses in the neighborhood**`Age`**: Proportion of owner-occupied units built before 1940**`DIS`**: Weighted distances to five Boston employment centers**`RAD`**: Index of accessibility to radial highways**`TAX`**: Full-value property tax rate per US $10,000**`B`**: 1000(Bk - 0.63)², where Bk is the proportion of people of African-American descent by region**`PTRATIO`**: Neighborhoods with a higher pupil-to-teacher ratio (higher 'PTRATIO' value)**`LSTAT`**: Percentage of lower-status population**`MEDV`**: Median value of owner-occupied homes in US $1000s Adding the column that will be our target variable ###Code # add the MEDV variable data['MEDV'] = boston.target # print the first 5 rows of the dataframe data.head() data.describe() ###Output _____no_output_____ ###Markdown Data Analysis and Exploration In this step our goal is to get to know the data we are working with.We can use the **Pandas Profiling** tool for this step: ###Code # installing pandas profiling !pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip # import ProfileReport from pandas_profiling import ProfileReport # running the profile profile = ProfileReport(data, title='Relatório - Pandas Profiling', html={'style':{'full_width':True}}) profile ###Output _____no_output_____ ###Markdown **Observations*** *The correlation coefficient ranges from `-1` to `1`. If the value is close to 1, it means there is a strong positive correlation between the variables. When this number is close to -1, the variables have a strong negative correlation.** *The report we ran above shows that our target variable (**MEDV**) is strongly correlated with the variables `LSTAT` and `RM`** *`RAD` and `TAX` are strongly correlated, so we can remove them from our model to avoid multicollinearity.** *The same happens with the columns `DIS` and `AGE`, which have a correlation of -0.75** *The `ZN` column has 73% zero values.* ###Code # saving the report to disk profile.to_file(output_file="Relatorio01.html") ###Output _____no_output_____
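###Markdown As a quick optional check (not part of the original class material), the correlations mentioned in the observations above can be inspected directly with pandas and seaborn, which are already imported: ###Code # Optional sketch: correlation matrix of the Boston housing features.
corr = data.corr()
plt.figure(figsize=(12, 9))
sns.heatmap(corr, annot=True, fmt='.2f', cmap='coolwarm')
plt.title('Correlation matrix of the Boston housing features')
plt.show()

# Correlation of each feature with the target MEDV
print(corr['MEDV'].sort_values(ascending=False)) ###Output _____no_output_____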
Datasets/tweet-dataset-5fold-roberta-96-clean.ipynb
###Markdown Dependencies ###Code import os, warnings, shutil import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from tokenizers import ByteLevelBPETokenizer from sklearn.utils import shuffle from sklearn.model_selection import StratifiedKFold from tweet_utility_scripts import * from tweet_utility_preprocess_roberta_scripts_aux import * import tweet_utility_preprocess_roberta_scripts_text as preprocess_text SEED = 0 warnings.filterwarnings("ignore") pd.set_option('max_colwidth', 120) ###Output _____no_output_____ ###Markdown Tokenizer ###Code MAX_LEN = 96 base_path = '/kaggle/input/qa-transformers/roberta/' vocab_path = base_path + 'roberta-base-vocab.json' merges_path = base_path + 'roberta-base-merges.txt' tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path, lowercase=True, add_prefix_space=True) tokenizer.save('./') ###Output _____no_output_____ ###Markdown Load data ###Code train_df = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/train.csv') # pre-process train_df.dropna(inplace=True) train_df = train_df.reset_index() train_df.drop('index', axis=1, inplace=True) train_df["text"] = train_df["text"].apply(lambda x: x.strip()) train_df["selected_text"] = train_df["selected_text"].apply(lambda x: x.strip()) train_df["text"] = train_df["text"].apply(lambda x: x.lower()) train_df["selected_text"] = train_df["selected_text"].apply(lambda x: x.lower()) train_df['jaccard'] = train_df.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1) train_df['text_len'] = train_df['text'].apply(lambda x : len(x)) train_df['text_wordCnt'] = train_df['text'].apply(lambda x : len(x.split(' '))) train_df['text_tokenCnt'] = train_df['text'].apply(lambda x : len(tokenizer.encode(x).ids)) train_df['selected_text_len'] = train_df['selected_text'].apply(lambda x : len(x)) train_df['selected_text_wordCnt'] = train_df['selected_text'].apply(lambda x : len(x.split(' '))) train_df['selected_text_tokenCnt'] = train_df['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids)) sentiment_cols = train_df['sentiment'].unique() print('Train samples: %s' % len(train_df)) display(train_df.head()) display(train_df.describe()) ###Output Train samples: 27480 ###Markdown Remove noisy samples ###Code print(f'Complete set nunmber of samples {len(train_df)}') dirty_text_list = ['0b3fe0ca78', '4a265d8a34', 'ee20c2fdbe', '7f37ccff0a', 'add398ab57', '90e8facdd7', 'c9ea30009c', '7d665a86e0', '99d16017ae', 'cd89b279ef', 'fefc0ed9f0', '5db6024b06', '2d059a6bc6'] dirty_selected_text_list = ['7da058a4f6', '13259c1890', '25810b3323', '6899e9aa17', 'a99c5a9003', 'a21d9c38a8', '106e3d1042', '6ae7977873', '36c47981f9', '3686cff7dd', '4f57ed7ece', '8b31247f45', '5bada8d821', '9f19792407', '2225e0fa43'] train_df = train_df[~train_df['textID'].isin(dirty_text_list)] train_df = train_df[~train_df['textID'].isin(dirty_selected_text_list)] # Remove jaccard = 0 train_df = train_df[train_df['jaccard'] > 0] train_df = train_df.reset_index() train_df.drop('index', axis=1, inplace=True) print(f'Cleaned set nunmber of samples {len(train_df)}') ###Output Complete set nunmber of samples 27480 Cleaned set nunmber of samples 26882 ###Markdown 5-Fold split ###Code folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=SEED) for fold_n, (train_idx, val_idx) in enumerate(folds.split(train_df, train_df['sentiment'])): print('Fold: %s, Train size: %s, Validation size %s' % (fold_n+1, len(train_idx), len(val_idx))) train_df[('fold_%s' % str(fold_n+1))] = 0 
train_df[('fold_%s' % str(fold_n+1))].loc[train_idx] = 'train' train_df[('fold_%s' % str(fold_n+1))].loc[val_idx] = 'validation' ###Output Fold: 1, Train size: 21505, Validation size 5377 Fold: 2, Train size: 21505, Validation size 5377 Fold: 3, Train size: 21506, Validation size 5376 Fold: 4, Train size: 21506, Validation size 5376 Fold: 5, Train size: 21506, Validation size 5376 ###Markdown Data imputation ###Code train_df['imputed'] = False ##### Data imputation here ##### # # pre-process again # train_df.dropna(inplace=True) # train_df = train_df.reset_index() # train_df.drop('index', axis=1, inplace=True) # train_df["text"] = train_df["text"].apply(lambda x: x.strip()) # train_df["selected_text"] = train_df["selected_text"].apply(lambda x: x.strip()) # train_df["text"] = train_df["text"].apply(lambda x: x.lower()) # train_df["selected_text"] = train_df["selected_text"].apply(lambda x: x.lower()) # train_df['jaccard'] = train_df.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1) # train_df['text_len'] = train_df['text'].apply(lambda x : len(x)) # train_df['text_wordCnt'] = train_df['text'].apply(lambda x : len(x.split(' '))) # train_df['text_tokenCnt'] = train_df['text'].apply(lambda x : len(tokenizer.encode(x).ids)) # train_df['selected_text_len'] = train_df['selected_text'].apply(lambda x : len(x)) # train_df['selected_text_wordCnt'] = train_df['selected_text'].apply(lambda x : len(x.split(' '))) # train_df['selected_text_tokenCnt'] = train_df['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids)) print(f"Original number of samples: {len(train_df[train_df['imputed'] == False])}") print(f"Imputed number of samples: {len(train_df[train_df['imputed'] == True])}") ###Output Original number of samples: 26882 Imputed number of samples: 0 ###Markdown Tokenizer sanity check ###Code for idx in range(10): print('\nRow %d' % idx) max_seq_len = 32 text = train_df['text'].values[idx] selected_text = train_df['selected_text'].values[idx] question = train_df['sentiment'].values[idx] _, (target_start, target_end, _) = preprocess_roberta(' ' + text, selected_text, ' ' + question, tokenizer, max_seq_len) question_encoded = tokenizer.encode(question).ids question_size = len(question_encoded) + 3 decoded_text = decode(target_start.argmax(), target_end.argmax(), text, question_size, tokenizer) print('text : "%s"' % text) print('selected_text: "%s"' % selected_text) print('decoded_text : "%s"' % decoded_text) assert selected_text == decoded_text ###Output Row 0 text : "i`d have responded, if i were going" selected_text: "i`d have responded, if i were going" decoded_text : "i`d have responded, if i were going" Row 1 text : "sooo sad i will miss you here in san diego!!!" selected_text: "sooo sad" decoded_text : "sooo sad" Row 2 text : "my boss is bullying me..." selected_text: "bullying me" decoded_text : "bullying me" Row 3 text : "what interview! 
leave me alone" selected_text: "leave me alone" decoded_text : "leave me alone" Row 4 text : "sons of ****, why couldn`t they put them on the releases we already bought" selected_text: "sons of ****," decoded_text : "sons of ****," Row 5 text : "http://www.dothebouncy.com/smf - some shameless plugging for the best rangers forum on earth" selected_text: "http://www.dothebouncy.com/smf - some shameless plugging for the best rangers forum on earth" decoded_text : "http://www.dothebouncy.com/smf - some shameless plugging for the best rangers forum on earth" Row 6 text : "2am feedings for the baby are fun when he is all smiles and coos" selected_text: "fun" decoded_text : "fun" Row 7 text : "soooo high" selected_text: "soooo high" decoded_text : "soooo high" Row 8 text : "both of you" selected_text: "both of you" decoded_text : "both of you" Row 9 text : "journey!? wow... u just became cooler. hehe... (is that possible!?)" selected_text: "wow... u just became cooler." decoded_text : "wow... u just became cooler." ###Markdown Data generation sanity check ###Code for idx in range(5): print('\nRow %d' % idx) max_seq_len = 24 text = train_df['text'].values[idx] selected_text = train_df['selected_text'].values[idx] question = train_df['sentiment'].values[idx] jaccard = train_df['jaccard'].values[idx] selected_text_wordCnt = train_df['selected_text_wordCnt'].values[idx] x_train, x_train_aux, x_train_aux_2, y_train, y_train_mask, y_train_aux = get_data(train_df[idx:idx+1], tokenizer, max_seq_len, preprocess_fn=preprocess_roberta) print('text : "%s"' % text) print('jaccard : "%.4f"' % jaccard) print('sentiment : "%s"' % question) print('word count : "%d"' % selected_text_wordCnt) print('input_ids : "%s"' % x_train[0][0]) print('attention_mask: "%s"' % x_train[1][0]) print('sentiment : "%d"' % x_train_aux[0]) print('sentiment OHE : "%s"' % x_train_aux_2[0]) print('selected_text : "%s"' % selected_text) print('start : "%s"' % y_train[0][0]) print('end : "%s"' % y_train[1][0]) print('mask : "%s"' % y_train_mask[0]) print('jaccard : "%.4f"' % y_train_aux[0][0]) print('word count : "%d"' % y_train_aux[1][0]) assert len(x_train) == 2 assert len(x_train_aux) == 1 assert len(x_train_aux_2) == 1 assert len(y_train) == 2 assert len(y_train_mask) == 1 assert len(y_train_aux) == 3 assert len(x_train[0][0]) == len(x_train[1][0]) == max_seq_len assert len(y_train[0][0]) == len(y_train[1][0]) == len(y_train_mask[0]) == max_seq_len ###Output Row 0 text : "i`d have responded, if i were going" jaccard : "1.0000" sentiment : "neutral" word count : "7" input_ids : "[ 0 7974 2 2 939 12905 417 33 2334 6 114 939 58 164 2 1 1 1 1 1 1 1 1 1]" attention_mask: "[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0]" sentiment : "1" sentiment OHE : "[0 1 0]" selected_text : "i`d have responded, if i were going" start : "[0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]" mask : "[0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0]" jaccard : "1.0000" word count : "7" Row 1 text : "sooo sad i will miss you here in san diego!!!" 
jaccard : "0.2000" sentiment : "negative" word count : "2" input_ids : "[ 0 2430 2 2 98 3036 5074 939 40 2649 47 259 11 15610 1597 2977 16506 2 1 1 1 1 1 1]" attention_mask: "[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0]" sentiment : "0" sentiment OHE : "[1 0 0]" selected_text : "sooo sad" start : "[0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" mask : "[0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" jaccard : "0.2000" word count : "2" Row 2 text : "my boss is bullying me..." jaccard : "0.1667" sentiment : "negative" word count : "2" input_ids : "[ 0 2430 2 2 127 3504 16 11902 162 734 2 1 1 1 1 1 1 1 1 1 1 1 1 1]" attention_mask: "[1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0]" sentiment : "0" sentiment OHE : "[1 0 0]" selected_text : "bullying me" start : "[0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" mask : "[0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" jaccard : "0.1667" word count : "2" Row 3 text : "what interview! leave me alone" jaccard : "0.6000" sentiment : "negative" word count : "3" input_ids : "[ 0 2430 2 2 99 1194 328 989 162 1937 2 1 1 1 1 1 1 1 1 1 1 1 1 1]" attention_mask: "[1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0]" sentiment : "0" sentiment OHE : "[1 0 0]" selected_text : "leave me alone" start : "[0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" mask : "[0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" jaccard : "0.6000" word count : "3" Row 4 text : "sons of ****, why couldn`t they put them on the releases we already bought" jaccard : "0.2143" sentiment : "negative" word count : "3" input_ids : "[ 0 2430 2 2 7250 9 31095 6 596 1705 12905 90 51 342 106 15 5 8255 52 416 2162 2 1 1]" attention_mask: "[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0]" sentiment : "0" sentiment OHE : "[1 0 0]" selected_text : "sons of ****," start : "[0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" mask : "[0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" jaccard : "0.2143" word count : "3" ###Markdown Sentiment distribution ###Code for fold_n in range(folds.n_splits): fold_n += 1 fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 8.7)) fig.suptitle('Fold %s' % fold_n, fontsize=22) sns.countplot(x="sentiment", data=train_df[train_df[('fold_%s' % fold_n)] == 'train'], palette="GnBu_d", order=sentiment_cols, ax=ax1).set_title('Train') sns.countplot(x="sentiment", data=train_df[train_df[('fold_%s' % fold_n)] == 'validation'], palette="GnBu_d", order=sentiment_cols, ax=ax2).set_title('Validation') sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Token count distribution ###Code for fold_n in range(folds.n_splits): fold_n += 1 fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 8.7), sharex=True) fig.suptitle('Fold %s' % fold_n, fontsize=22) sns.distplot(train_df[train_df[('fold_%s' % fold_n)] == 'train']['text_tokenCnt'], ax=ax1).set_title("Train") sns.distplot(train_df[train_df[('fold_%s' % fold_n)] == 'validation']['text_tokenCnt'], ax=ax2).set_title("Validation") sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Output 5-fold set ###Code train_df.to_csv('5-fold.csv', index=False) display(train_df.head()) for fold_n in range(folds.n_splits): fold_n += 1 base_path = 'fold_%d/' % fold_n # Create dir os.makedirs(base_path) x_train, x_train_aux, x_train_aux_2, y_train, y_train_mask, y_train_aux = 
get_data(train_df[train_df[('fold_%s' % fold_n)] == 'train'], tokenizer, MAX_LEN, preprocess_fn=preprocess_roberta) x_valid, x_valid_aux, x_valid_aux_2, y_valid, y_valid_mask, y_valid_aux = get_data(train_df[(train_df[('fold_%s' % fold_n)] == 'validation') & (train_df['imputed'] == False)], tokenizer, MAX_LEN, preprocess_fn=preprocess_roberta) x_train, x_train_aux, x_train_aux_2 = np.array(x_train), np.array(x_train_aux), np.array(x_train_aux_2) y_train, y_train_mask, y_train_aux = np.array(y_train), np.array(y_train_mask), np.array(y_train_aux) x_valid, x_valid_aux, x_valid_aux_2 = np.array(x_valid), np.array(x_valid_aux), np.array(x_valid_aux_2) y_valid, y_valid_mask, y_valid_aux = np.array(y_valid), np.array(y_valid_mask), np.array(y_valid_aux) print('\nFOLD: %d' % (fold_n)) print('x_train, y_train', x_train.shape, y_train.shape) print('x_valid, y_valid', x_valid.shape, y_valid.shape) print('x_train_aux, y_train_aux, x_train_aux_2', x_train_aux.shape, y_train_aux.shape, x_train_aux_2.shape) print('x_valid_aux, y_valid_aux, x_valid_aux_2', x_valid_aux.shape, y_valid_aux.shape, x_valid_aux_2.shape) print('y_train_mask', y_train_mask.shape) print('y_valid_mask', y_valid_mask.shape) np.save(base_path + 'x_train', x_train) np.save(base_path + 'y_train', y_train) np.save(base_path + 'x_valid', x_valid) np.save(base_path + 'y_valid', y_valid) np.save(base_path + 'x_train_aux', x_train_aux) np.save(base_path + 'x_train_aux_2', x_train_aux_2) np.save(base_path + 'y_train_mask', y_train_mask) np.save(base_path + 'y_train_aux', y_train_aux) np.save(base_path + 'x_valid_aux', x_valid_aux) np.save(base_path + 'x_valid_aux_2', x_valid_aux_2) np.save(base_path + 'y_valid_mask', y_valid_mask) np.save(base_path + 'y_valid_aux', y_valid_aux) # Compress logs dir !tar -czf fold_1.tar.gz fold_1 !tar -czf fold_2.tar.gz fold_2 !tar -czf fold_3.tar.gz fold_3 !tar -czf fold_4.tar.gz fold_4 !tar -czf fold_5.tar.gz fold_5 # Delete logs dir shutil.rmtree('fold_1') shutil.rmtree('fold_2') shutil.rmtree('fold_3') shutil.rmtree('fold_4') shutil.rmtree('fold_5') ###Output _____no_output_____ ###Markdown Output 5-fold set (positive and negative) ###Code for fold_n in range(folds.n_splits): fold_n += 1 base_path = 'polar_fold_%d/' % fold_n # Create dir os.makedirs(base_path) train_fold = train_df[train_df[('fold_%s' % fold_n)] == 'train'].copy() valid_fold = train_df[(train_df[('fold_%s' % fold_n)] == 'validation') & (train_df['imputed'] == False)].copy() train_fold = pd.concat([train_fold[train_fold['sentiment'] == 'negative'], train_fold[train_fold['sentiment'] == 'positive']]) valid_fold = pd.concat([valid_fold[valid_fold['sentiment'] == 'negative'], valid_fold[valid_fold['sentiment'] == 'positive']]) train_fold = shuffle(train_fold, random_state=SEED).reset_index(drop=True) valid_fold = shuffle(valid_fold, random_state=SEED).reset_index(drop=True) x_train, x_train_aux, x_train_aux_2, y_train, y_train_mask, y_train_aux = get_data(train_fold, tokenizer, MAX_LEN, preprocess_fn=preprocess_roberta) x_valid, x_valid_aux, x_valid_aux_2, y_valid, y_valid_mask, y_valid_aux = get_data(valid_fold, tokenizer, MAX_LEN, preprocess_fn=preprocess_roberta) x_train, x_train_aux, x_train_aux_2 = np.array(x_train), np.array(x_train_aux), np.array(x_train_aux_2) y_train, y_train_mask, y_train_aux = np.array(y_train), np.array(y_train_mask), np.array(y_train_aux) x_valid, x_valid_aux, x_valid_aux_2 = np.array(x_valid), np.array(x_valid_aux), np.array(x_valid_aux_2) y_valid, y_valid_mask, y_valid_aux = np.array(y_valid), 
np.array(y_valid_mask), np.array(y_valid_aux) print('\nFOLD: %d' % (fold_n)) print('x_train, y_train', x_train.shape, y_train.shape) print('x_valid, y_valid', x_valid.shape, y_valid.shape) print('x_train_aux, y_train_aux, x_train_aux_2', x_train_aux.shape, y_train_aux.shape, x_train_aux_2.shape) print('x_valid_aux, y_valid_aux, x_valid_aux_2', x_valid_aux.shape, y_valid_aux.shape, x_valid_aux_2.shape) print('y_train_mask', y_train_mask.shape) print('y_valid_mask', y_valid_mask.shape) np.save(base_path + 'x_train', x_train) np.save(base_path + 'y_train', y_train) np.save(base_path + 'x_valid', x_valid) np.save(base_path + 'y_valid', y_valid) np.save(base_path + 'x_train_aux', x_train_aux) np.save(base_path + 'x_train_aux_2', x_train_aux_2) np.save(base_path + 'y_train_mask', y_train_mask) np.save(base_path + 'y_train_aux', y_train_aux) np.save(base_path + 'x_valid_aux', x_valid_aux) np.save(base_path + 'x_valid_aux_2', x_valid_aux_2) np.save(base_path + 'y_valid_mask', y_valid_mask) np.save(base_path + 'y_valid_aux', y_valid_aux) # Compress logs dir !tar -czf polar_fold_1.tar.gz polar_fold_1 !tar -czf polar_fold_2.tar.gz polar_fold_2 !tar -czf polar_fold_3.tar.gz polar_fold_3 !tar -czf polar_fold_4.tar.gz polar_fold_4 !tar -czf polar_fold_5.tar.gz polar_fold_5 # Delete logs dir shutil.rmtree('polar_fold_1') shutil.rmtree('polar_fold_2') shutil.rmtree('polar_fold_3') shutil.rmtree('polar_fold_4') shutil.rmtree('polar_fold_5') ###Output FOLD: 1 x_train, y_train (2, 12624, 96) (2, 12624, 96) x_valid, y_valid (2, 3156, 96) (2, 3156, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 12624) (3, 12624) (12624, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 3156) (3, 3156) (3156, 3) y_train_mask (12624, 96) y_valid_mask (3156, 96) FOLD: 2 x_train, y_train (2, 12624, 96) (2, 12624, 96) x_valid, y_valid (2, 3156, 96) (2, 3156, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 12624) (3, 12624) (12624, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 3156) (3, 3156) (3156, 3) y_train_mask (12624, 96) y_valid_mask (3156, 96) FOLD: 3 x_train, y_train (2, 12624, 96) (2, 12624, 96) x_valid, y_valid (2, 3156, 96) (2, 3156, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 12624) (3, 12624) (12624, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 3156) (3, 3156) (3156, 3) y_train_mask (12624, 96) y_valid_mask (3156, 96) FOLD: 4 x_train, y_train (2, 12624, 96) (2, 12624, 96) x_valid, y_valid (2, 3156, 96) (2, 3156, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 12624) (3, 12624) (12624, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 3156) (3, 3156) (3156, 3) y_train_mask (12624, 96) y_valid_mask (3156, 96) FOLD: 5 x_train, y_train (2, 12624, 96) (2, 12624, 96) x_valid, y_valid (2, 3156, 96) (2, 3156, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 12624) (3, 12624) (12624, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 3156) (3, 3156) (3156, 3) y_train_mask (12624, 96) y_valid_mask (3156, 96) ###Markdown Output 5-fold set (balanced) ###Code for fold_n in range(folds.n_splits): fold_n += 1 base_path = 'balanced_fold_%d/' % fold_n # Create dir os.makedirs(base_path) train_fold = train_df[train_df[('fold_%s' % fold_n)] == 'train'].copy() valid_fold = train_df[(train_df[('fold_%s' % fold_n)] == 'validation') & (train_df['imputed'] == False)].copy() # Sample data by lower bound lower_count_train = min(len(train_fold[train_fold['sentiment'] == 'neutral']), len(train_fold[train_fold['sentiment'] == 'negative']), len(train_fold[train_fold['sentiment'] == 'positive'])) lower_count_valid = 
min(len(valid_fold[valid_fold['sentiment'] == 'neutral']), len(valid_fold[valid_fold['sentiment'] == 'negative']), len(valid_fold[valid_fold['sentiment'] == 'positive'])) train_fold = pd.concat([train_fold[train_fold['sentiment'] == 'neutral'].sample(n=lower_count_train, random_state=SEED), train_fold[train_fold['sentiment'] == 'negative'].sample(n=lower_count_train, random_state=SEED), train_fold[train_fold['sentiment'] == 'positive'].sample(n=lower_count_train, random_state=SEED), ]) valid_fold = pd.concat([valid_fold[valid_fold['sentiment'] == 'neutral'].sample(n=lower_count_valid, random_state=SEED), valid_fold[valid_fold['sentiment'] == 'negative'].sample(n=lower_count_valid, random_state=SEED), valid_fold[valid_fold['sentiment'] == 'positive'].sample(n=lower_count_valid, random_state=SEED), ]) train_fold = shuffle(train_fold, random_state=SEED).reset_index(drop=True) valid_fold = shuffle(valid_fold, random_state=SEED).reset_index(drop=True) x_train, x_train_aux, x_train_aux_2, y_train, y_train_mask, y_train_aux = get_data(train_fold, tokenizer, MAX_LEN, preprocess_fn=preprocess_roberta) x_valid, x_valid_aux, x_valid_aux_2, y_valid, y_valid_mask, y_valid_aux = get_data(valid_fold, tokenizer, MAX_LEN, preprocess_fn=preprocess_roberta) x_train, x_train_aux, x_train_aux_2 = np.array(x_train), np.array(x_train_aux), np.array(x_train_aux_2) y_train, y_train_mask, y_train_aux = np.array(y_train), np.array(y_train_mask), np.array(y_train_aux) x_valid, x_valid_aux, x_valid_aux_2 = np.array(x_valid), np.array(x_valid_aux), np.array(x_valid_aux_2) y_valid, y_valid_mask, y_valid_aux = np.array(y_valid), np.array(y_valid_mask), np.array(y_valid_aux) print('\nFOLD: %d' % (fold_n)) print('x_train, y_train', x_train.shape, y_train.shape) print('x_valid, y_valid', x_valid.shape, y_valid.shape) print('x_train_aux, y_train_aux, x_train_aux_2', x_train_aux.shape, y_train_aux.shape, x_train_aux_2.shape) print('x_valid_aux, y_valid_aux, x_valid_aux_2', x_valid_aux.shape, y_valid_aux.shape, x_valid_aux_2.shape) print('y_train_mask', y_train_mask.shape) print('y_valid_mask', y_valid_mask.shape) np.save(base_path + 'x_train', x_train) np.save(base_path + 'y_train', y_train) np.save(base_path + 'x_valid', x_valid) np.save(base_path + 'y_valid', y_valid) np.save(base_path + 'x_train_aux', x_train_aux) np.save(base_path + 'x_train_aux_2', x_train_aux_2) np.save(base_path + 'y_train_mask', y_train_mask) np.save(base_path + 'y_train_aux', y_train_aux) np.save(base_path + 'x_valid_aux', x_valid_aux) np.save(base_path + 'x_valid_aux_2', x_valid_aux_2) np.save(base_path + 'y_valid_mask', y_valid_mask) np.save(base_path + 'y_valid_aux', y_valid_aux) # Compress logs dir !tar -czf balanced_fold_1.tar.gz balanced_fold_1 !tar -czf balanced_fold_2.tar.gz balanced_fold_2 !tar -czf balanced_fold_3.tar.gz balanced_fold_3 !tar -czf balanced_fold_4.tar.gz balanced_fold_4 !tar -czf balanced_fold_5.tar.gz balanced_fold_5 # Delete logs dir shutil.rmtree('balanced_fold_1') shutil.rmtree('balanced_fold_2') shutil.rmtree('balanced_fold_3') shutil.rmtree('balanced_fold_4') shutil.rmtree('balanced_fold_5') ###Output FOLD: 1 x_train, y_train (2, 17991, 96) (2, 17991, 96) x_valid, y_valid (2, 4497, 96) (2, 4497, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 17991) (3, 17991) (17991, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 4497) (3, 4497) (4497, 3) y_train_mask (17991, 96) y_valid_mask (4497, 96) FOLD: 2 x_train, y_train (2, 17991, 96) (2, 17991, 96) x_valid, y_valid (2, 4497, 96) (2, 4497, 96) x_train_aux, y_train_aux, 
x_train_aux_2 (1, 17991) (3, 17991) (17991, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 4497) (3, 4497) (4497, 3) y_train_mask (17991, 96) y_valid_mask (4497, 96) FOLD: 3 x_train, y_train (2, 17988, 96) (2, 17988, 96) x_valid, y_valid (2, 4500, 96) (2, 4500, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 17988) (3, 17988) (17988, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 4500) (3, 4500) (4500, 3) y_train_mask (17988, 96) y_valid_mask (4500, 96) FOLD: 4 x_train, y_train (2, 17991, 96) (2, 17991, 96) x_valid, y_valid (2, 4497, 96) (2, 4497, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 17991) (3, 17991) (17991, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 4497) (3, 4497) (4497, 3) y_train_mask (17991, 96) y_valid_mask (4497, 96) FOLD: 5 x_train, y_train (2, 17991, 96) (2, 17991, 96) x_valid, y_valid (2, 4497, 96) (2, 4497, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 17991) (3, 17991) (17991, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 4497) (3, 4497) (4497, 3) y_train_mask (17991, 96) y_valid_mask (4497, 96) ###Markdown Tokenizer sanity check (no QA) ###Code for idx in range(5): print('\nRow %d' % idx) max_seq_len = 32 text = train_df['text'].values[idx] selected_text = train_df['selected_text'].values[idx] _, (target_start, target_end, _) = preprocess_text.preprocess_roberta(' ' + text, selected_text, tokenizer, max_seq_len) decoded_text = preprocess_text.decode(target_start.argmax(), target_end.argmax(), text, tokenizer) print('text : "%s"' % text) print('selected_text: "%s"' % selected_text) print('decoded_text : "%s"' % decoded_text) assert selected_text == decoded_text ###Output Row 0 text : "i`d have responded, if i were going" selected_text: "i`d have responded, if i were going" decoded_text : "i`d have responded, if i were going" Row 1 text : "sooo sad i will miss you here in san diego!!!" selected_text: "sooo sad" decoded_text : "sooo sad" Row 2 text : "my boss is bullying me..." selected_text: "bullying me" decoded_text : "bullying me" Row 3 text : "what interview! 
leave me alone" selected_text: "leave me alone" decoded_text : "leave me alone" Row 4 text : "sons of ****, why couldn`t they put them on the releases we already bought" selected_text: "sons of ****," decoded_text : "sons of ****," ###Markdown Data generation sanity check (no QA) ###Code for idx in range(5): print('\nRow %d' % idx) max_seq_len = 24 text = train_df['text'].values[idx] selected_text = train_df['selected_text'].values[idx] jaccard = train_df['jaccard'].values[idx] selected_text_wordCnt = train_df['selected_text_wordCnt'].values[idx] x_train, x_train_aux, x_train_aux_2, y_train, y_train_mask, y_train_aux = preprocess_text.get_data(train_df[idx:idx+1], tokenizer, max_seq_len, preprocess_fn=preprocess_text.preprocess_roberta) print('text : "%s"' % text) print('jaccard : "%.4f"' % jaccard) print('sentiment : "%s"' % question) print('word count : "%d"' % selected_text_wordCnt) print('input_ids : "%s"' % x_train[0][0]) print('attention_mask: "%s"' % x_train[1][0]) print('sentiment : "%d"' % x_train_aux[0]) print('sentiment OHE : "%s"' % x_train_aux_2[0]) print('selected_text : "%s"' % selected_text) print('start : "%s"' % y_train[0][0]) print('end : "%s"' % y_train[1][0]) print('mask : "%s"' % y_train_mask[0]) print('jaccard : "%.4f"' % y_train_aux[0][0]) print('word count : "%d"' % y_train_aux[1][0]) assert len(x_train) == 2 assert len(x_train_aux) == 1 assert len(x_train_aux_2) == 1 assert len(y_train) == 2 assert len(y_train_mask) == 1 assert len(y_train_aux) == 3 assert len(x_train[0][0]) == len(x_train[1][0]) == max_seq_len assert len(y_train[0][0]) == len(y_train[1][0]) == len(y_train_mask[0]) == max_seq_len ###Output Row 0 text : "i`d have responded, if i were going" jaccard : "1.0000" sentiment : "negative" word count : "7" input_ids : "[ 939 12905 417 33 2334 6 114 939 58 164 1 1 1 1 1 1 1 1 1 1 1 1 1 1]" attention_mask: "[1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" sentiment : "1" sentiment OHE : "[0 1 0]" selected_text : "i`d have responded, if i were going" start : "[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" mask : "[1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" jaccard : "1.0000" word count : "7" Row 1 text : "sooo sad i will miss you here in san diego!!!" jaccard : "0.2000" sentiment : "negative" word count : "2" input_ids : "[ 98 3036 5074 939 40 2649 47 259 11 15610 1597 2977 16506 1 1 1 1 1 1 1 1 1 1 1]" attention_mask: "[1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0]" sentiment : "0" sentiment OHE : "[1 0 0]" selected_text : "sooo sad" start : "[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" mask : "[1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" jaccard : "0.2000" word count : "2" Row 2 text : "my boss is bullying me..." jaccard : "0.1667" sentiment : "negative" word count : "2" input_ids : "[ 127 3504 16 11902 162 734 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]" attention_mask: "[1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" sentiment : "0" sentiment OHE : "[1 0 0]" selected_text : "bullying me" start : "[0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" mask : "[0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" jaccard : "0.1667" word count : "2" Row 3 text : "what interview! 
leave me alone" jaccard : "0.6000" sentiment : "negative" word count : "3" input_ids : "[ 99 1194 328 989 162 1937 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]" attention_mask: "[1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" sentiment : "0" sentiment OHE : "[1 0 0]" selected_text : "leave me alone" start : "[0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" mask : "[0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" jaccard : "0.6000" word count : "3" Row 4 text : "sons of ****, why couldn`t they put them on the releases we already bought" jaccard : "0.2143" sentiment : "negative" word count : "3" input_ids : "[ 7250 9 31095 6 596 1705 12905 90 51 342 106 15 5 8255 52 416 2162 1 1 1 1 1 1 1]" attention_mask: "[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0]" sentiment : "0" sentiment OHE : "[1 0 0]" selected_text : "sons of ****," start : "[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" end : "[0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" mask : "[1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]" jaccard : "0.2143" word count : "3" ###Markdown Output 5-fold set (no QA) ###Code for fold_n in range(folds.n_splits): fold_n += 1 base_path = 'no_qa_fold_%d/' % fold_n # Create dir os.makedirs(base_path) x_train, x_train_aux, x_train_aux_2, y_train, y_train_mask, y_train_aux = preprocess_text.get_data(train_df[train_df[('fold_%s' % fold_n)] == 'train'], tokenizer, MAX_LEN, preprocess_fn=preprocess_text.preprocess_roberta) x_valid, x_valid_aux, x_valid_aux_2, y_valid, y_valid_mask, y_valid_aux = preprocess_text.get_data(train_df[(train_df[('fold_%s' % fold_n)] == 'validation') & (train_df['imputed'] == False)], tokenizer, MAX_LEN, preprocess_fn=preprocess_text.preprocess_roberta) x_train, x_train_aux, x_train_aux_2 = np.array(x_train), np.array(x_train_aux), np.array(x_train_aux_2) y_train, y_train_mask, y_train_aux = np.array(y_train), np.array(y_train_mask), np.array(y_train_aux) x_valid, x_valid_aux, x_valid_aux_2 = np.array(x_valid), np.array(x_valid_aux), np.array(x_valid_aux_2) y_valid, y_valid_mask, y_valid_aux = np.array(y_valid), np.array(y_valid_mask), np.array(y_valid_aux) print('\nFOLD: %d' % (fold_n)) print('x_train, y_train', x_train.shape, y_train.shape) print('x_valid, y_valid', x_valid.shape, y_valid.shape) print('x_train_aux, y_train_aux, x_train_aux_2', x_train_aux.shape, y_train_aux.shape, x_train_aux_2.shape) print('x_valid_aux, y_valid_aux, x_valid_aux_2', x_valid_aux.shape, y_valid_aux.shape, x_valid_aux_2.shape) print('y_train_mask', y_train_mask.shape) print('y_valid_mask', y_valid_mask.shape) np.save(base_path + 'x_train', x_train) np.save(base_path + 'y_train', y_train) np.save(base_path + 'x_valid', x_valid) np.save(base_path + 'y_valid', y_valid) np.save(base_path + 'x_train_aux', x_train_aux) np.save(base_path + 'x_train_aux_2', x_train_aux_2) np.save(base_path + 'y_train_mask', y_train_mask) np.save(base_path + 'y_train_aux', y_train_aux) np.save(base_path + 'x_valid_aux', x_valid_aux) np.save(base_path + 'x_valid_aux_2', x_valid_aux_2) np.save(base_path + 'y_valid_mask', y_valid_mask) np.save(base_path + 'y_valid_aux', y_valid_aux) # Compress logs dir !tar -czf no_qa_fold_1.tar.gz no_qa_fold_1 !tar -czf no_qa_fold_2.tar.gz no_qa_fold_2 !tar -czf no_qa_fold_3.tar.gz no_qa_fold_3 !tar -czf no_qa_fold_4.tar.gz no_qa_fold_4 !tar -czf no_qa_fold_5.tar.gz no_qa_fold_5 # Delete logs dir shutil.rmtree('no_qa_fold_1') shutil.rmtree('no_qa_fold_2') shutil.rmtree('no_qa_fold_3') shutil.rmtree('no_qa_fold_4') 
shutil.rmtree('no_qa_fold_5') ###Output FOLD: 1 x_train, y_train (2, 21505, 96) (2, 21505, 96) x_valid, y_valid (2, 5377, 96) (2, 5377, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 21505) (3, 21505) (21505, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 5377) (3, 5377) (5377, 3) y_train_mask (21505, 96) y_valid_mask (5377, 96) FOLD: 2 x_train, y_train (2, 21505, 96) (2, 21505, 96) x_valid, y_valid (2, 5377, 96) (2, 5377, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 21505) (3, 21505) (21505, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 5377) (3, 5377) (5377, 3) y_train_mask (21505, 96) y_valid_mask (5377, 96) FOLD: 3 x_train, y_train (2, 21506, 96) (2, 21506, 96) x_valid, y_valid (2, 5376, 96) (2, 5376, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 21506) (3, 21506) (21506, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 5376) (3, 5376) (5376, 3) y_train_mask (21506, 96) y_valid_mask (5376, 96) FOLD: 4 x_train, y_train (2, 21506, 96) (2, 21506, 96) x_valid, y_valid (2, 5376, 96) (2, 5376, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 21506) (3, 21506) (21506, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 5376) (3, 5376) (5376, 3) y_train_mask (21506, 96) y_valid_mask (5376, 96) FOLD: 5 x_train, y_train (2, 21506, 96) (2, 21506, 96) x_valid, y_valid (2, 5376, 96) (2, 5376, 96) x_train_aux, y_train_aux, x_train_aux_2 (1, 21506) (3, 21506) (21506, 3) x_valid_aux, y_valid_aux, x_valid_aux_2 (1, 5376) (3, 5376) (5376, 3) y_train_mask (21506, 96) y_valid_mask (5376, 96) ###Markdown Test set EDA ###Code test_df = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv') # pre-process test_df["text"] = test_df["text"].apply(lambda x: x.strip()) test_df["text"] = test_df["text"].apply(lambda x: x.lower()) test_df['text_len'] = test_df['text'].apply(lambda x : len(x)) test_df['text_wordCnt'] = test_df['text'].apply(lambda x : len(x.split(' '))) test_df['text_tokenCnt'] = test_df['text'].apply(lambda x : len(tokenizer.encode(x).ids)) print('Test samples: %s' % len(test_df)) display(test_df.head()) display(test_df.describe()) ###Output Test samples: 3534 ###Markdown Sentiment distribution ###Code fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 8.7)) sns.countplot(x="sentiment", data=train_df, palette="GnBu_d", order=sentiment_cols, ax=ax1).set_title('Train') sns.countplot(x="sentiment", data=test_df, palette="GnBu_d", order=sentiment_cols, ax=ax2).set_title('Test') sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Word count distribution ###Code fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 8.7), sharex=True) sns.distplot(train_df['text_wordCnt'], ax=ax1).set_title("Train") sns.distplot(test_df['text_wordCnt'], ax=ax2).set_title("Test") sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Token count distribution ###Code fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 8.7), sharex=True) sns.distplot(train_df['text_tokenCnt'], ax=ax1).set_title("Train") sns.distplot(test_df['text_tokenCnt'], ax=ax2).set_title("Test") sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Token count distribution (selected_text) ###Code fig, ax = plt.subplots(1, 1, figsize=(20, 5), sharex=True) sns.distplot(train_df['selected_text_tokenCnt'], ax=ax).set_title("Train") sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Original set ###Code original_df = pd.read_csv("https://raw.githubusercontent.com/Galanopoulog/DATA607-Project-4/master/TextEmotion.csv") # pre-process original_df.dropna(inplace=True) original_df = 
original_df.reset_index()
original_df.drop('index', axis=1, inplace=True)
original_df['content'] = original_df['content'].apply(lambda x: x.strip())
original_df['content'] = original_df['content'].apply(lambda x: x.lower())
original_df['text_len'] = original_df['content'].apply(lambda x : len(x))
original_df['text_wordCnt'] = original_df['content'].apply(lambda x : len(x.split(' ')))
original_df['text_tokenCnt'] = original_df['content'].apply(lambda x : len(tokenizer.encode(x).ids))

display(original_df.head())
display(original_df.describe())

# Word count distribution of the external "original" (TextEmotion) set
fig, ax = plt.subplots(1, 1, figsize=(20, 6), sharex=True)
sns.distplot(original_df['text_wordCnt'], ax=ax).set_title("Original")
sns.despine()
plt.show()

# Token count distribution of the external "original" (TextEmotion) set
fig, ax = plt.subplots(1, 1, figsize=(20, 6), sharex=True)
sns.distplot(original_df['text_tokenCnt'], ax=ax).set_title("Original")
sns.despine()
plt.show() ###Output _____no_output_____
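###Markdown Note on the `jaccard` column: the sanity checks above print a `jaccard` value for each row, i.e. the word-level Jaccard similarity between `text` and `selected_text` (the evaluation metric of the Tweet Sentiment Extraction task). The helper itself is defined in an earlier part of the notebook that is not shown here, so the following is only a minimal sketch of what such a function typically looks like; the exact name and tokenisation are assumptions, not the notebook's verified implementation.

```python
def jaccard(str1, str2):
    # Word-level Jaccard similarity: intersection over union of the
    # whitespace-separated token sets of the two (lower-cased) strings.
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    if not a and not b:
        return 0.0
    c = a.intersection(b)
    return float(len(c)) / (len(a) + len(b) - len(c))


# Consistent with the values printed above, e.g. Row 1:
# "sooo sad i will miss you here in san diego!!!" vs. "sooo sad" -> 0.2000
print('%.4f' % jaccard("sooo sad i will miss you here in san diego!!!", "sooo sad"))
```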
框架/Redis/Redis详解(1)——为什么我们都需要了解Redis.ipynb
###Markdown 原文: 一、前言从我第一次使用Redis来帮助加速算法运行速度开始,就把Redis应用在了各个项目中,每次带来的体验都非常得好,python多进程+Redis的使用帮助我把单进程运行十几个小时的程序加速到了只需要10分钟左右,也帮助我把本来需要运行十几分钟的项目加速到了几十秒就能运行结束,同时我也喜欢Redis项目本身的小巧和精致,再加上Redis目前业界使用的非常多,也常作为面试题目出现,所以在这里计划写些关于Redis的介绍,预计总共写两篇,第一篇主要介绍Redis的整体的一些设计和思想,第二篇会主要介绍Redis集群的一些研究,希望能帮助大家熟悉认识Redis,并鼓励在你的项目中能尝试使用Redis。本篇主要会涉及到如下内容:- Redis是什么- 为什么Redis速度能够这么快- Redis支持写入的数据结构都有哪些及其底层实现方式是什么- 内存资源稀缺,能够存储的键值数目有限,当Redis键值存不下时,该如何淘汰掉已有的键- Redis进程在内存中存储数据,如果Redis进程崩溃了,进程中的数据会丢失,那么Redis如何利用持久化来尽可能的保证数据的安全性- Redis的python客户端及一些常用的高效使用手段 二、Redis是什么Redis的全称是REmote DIctionary Server,是一个高效的内存键值数据库,常被用来做分布式的高速缓存,相比较我们常规使用的Mysql、MongoDB等数据库,Redis的最大特点在于数据读写全部在内存中进行,进而带来极大的效率优势。相比较其他的内存键值存储系统如Memcached, Redis支持更多的数据结构,极大的提升了使用的易用性。同时Redis采用典型的CS架构, 并且有着非常丰富的不同语言客户端支持,本篇文章的最后也会向大家介绍同步和异步模式下的两个python语言的Redis客户端使用。![pic](../../assets/cs.jpg)Redis采用CS架构 三、Redis为什么这么快Redis最大的好处就是快,Redis为什么能做到这么快呢?主要的原因有三点- **数据读写都在内存中完成**。从下图中我们可以看出,即使使用SSD,内存的读写速度要比外存的数据的读写速度快1000倍左右,如果你的电脑还没装上SSD,还是机械硬盘,那内存的读写速度比硬盘的读写速度就要快100000倍,那么基于内存的数据库的读写速度优势自然就是巨大的。![pic](../../assets/cache.jpg)不同存储层次的访问速度对比- **单线程请求处理**,这个主要是实现上的选择。也许同学会有疑惑,为什么不采用多线程进行并行读写呢?这里主要的原因仍然是Redis基于内存读写,多线程并行对数据读取的确能带来好处,但是同样带来了数据写入时**锁的开销以及线程切换的开销**。再大量的写入情况下,过多的锁带来的时间消耗比多线程带来的多核利用优势更大。- **I/O多路复用技术**。I/O多路复用我们又称之为事件驱动,Redis基于epoll等技术实现了网络部分响应的同步非阻塞I/O模式。Redis的I/O主要集中在了读写socket上,同步阻塞下,向客户端发送数据的时候,Redis需要一直等到对应客户端的socket可写才会去写,直到写完了再服务下一个请求,使用epoll等系统调用,把socket是否可读写的状态监控交给了操作系统,即Redis只会在操作系统告知其可读或者可写的socket文件的时候采取读写,进而节省了等IO的时间。关于epoll的具体介绍可以参考这一篇文章**[链接](https://blog.csdn.net/HDUTigerkin/article/details/7517390)。**以上三点是Redis为什么这么快的原因(**划重点,敲黑板!!**),内存读写是最主要的,其他两个技术选型对此也有所帮助。 四、Redis支持的数据结构我们要把数据存到内存里面,怎么存呢?理论上来讲,内存KV数据存储其实只需要支持字符串数据存取就能支持所有的数据类型存储了,至于列表、字典的存储,我们只需要将数据进行序列化就行。缺点就是用户每次要修改数据都要获得所有的数据,修改结束之后还得把所有的数据再传回去,这样不但增加了每次网络的传输数据体积,而且使用体验也不是很好,因为需要用户自己来解析数据,事实上这就是Memcached的做法。Redis为了提高易用性,支持了更加丰富的数据结构,最常用的便是**String、List、Hash、Set、Sorted Set**五种。接下来我们一一介绍五种数据结构,主要介绍其特点和底层实现,这样我们就好估计每种数据结构的操作时间复杂度。 StringString和我们常规理解的字符串基本一致,主要存储序列化后的字符串,支持写入原生字符串也支持写入数字类型。**String的存取复杂度均为O(1)**。主要支持的操作如下表|命令|含义||:---|:---||SET| 设置键值||GET| 获得给定的键||DEL| 删除给定的键| ListList即为列表,List在Redis底层采用的是**双向链**表实现的,所以我们会发现Redis的List操作命令有左右之分,比如`LPUSH`、`RPUSH`,实际上就是双端列表左右两端的存取。对于列表的端点插入和查询时间复杂度为O(1), 对于中间某个index的位置的值的获取以及对于index处于[start, end]的连续多个值的读取就是O(n)的复杂度(n为列表的长度),在我们的项目中,我们用List来存储疾病列表,来帮助实现用户搜索疾病时的即时自动补全。列表的主要命令如下:|命令|含义||:---|:---|LPUSH/RPUSH|向列表的左端/右端插入数据LPOP/RPOP| 向列表的左端/右端删除数据LRANGE/RANGE| 去除从左/右端开始计数的位置处于给定[start, end] 之间的所有 value|LINDEX/RINDEX|删除从左/右端开始计数的第INDEX个值 HashHash可以理解为我们常规使用的字典数据结构,Redis采用**散列表**来实现Hash, 一个Hash结构里面可以存在很多的key和value,Hash是Redis比较推荐使用的一种数据结构,据说内存使用会更好,具体我还没有研究。在我们的项目里,我们主要用Hash保存用户的token信息来帮助快速验证用户是否已登录。**Hash中的键值存取效率可以认为是O(1)**,Hash结构操作的主要命令如下表|命令|含义||:---|:---||HSET| 向Hash中添加Key|HGET|获取Hash中给定的Key值|HKEYS| 获取Hash中所有的Key SetSet是集合,满足集合**确定性、无序性、唯一性**三个性质,可以用来进行元素的**去重**操作。集合的底层实现仍然采用**散列表**,所以单个元素的存取可以认为是**O(1)**的时间复杂度,同时Redis支持对不同的集合的**交并等计算**,集合的操作命令主要如下:|命令|含义||:---|:---||SADD| 向集合中添加元素|SISMEMBER|判断键是否在集合中|SMEMBERS| 获取集合中所有的键|SREM |删除集合中给定的键 Sorted SetSorted Set是有序集合,满足集合唯一性的要求,同时也满足有序的性质。**向Sorted Set中插入元素的时候需要同时指定一个Score,用于作为排序的标准**,如下面的这条命令,我们向知乎热榜这个有序集合中插入一个文章的题目及其阅读量,通过有知乎热榜这个有序结合我们可以方便的得到每天排名靠前的文章题目及其对应的阅读量。 Sorted Set的底层实现采用的是**Skip List**, 所以其单个元素的存取效率可以近似认为是**O(logn)**的。有序集合的操作命令主要如下:```ZADD 知乎热榜 2000 如何看待xxxxx```|命令|含义 |示例|:---|:-------------------|:-------|ZADD| 向有序集合中添加元素|ZADD 知乎热榜 2000 如何看待xxxx|ZREM|删除集合中的元素|ZREM 知乎热榜 如何看待xxxx|ZRANGE| 获取有序集合中排名处于[start, end] 之间的值| ZRANGE 知乎热榜 0 10|ZSCORE|获取集合中给定的键score |ZSCORE 知乎热榜 如何看待xxxx 
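The command tables above map directly onto client calls. Below is a small illustrative sketch using the redis-py client that is introduced later in this article; it assumes a local Redis server on the default port and redis-py 3.x (for the `mapping=` style of `hset`/`zadd`), and all keys and values are made up for the example.

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# String: O(1) SET / GET
r.set("page_views", 100)
r.incr("page_views")

# List: double-ended linked list, O(1) push/pop at both ends
r.rpush("diseases", "flu", "cold")
r.lrange("diseases", 0, -1)

# Hash: field/value pairs stored under a single key, O(1) field access
r.hset("user:1", mapping={"name": "alice", "token": "abc123"})
r.hget("user:1", "token")

# Set: unordered, unique members (useful for de-duplication)
r.sadd("tags", "redis", "cache", "redis")   # the duplicate "redis" is ignored
r.sismember("tags", "cache")

# Sorted Set: members ordered by score (skip list underneath, ~O(log n))
r.zadd("hot_articles", {"how to think about xxxx": 2000})
r.zrange("hot_articles", 0, 10, withscores=True)
```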
五、Redis键淘汰策略前文提过,Redis的所有数据是存储在内存中的,但是内存本身就是稀缺资源,我们常规使用的笔记本内存只有8G或者16G,而且这个内存是给所有的进程使用的,Redis作为我们运行的其中一个进程我们一般会限制Redis的使用内存上限,比如2G,否则Redis就会把可用内存耗光。2G实际上能存储的键值是有限的,那么如果用户把Redis的存储存满了该怎么办呢?就像我们把家里的冰箱都装满了,再想装东西就得扔掉一部分不吃的东西或者过期的东西一样,Redis也会选择淘汰掉一些键来为新的键提供空间。同时Redis**支持用户给键值设置过期时间**,如果检查到某些键过期了,就删除掉键来空余出空间。**为了方便管理,Redis把所有设置了过期时间的键放到一个单独的链表里面进行维护**。那么满了之后到底选择哪些键进行淘汰掉呢,Redis主要有三大类策略: 不淘汰策略第一条淘汰键的策略就是不淘汰哈哈,其实是表明Redis不主动清除键,清除键的操作全部交给用户来决定,如果用户始终不清除键,当Redis被写满了后,用户在往里面写Redis就会报错,**拒绝写入数据**。这种策略叫**noeviction**。 随机淘汰随机抽样淘汰即Redis随机选取一些键然后进行删除,这样带来的问题是用户也不知道哪些键被删除了,可能用户吃着火锅唱着歌,回头一看,自己的数据没了!那显然是很糟糕的,但Redis提供了这样一个选项,用不用那自然是用户的选择问题了。**根据随机抽样的集合不同又细分为两个策略,从所有的键中随机抽取就是allkeys-random, 从只设置了过期时间的键集合中进行抽取,就是volatile-random** LRU策略LRU(Least Recently Used)是最近最少使用原则,策略就是淘汰掉最近最不常用的键,每次用户访问某个键的时候,Redis就会记录这个键的访问时间,如果一个键距离上次访问已经太久没有被访问到了,那么Redis就认为这个键用户用不上了,就会把键清除掉。按照标准的LRU算法,我们应该统计所有键中最不常用的键,然后淘汰掉他,但是Redis是单线程响应用户请求的,不能每次都遍历所有的键来进行检查,否则就会严重的影响到服务的响应。所以**Redis采用一种随机抽样的方法。即每次随机抽取K个键值,然后淘汰掉这K个键中上次访问时间最早的的那个键**。同样,针对随机收取的集合不同又细分为两个策略,从所有的键中进行抽取,就是allkeys-lru策略,从只设置了过期时间的键集合中进行抽取,就是**volatile-lru**策略。**volatile-lru**策略是**比较推荐**的一种策略。关于LRU的策略,Redis的源码实现如下,我加了注释,还比较易懂.```c...... /* 如果选择了volatile-lru 或者 allkeys-lru 策略 */ else if (server.maxmemory_policy == REDIS_MAXMEMORY_ALLKEYS_LRU || server.maxmemory_policy == REDIS_MAXMEMORY_VOLATILE_LRU) { /*每次随机抽取maxmeory_samples个元素进行检查淘汰,默认设置为3*/ for (k = 0; k < server.maxmemory_samples; k++) { sds thiskey; long thisval; robj *o; /*随机抽取一个键*/ de = dictGetRandomKey(dict); thiskey = dictGetKey(de); /*如果用户设置的是volatile-lru,则从设置了有效期的集合中进行抽样*/ if (server.maxmemory_policy == REDIS_MAXMEMORY_VOLATILE_LRU) de = dictFind(db->dict, thiskey); o = dictGetVal(de); thisval = estimateObjectIdleTime(o); /* 找到距离上次访问过去时间最久的键*/ if (bestkey == NULL || thisval > bestval) { bestkey = thiskey; bestval = thisval; } } }......``` 六、Redis持久化策略Redis是把数据存储在自己进程的内存中,但是如果Redis进程挂了或者说电脑断电了,那么存储的数据就全部丢失了。为了保证数据的安全性,就需要把数据从内存的数据备份到硬盘上,这就是持久化操作。这样即使内存中的数据丢失了,那么也可以从硬盘上把数据恢复出来。Redis提供两种持久化策略:**RDB持久化和AOF持久化**。不要被这两个名字吓到,RDB,AOF只是两种持久化文件的后缀名,并不是什么神奇的策略。都比较容易懂,下面一一介绍。 RDB持久化RDB持久化就是**快照持久化**,即定期把内存中的数据全部拷贝保存到文件中。我们前面提到Redis是单线程响应用户需求的,如果把持久化这样涉及到大量IO的操作也放到这个线程中,会严重影响服务的响应。于是Redis采用**fork一个子进程出来进行持久化**。但是我们都知道,**fork出来的子进程会拷贝父进程所有的数据**,这样理论上当Redis要持久化2G的内存数据的时候,子进程也会占据几乎2G的内存,那么Redis相关的进程内存占用就会达到4G左右,这在数据比较小的时候还不严重,但是比如你的电脑内存是8G,目前备份的Redis的数据本身体积是5G,那么按照上面的计算备份一定是无法进行的,所幸在Unix类操作系统上面,做了如下的优化:**在刚开始的时候父子进程共享相同的内存,直到父进程或者子进程进行内存的写入后,对被写入的内存共享才结束**。这样就会减少Redis持久化时对内存的消耗。 AOF持久化AOF(AppendOnlyFile)持久化就是Redis把每次的用户写操作**日志append到一个文件中**,如果数据丢失了,那么按照AOF文件的操作顺序再进行操作一遍就可以恢复数据,而且这样每次我们都只需要写一小部分数据到文件中。但是这样会带来一个什么问题呢?由于程序一直在运行,所以不停的会往AOF文件中添加写的操作日志,这样终有一天AOF文件体积会大到不可想象。所以就又有一个操作叫**AOF重写**用于删除掉冗余的命令,比如用户对同一个key执行100遍SET操作,如果不AOF重写,那么AOF文件中就会有100条SET记录,数据恢复的时候也需要操作100次SET,但实际上只有最后一条SET命令是真正有意义的,所以AOF重写就会把前99条SET命令删除掉,只保留最后一条SET命令,这样不仅文件内存储的内容就变少了,Redis恢复数据的速度也加快了。除了上面两条策略,Redis还支持**主从备份**,这又是一块比较大的内容,限于篇幅,我们将主从备份放到第二篇的Redis集群中介绍。以及需要划重点的是,**即使有持久化措施,仍然会有少量数据丢失的问题,**因为备份是每隔一段时间进行的,如果两个备份操作之间机器坏了,那么这期间的数据修改就会因为没来得及备份就被丢失掉,所以一般**不建议**把Redis做常规存储手段,更多的做热数据缓存。 七、talk is cheap, show me the code redis-py和aredis这部分主要介绍两个python的Redis客户端,[redis-py](https://pypi.org/project/redis/)和[aredis](https://aredis.readthedocs.io/en/latest/)前者是同步redis客户端,后者是异步redis客户端。aredis就是在redis-py的基础上利用了协程的技术来重写了接口,试图省去客户端等待服务器结果的时间。如果你是本地机器使用Redis,那么使用前者就能很好的满足你的需求,**如果你使用的远端的Redis服务器而且网络还比较差的话**,aredis也许会有些帮助。我之前尝试使用aredis客户端与本地运行的Redis服务器搭配使用,发现性能下降了很多,主要的原因就是因为本地Redis服务器网络延迟几乎为0,但过多的协程切换反而带来了高昂的开销。我使用redis-py客户端,处理完需要288s, 
用aredis客户端处理完需要340s,后来我重写了客户端的一些接口,把一些协程的接口改成了普通的函数接口,减少了协程数目,运行结束为330s,快了10s。 ###Code import redis r = redis.Redis(host='localhost', port=6379, db=0) r.set('foo', 'bar') r.get('foo') r.close() import asyncio from aredis import StrictRedis async def example(): client = StrictRedis(host='127.0.0.1', port=6379, db=0) await client.flushdb() await client.set('foo', 1) assert await client.exists('foo') is True await client.incr('foo', 100) assert int(await client.get('foo')) == 101 await client.expire('foo', 1) await asyncio.sleep(0.1) await client.ttl('foo') await asyncio.sleep(1) assert not await client.exists('foo') # loop = asyncio.get_event_loop() # loop.run_until_complete(example()) ###Output _____no_output_____ ###Markdown 流水线再介绍一个Redis中常用的用来降低网络通信对于程序运行速度影响的小技巧:流水线。Redis客户端和服务器的请求响应过程如下图所示,客户端发送一个命令,等待服务器返回结果之后再提交下一个命令。如果网络情况比较差,我们就会需要花许多的时间来等待服务器的响应。一种解决方案就是利用上文提到的aredis,可以在等待响应的同时切换协程做点其他的计算。另一种解决方案就是把的命令打包一起发送,然后等服务器计算完了之后把结果一起返回来,把命令打包一起发送就是流水线的概念。其中流水线又分为事务型流水线和非事务型流水线,限于篇幅,这两个概念可自行查阅资料进行了解。![pic](../../assets/redis1.jpg)客户端与服务器端的交互![pic](../../assets/redis2.jpg)使用pipeline,进行多个命令一起提交代码如下: ###Code import redis r = redis.Redis() # 使用一个pipeline pipeline = r.pipeline() pipeline.set("thu", "No.1") pipeline.set("xxu", "No.2") # 把所有的命令打包发送给服务器 pipeline.execute() ###Output _____no_output_____
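###Markdown The article distinguishes transactional from non-transactional pipelines but leaves the details to the reader. In redis-py this is controlled by the `transaction` argument of `pipeline()`: with `transaction=True` (the default) the queued commands are wrapped in MULTI/EXEC and applied atomically on the server, while `transaction=False` only batches the commands to save network round trips. A minimal sketch, assuming a local Redis server:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Transactional pipeline (default): commands are wrapped in MULTI/EXEC,
# so the server applies them as one atomic unit.
with r.pipeline(transaction=True) as pipe:
    pipe.set("thu", "No.1")
    pipe.incr("counter")
    print(pipe.execute())

# Non-transactional pipeline: the same batching of round trips,
# without MULTI/EXEC, slightly cheaper when atomicity is not needed.
with r.pipeline(transaction=False) as pipe:
    pipe.set("xxu", "No.2")
    pipe.incr("counter")
    print(pipe.execute())
```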
results_processed/publication/massbank/ssvm_lib=v2__exp_ver=4/exp_01__comparison_of_different_score_integration_approaches.ipynb
###Markdown Experiment 1: Comparison of our LC-MS$^2$Struct method with other approaches in the literatureHere we compare the Structure Support Vector Machine (SSVM) with two (2) retention time (RT) based approaches and the method proposed by Bach et al. (2020) using predicted retention orders (RO). Only-MS$^2$- Only the MS$^2$ information is used for the candidate ranking.- We apply each method to three (3) different base MS$^2$ scorers. LC-MS$^2$Struct- Our proposed method. MS$^2$+RT- Predicted RTs of the molecular candidates are compared with the measured RT.- An error threshold is used to prune the candidate list. MS$^2$+logP- PubChem XLogP3 values of the molecular candidates are compared with the XLogP3 value predicted from the measured RT.- A linear model is trained to predict XLogP3 from the RT.- Candidate scores are computed as a weighted sum of the RT (XLogP3) score and the MS$^2$ score. MS$^2$+RO- Method proposed by Bach et al. (2020).- Retention orders (RO) are predicted using a RankSVM model.- RO and MS$^2$ scores are combined using a weighted scoring. Load raw results for all three MS$^2$ scorers ###Code agg_setting = {
    "marg_agg_fun": "average",
    "cand_agg_id": "inchikey1"
} ###Output _____no_output_____ ###Markdown MetFragMetFrag performs an in-silico fragmentation for each candidate structure and compares the predicted fragments with those observed in the MS$^2$ spectrum. ###Code # SSVM (Our method)
setting = {"ds": "*", "mol_feat": "FCFP__binary__all__2D", "mol_id": "cid", "ms2scorer": "metfrag__norm", "ssvm_flavor": "default", "lloss_mode": "mol_feat_fps"}
res__ssvm__metfrag = load_topk__publication(setting, agg_setting, basedir=os.path.join("massbank"), top_k_method="csi", load_max_model_number=True)
res__ssvm__metfrag__ALL_MODELS = load_topk__publication(setting, agg_setting, basedir=os.path.join("massbank"), top_k_method="csi", load_max_model_number=False)

# RT filtering
setting = {"ds": "*", "mol_feat": "bouwmeester__smiles_iso", "mol_id": "cid", "ms2scorer": "metfrag__norm", "rt_predictor": "svr", "score_int_app": "filtering__global"}
res__rtfilter__metfrag = load_topk__comparison(setting, agg_setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3"), top_k_method="csi")

# XLogP3
setting = {"ds": "*", "mol_feat": "xlogp3", "mol_id": "cid", "ms2scorer": "metfrag__norm", "rt_predictor": "linear_reg", "score_int_app": "score_combination"}
res__xlogp3__metfrag = load_topk__comparison(setting, agg_setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3"), top_k_method="csi")

# Predicted ROs by Bach et al. (2020)
setting = {"ds": "*", "mol_feat": "substructure_count__smiles_iso", "mol_id": "cid", "ms2scorer": "metfrag__norm", "rt_predictor": "ranksvm", "score_int_app": "msms_pl_rt_score_integration"}
res__bach2020__metfrag = load_topk__comparison(setting, agg_setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3"), top_k_method="csi")

# Perform some sanity checks
assert res__ssvm__metfrag["scoring_method"].nunique() == 2

if len(res__rtfilter__metfrag) > 0:
    assert res__rtfilter__metfrag["scoring_method"].nunique() == 2

if len(res__xlogp3__metfrag) > 0:
    assert res__xlogp3__metfrag["scoring_method"].nunique() == 2

if len(res__bach2020__metfrag) > 0:
    assert res__bach2020__metfrag["scoring_method"].nunique() == 2

_check_onlyms(res__ssvm__metfrag, [res__rtfilter__metfrag, res__xlogp3__metfrag, res__bach2020__metfrag]) ###Output Performed tests: [1479. 1479. 1500.]
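###Markdown The MS$^2$+logP and MS$^2$+RO baselines described above integrate the MS$^2$ score with an RT-derived score through a weighted combination. The exact weighting and normalisation live inside `load_topk__comparison` and are not shown in this notebook, so the snippet below is only an illustrative sketch of the general idea; the weight `w`, the max-normalisation and the toy numbers are assumptions.

```python
import numpy as np

def combine_scores(ms2_scores, rt_scores, w=0.3):
    # Toy weighted score integration: w = 0 reduces to the Only-MS2 ranking,
    # larger w gives more influence to the RT-based evidence (e.g. how well a
    # candidate's XLogP3 matches the value expected at the measured RT).
    ms2_scores = np.asarray(ms2_scores, dtype=float)
    rt_scores = np.asarray(rt_scores, dtype=float)
    ms2_n = ms2_scores / ms2_scores.max()   # bring both score types to [0, 1]
    rt_n = rt_scores / rt_scores.max()
    return (1 - w) * ms2_n + w * rt_n

# Candidates are then re-ranked by the combined score.
scores = combine_scores([0.9, 0.7, 0.2], [0.1, 0.8, 0.9], w=0.3)
print(np.argsort(-scores))
```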
###Markdown Get false negative rate for the RT filtering approach ###Code # RT (filtering) setting = {"ds": "*", "mol_feat": "bouwmeester__smiles_iso", "mol_id": "cid", "ms2scorer": "metfrag__norm", "rt_predictor": "svr", "score_int_app": "filtering__global"} cand_set_info__metfrag = load_topk__cand_set_info(setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3")) __tmp__ = cand_set_info__metfrag.groupby(["dataset"]) \ .aggregate({"correct_structure_remains_after_filtering": lambda x: np.sum(~ x) / len(x) * 100}) \ .rename({"correct_structure_remains_after_filtering": "false_negative_rate"}, axis=1) \ .reset_index() \ .round(1) # print(__tmp__) print("Average false negative rate:", np.round(np.sum(~ cand_set_info__metfrag["correct_structure_remains_after_filtering"]) / len(cand_set_info__metfrag) * 100, 1)) ###Output Average false negative rate: 4.7 ###Markdown Overview result table (LC-MS$^2$Struct) ###Code tab = table__top_k_acc_per_dataset_with_significance(res__ssvm__metfrag, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown Small datasets with only one evaluation samples. None of the datasets examples was used for training. ###Code tab[tab["n_samples"] == 1].pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown How does the number of SSVM models effects the performance?Max-marginal values are averaged across the models. ###Code sns.catplot(data=res__ssvm__metfrag__ALL_MODELS[res__ssvm__metfrag__ALL_MODELS["k"].isin([1, 5, 10, 20])], x="n_models", y="top_k_acc", hue="scoring_method", col="k", kind="point", sharey=False) ###Output _____no_output_____ ###Markdown Overview tables for the alternative approaches MS$^2$+RT ###Code tab = table__top_k_acc_per_dataset_with_significance(res__rtfilter__metfrag, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown MS$^2$+logP ###Code tab = table__top_k_acc_per_dataset_with_significance(res__xlogp3__metfrag, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown MS$^2$+RO ###Code tab = table__top_k_acc_per_dataset_with_significance(res__bach2020__metfrag, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown SIRIUS ###Code # SSVM (Our method) setting = {"ds": "*", "mol_feat": "FCFP__binary__all__2D", "mol_id": "cid", "ms2scorer": "sirius__norm", "ssvm_flavor": "default", "lloss_mode": "mol_feat_fps"} res__ssvm__sirius = load_topk__publication(setting, agg_setting, basedir=os.path.join("massbank"), top_k_method="csi", load_max_model_number=True) res__ssvm__sirius__ALL_MODELS = load_topk__publication(setting, agg_setting, basedir=os.path.join("massbank"), top_k_method="csi", load_max_model_number=False) # RT filtering setting = {"ds": "*", "mol_feat": "bouwmeester__smiles_iso", "mol_id": "cid", "ms2scorer": "sirius__norm", "rt_predictor": "svr", "score_int_app": "filtering__global"} res__rtfilter__sirius = load_topk__comparison(setting, agg_setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3"), top_k_method="csi") # 
XLogP3 setting = {"ds": "*", "mol_feat": "xlogp3", "mol_id": "cid", "ms2scorer": "sirius__norm", "rt_predictor": "linear_reg", "score_int_app": "score_combination"} res__xlogp3__sirius = load_topk__comparison(setting, agg_setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3"), top_k_method="csi") # Predicted ROs by Bach et al. (2020) setting = {"ds": "*", "mol_feat": "substructure_count__smiles_iso", "mol_id": "cid", "ms2scorer": "sirius__norm", "rt_predictor": "ranksvm", "score_int_app": "msms_pl_rt_score_integration"} res__bach2020__sirius = load_topk__comparison(setting, agg_setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3"), top_k_method="csi") # Perform some sanity checks assert res__ssvm__sirius["scoring_method"].nunique() == 2 if len(res__rtfilter__sirius) > 0: assert res__rtfilter__sirius["scoring_method"].nunique() == 2 if len(res__xlogp3__sirius) > 0: assert res__xlogp3__sirius["scoring_method"].nunique() == 2 if len(res__bach2020__sirius) > 0: assert res__bach2020__sirius["scoring_method"].nunique() == 2 _check_onlyms(res__ssvm__sirius, [res__rtfilter__sirius, res__xlogp3__sirius, res__bach2020__sirius]) ###Output Performed tests: [1483. 1483. 1500.] ###Markdown Get false negative rate for the RT filtering approach ###Code # RT (filtering) setting = {"ds": "*", "mol_feat": "bouwmeester__smiles_iso", "mol_id": "cid", "ms2scorer": "sirius__norm", "rt_predictor": "svr", "score_int_app": "filtering__global"} cand_set_info__sirius = load_topk__cand_set_info(setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3")) __tmp__ = cand_set_info__sirius.groupby(["dataset"]) \ .aggregate({"correct_structure_remains_after_filtering": lambda x: np.sum(~ x) / len(x) * 100}) \ .rename({"correct_structure_remains_after_filtering": "false_negative_rate"}, axis=1) \ .reset_index() \ .round(1) # print(__tmp__) print("Average false negative rate:", np.round(np.sum(~ cand_set_info__sirius["correct_structure_remains_after_filtering"]) / len(cand_set_info__sirius) * 100, 1)) ###Output Average false negative rate: 4.7 ###Markdown Overview result table (LC-MS$^2$Struct) ###Code tab = table__top_k_acc_per_dataset_with_significance(res__ssvm__sirius, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown Small datasets with only one evaluation samples. None of the datasets examples was used for training. ###Code tab[tab["n_samples"] == 1].pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown How does the number of SSVM models effects the performance?Max-marginal values are averaged across the models. 
###Code sns.catplot(data=res__ssvm__sirius__ALL_MODELS[res__ssvm__sirius__ALL_MODELS["k"].isin([1, 5, 10, 20])], x="n_models", y="top_k_acc", hue="scoring_method", col="k", kind="point", sharey=False) ###Output _____no_output_____ ###Markdown Overview tables for the alternative approaches MS$^2$+RT ###Code tab = table__top_k_acc_per_dataset_with_significance(res__rtfilter__sirius, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown MS$^2$+logP ###Code tab = table__top_k_acc_per_dataset_with_significance(res__xlogp3__sirius, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown MS$^2$+RO ###Code tab = table__top_k_acc_per_dataset_with_significance(res__bach2020__sirius, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown CFM-ID ###Code # SSVM (Our method) setting = {"ds": "*", "mol_feat": "FCFP__binary__all__2D", "mol_id": "cid", "ms2scorer": "cfmid4__norm", "ssvm_flavor": "default", "lloss_mode": "mol_feat_fps"} res__ssvm__cfmid4 = load_topk__publication(setting, agg_setting, basedir=os.path.join("massbank"), top_k_method="csi", load_max_model_number=True) res__ssvm__cfmid4__ALL_MODELS = load_topk__publication(setting, agg_setting, basedir=os.path.join("massbank"), top_k_method="csi", load_max_model_number=False) # RT filtering setting = {"ds": "*", "mol_feat": "bouwmeester__smiles_iso", "mol_id": "cid", "ms2scorer": "cfmid4__norm", "rt_predictor": "svr", "score_int_app": "filtering__global"} res__rtfilter__cfmid4 = load_topk__comparison(setting, agg_setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3"), top_k_method="csi") # XLogP3 setting = {"ds": "*", "mol_feat": "xlogp3", "mol_id": "cid", "ms2scorer": "cfmid4__norm", "rt_predictor": "linear_reg", "score_int_app": "score_combination"} res__xlogp3__cfmid4 = load_topk__comparison(setting, agg_setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3"), top_k_method="csi") # Predicted ROs by Bach et al. (2020) setting = {"ds": "*", "mol_feat": "substructure_count__smiles_iso", "mol_id": "cid", "ms2scorer": "cfmid4__norm", "rt_predictor": "ranksvm", "score_int_app": "msms_pl_rt_score_integration"} res__bach2020__cfmid4 = load_topk__comparison(setting, agg_setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3"), top_k_method="csi") # Perform some sanity checks assert res__ssvm__cfmid4["scoring_method"].nunique() == 2 if len(res__rtfilter__cfmid4) > 0: assert res__rtfilter__cfmid4["scoring_method"].nunique() == 2 if len(res__xlogp3__cfmid4) > 0: assert res__xlogp3__cfmid4["scoring_method"].nunique() == 2 if len(res__bach2020__cfmid4) > 0: assert res__bach2020__cfmid4["scoring_method"].nunique() == 2 _check_onlyms(res__ssvm__cfmid4, [res__rtfilter__cfmid4, res__xlogp3__cfmid4, res__bach2020__cfmid4]) ###Output Performed tests: [1479. 1479. 1500.] 
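###Markdown As for the other two MS$^2$ scorers, the LC-MS$^2$Struct results above are aggregated over several SSVM models: `agg_setting` requests `marg_agg_fun = "average"`, i.e. each candidate's max-marginal score is averaged across the trained SSVM models before the candidates are ranked (this is also what the `n_models` axis in the cat-plots refers to). The snippet below is only a schematic illustration of that averaging step with made-up numbers, not the project's actual aggregation code.

```python
import numpy as np

# Max-marginal scores for one spectrum: rows = SSVM models, columns = candidates.
max_marginals = np.array([
    [0.10, 0.80, 0.35],   # SSVM model 1
    [0.20, 0.60, 0.55],   # SSVM model 2
    [0.15, 0.70, 0.40],   # SSVM model 3
])

# "average" aggregation across models (marg_agg_fun = "average").
agg_scores = max_marginals.mean(axis=0)

# Candidates are ranked by the aggregated score (higher = better).
ranking = np.argsort(-agg_scores)
print(agg_scores, ranking)
```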
###Markdown Get false negative rate for the RT filtering approach ###Code # RT (filtering) setting = {"ds": "*", "mol_feat": "bouwmeester__smiles_iso", "mol_id": "cid", "ms2scorer": "cfmid4__norm", "rt_predictor": "svr", "score_int_app": "filtering__global"} cand_set_info__cfmid4 = load_topk__cand_set_info(setting, os.path.join("..", "..", "..", "comparison", "massbank__exp_ver=3")) __tmp__ = cand_set_info__cfmid4.groupby(["dataset"]) \ .aggregate({"correct_structure_remains_after_filtering": lambda x: np.sum(~ x) / len(x) * 100}) \ .rename({"correct_structure_remains_after_filtering": "false_negative_rate"}, axis=1) \ .reset_index() \ .round(1) # print(__tmp__) print("Average false negative rate:", np.round(np.sum(~ cand_set_info__cfmid4["correct_structure_remains_after_filtering"]) / len(cand_set_info__cfmid4) * 100, 1)) ###Output Average false negative rate: 4.7 ###Markdown Overview result table (LC-MS$^2$Struct) ###Code tab = table__top_k_acc_per_dataset_with_significance(res__ssvm__cfmid4, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown Small datasets with only one evaluation samples. None of the datasets examples was used for training. ###Code tab[tab["n_samples"] == 1].pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown How does the number of SSVM models effects the performance?Max-marginal values are averaged across the models. ###Code sns.catplot(data=res__ssvm__cfmid4__ALL_MODELS[res__ssvm__cfmid4__ALL_MODELS["k"].isin([1, 5, 10, 20])], x="n_models", y="top_k_acc", hue="scoring_method", col="k", kind="point", sharey=False) ###Output _____no_output_____ ###Markdown Overview tables for the alternative approaches MS$^2$+RT ###Code tab = table__top_k_acc_per_dataset_with_significance(res__rtfilter__cfmid4, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown MS$^2$+logP ###Code tab = table__top_k_acc_per_dataset_with_significance(res__xlogp3__cfmid4, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown MS$^2$+RO ###Code tab = table__top_k_acc_per_dataset_with_significance(res__bach2020__cfmid4, test="ttest", ks=[1, 5, 10, 20]) tab.pivot(columns=["k", "scoring_method"], index=["dataset", "n_samples"], values="top_k_acc__as_labels") ###Output _____no_output_____ ###Markdown Visualization of the performance ###Code print(len(res__ssvm__metfrag[(res__ssvm__metfrag["scoring_method"] == "Only MS") & (res__ssvm__metfrag["n_models"] == 8)])) print(len(res__ssvm__sirius[(res__ssvm__sirius["scoring_method"] == "Only MS") & (res__ssvm__sirius["n_models"] == 8)])) print(len(res__ssvm__cfmid4[(res__ssvm__cfmid4["scoring_method"] == "Only MS") & (res__ssvm__cfmid4["n_models"] == 8)])) print(len(res__ssvm__metfrag[(res__ssvm__metfrag["scoring_method"] == "MS + RT") & (res__ssvm__metfrag["n_models"] == 8)].assign(scoring_method="MS + RO"))) print(len(res__ssvm__sirius[(res__ssvm__sirius["scoring_method"] == "MS + RT") & (res__ssvm__sirius["n_models"] == 8)].assign(scoring_method="MS + RO"))) print(len(res__ssvm__cfmid4[(res__ssvm__cfmid4["scoring_method"] == "MS + RT") & (res__ssvm__cfmid4["n_models"] == 
8)].assign(scoring_method="MS + RO"))) print(len(res__rtfilter__metfrag[(res__rtfilter__metfrag["scoring_method"] == "MS + RT (filtering__global)")].assign(scoring_method="MS + RT (filtering)"))) print(len(res__rtfilter__sirius[(res__rtfilter__sirius["scoring_method"] == "MS + RT (filtering__global)")].assign(scoring_method="MS + RT (filtering)"))) print(len(res__rtfilter__cfmid4[(res__rtfilter__cfmid4["scoring_method"] == "MS + RT (filtering__global)")].assign(scoring_method="MS + RT (filtering)"))) print(len(res__xlogp3__metfrag[(res__xlogp3__metfrag["scoring_method"] == "MS + RT (score_combination)")].assign(scoring_method="MS + RT (rescoring)"))) print(len(res__xlogp3__sirius[(res__xlogp3__sirius["scoring_method"] == "MS + RT (score_combination)")].assign(scoring_method="MS + RT (rescoring)"))) print(len(res__xlogp3__cfmid4[(res__xlogp3__cfmid4["scoring_method"] == "MS + RT (score_combination)")].assign(scoring_method="MS + RT (rescoring)"))) print(len(res__bach2020__metfrag[(res__bach2020__metfrag["scoring_method"] == "MS + RT (msms_pl_rt_score_integration)")].assign(scoring_method="MS + RT (rescoring)"))) print(len(res__bach2020__sirius[(res__bach2020__sirius["scoring_method"] == "MS + RT (msms_pl_rt_score_integration)")].assign(scoring_method="MS + RT (rescoring)"))) print(len(res__bach2020__cfmid4[(res__bach2020__cfmid4["scoring_method"] == "MS + RT (msms_pl_rt_score_integration)")].assign(scoring_method="MS + RT (rescoring)"))) __tmp__01__a = plot__01__a( res__baseline=[ res__ssvm__cfmid4[(res__ssvm__cfmid4["scoring_method"] == "Only MS") & (res__ssvm__cfmid4["n_models"] == 8)].assign(scoring_method="Only-MS$^2$", ms2scorer="CFM-ID"), res__ssvm__metfrag[(res__ssvm__metfrag["scoring_method"] == "Only MS") & (res__ssvm__metfrag["n_models"] == 8)].assign(scoring_method="Only-MS$^2$", ms2scorer="MetFrag"), res__ssvm__sirius[(res__ssvm__sirius["scoring_method"] == "Only MS") & (res__ssvm__sirius["n_models"] == 8)].assign(scoring_method="Only-MS$^2$", ms2scorer="SIRIUS") ], res__ssvm=[ res__ssvm__cfmid4[(res__ssvm__cfmid4["scoring_method"] == "MS + RT") & (res__ssvm__cfmid4["n_models"] == 8)].assign(scoring_method="LC-MS$^2$Struct", ms2scorer="CFM-ID"), res__ssvm__metfrag[(res__ssvm__metfrag["scoring_method"] == "MS + RT") & (res__ssvm__metfrag["n_models"] == 8)].assign(scoring_method="LC-MS$^2$Struct", ms2scorer="MetFrag"), res__ssvm__sirius[(res__ssvm__sirius["scoring_method"] == "MS + RT") & (res__ssvm__sirius["n_models"] == 8)].assign(scoring_method="LC-MS$^2$Struct", ms2scorer="SIRIUS") ], res__rtfilter=[ res__rtfilter__cfmid4[(res__rtfilter__cfmid4["scoring_method"] == "MS + RT (filtering__global)")].assign(scoring_method="MS$^2$+RT", ms2scorer="CFM-ID"), res__rtfilter__metfrag[(res__rtfilter__metfrag["scoring_method"] == "MS + RT (filtering__global)")].assign(scoring_method="MS$^2$+RT", ms2scorer="MetFrag"), res__rtfilter__sirius[(res__rtfilter__sirius["scoring_method"] == "MS + RT (filtering__global)")].assign(scoring_method="MS$^2$+RT", ms2scorer="SIRIUS") ], res__xlogp3=[ res__xlogp3__cfmid4[(res__xlogp3__cfmid4["scoring_method"] == "MS + RT (score_combination)")].assign(scoring_method="MS$^2$+logP", ms2scorer="CFM-ID"), res__xlogp3__metfrag[(res__xlogp3__metfrag["scoring_method"] == "MS + RT (score_combination)")].assign(scoring_method="MS$^2$+logP", ms2scorer="MetFrag"), res__xlogp3__sirius[(res__xlogp3__sirius["scoring_method"] == "MS + RT (score_combination)")].assign(scoring_method="MS$^2$+logP", ms2scorer="SIRIUS") ], res__bach2020=[ 
res__bach2020__cfmid4[(res__bach2020__cfmid4["scoring_method"] == "MS + RT (msms_pl_rt_score_integration)")].assign(scoring_method="MS$^2$+RO", ms2scorer="CFM-ID"), res__bach2020__metfrag[(res__bach2020__metfrag["scoring_method"] == "MS + RT (msms_pl_rt_score_integration)")].assign(scoring_method="MS$^2$+RO", ms2scorer="MetFrag"), res__bach2020__sirius[(res__bach2020__sirius["scoring_method"] == "MS + RT (msms_pl_rt_score_integration)")].assign(scoring_method="MS$^2$+RO", ms2scorer="SIRIUS") ], max_k=20, weighted_average=False, raise_on_missing_results=False, aspect="landscape", verbose=True ) for ext in ["pdf", "svg"]: plt.savefig(os.path.join(".", os.extsep.join(["plot_01__a", ext]))) __tmp__01__b = plot__01__b( res__baseline=[ res__ssvm__cfmid4[(res__ssvm__cfmid4["scoring_method"] == "Only MS") & (res__ssvm__cfmid4["n_models"] == 8)].assign(scoring_method="Only-MS$^2$", ms2scorer="CFM-ID"), res__ssvm__metfrag[(res__ssvm__metfrag["scoring_method"] == "Only MS") & (res__ssvm__metfrag["n_models"] == 8)].assign(scoring_method="Only-MS$^2$", ms2scorer="MetFrag"), res__ssvm__sirius[(res__ssvm__sirius["scoring_method"] == "Only MS") & (res__ssvm__sirius["n_models"] == 8)].assign(scoring_method="Only-MS$^2$", ms2scorer="SIRIUS") ], res__ssvm=[ res__ssvm__cfmid4[(res__ssvm__cfmid4["scoring_method"] == "MS + RT") & (res__ssvm__cfmid4["n_models"] == 8)].assign(scoring_method="LC-MS$^2$Struct", ms2scorer="CFM-ID"), res__ssvm__metfrag[(res__ssvm__metfrag["scoring_method"] == "MS + RT") & (res__ssvm__metfrag["n_models"] == 8)].assign(scoring_method="LC-MS$^2$Struct", ms2scorer="MetFrag"), res__ssvm__sirius[(res__ssvm__sirius["scoring_method"] == "MS + RT") & (res__ssvm__sirius["n_models"] == 8)].assign(scoring_method="LC-MS$^2$Struct", ms2scorer="SIRIUS") ], res__rtfilter=[ res__rtfilter__cfmid4[(res__rtfilter__cfmid4["scoring_method"] == "MS + RT (filtering__global)")].assign(scoring_method="MS$^2$+RT", ms2scorer="CFM-ID"), res__rtfilter__metfrag[(res__rtfilter__metfrag["scoring_method"] == "MS + RT (filtering__global)")].assign(scoring_method="MS$^2$+RT", ms2scorer="MetFrag"), res__rtfilter__sirius[(res__rtfilter__sirius["scoring_method"] == "MS + RT (filtering__global)")].assign(scoring_method="MS$^2$+RT", ms2scorer="SIRIUS") ], res__xlogp3=[ res__xlogp3__cfmid4[(res__xlogp3__cfmid4["scoring_method"] == "MS + RT (score_combination)")].assign(scoring_method="MS$^2$+logP", ms2scorer="CFM-ID"), res__xlogp3__metfrag[(res__xlogp3__metfrag["scoring_method"] == "MS + RT (score_combination)")].assign(scoring_method="MS$^2$+logP", ms2scorer="MetFrag"), res__xlogp3__sirius[(res__xlogp3__sirius["scoring_method"] == "MS + RT (score_combination)")].assign(scoring_method="MS$^2$+logP", ms2scorer="SIRIUS") ], res__bach2020=[ res__bach2020__cfmid4[(res__bach2020__cfmid4["scoring_method"] == "MS + RT (msms_pl_rt_score_integration)")].assign(scoring_method="MS$^2$+RO", ms2scorer="CFM-ID"), res__bach2020__metfrag[(res__bach2020__metfrag["scoring_method"] == "MS + RT (msms_pl_rt_score_integration)")].assign(scoring_method="MS$^2$+RO", ms2scorer="MetFrag"), res__bach2020__sirius[(res__bach2020__sirius["scoring_method"] == "MS + RT (msms_pl_rt_score_integration)")].assign(scoring_method="MS$^2$+RO", ms2scorer="SIRIUS") ], ks=[1, 20], weighted_average=False, raise_on_missing_results=False, ctype="improvement", label_format=".0f" ) for ext in ["pdf", "svg"]: plt.savefig(os.path.join(".", os.extsep.join(["plot_01__b", ext]))) ###Output _____no_output_____
Bloque 1 - Ramp-Up/05_Python/04_Clases y objetos/02_RESU_Clases y Objetos.ipynb
###Markdown ![imagen](../../imagenes/python.jpg) Clases y Objetos en PythonComo sabes, Python es un lenguaje de programación orientado a objetos. ¿Esto qué es? El código se organiza en elementos denominados objetos, que vienen definidos por las clases. Es una manera de expresar en lenguaje máquina cosas de la vida real.1. [Clases](1.-Clases)1. [Atributos](2.-Atributos)3. [Constructor](3.-Constructor)4. [Métodos](4.-Métodos)5. [Documentación](5.-Documentación)6. [Resumen](6.-Resumen) 1. ClasesLas clases son la manera que tenemos de describir los objetos. Hasta ahora hemos visto clases básicas que vienen incluidas en Python como *int*, *str* o clases algo más complejas como los *dict*. Pero, **¿y si queremos crear nuestros propios objetos?** En los lenguajes orientados a objetos tenemos la posibilidad de definir nuevos objetos que se asemejen más a nuestros casos de uso y hagan la programación más sencilla de desarrollar y entender.Un número entero es un objeto de la clase *int* que posee unas características diferentes a un texto, que es de la clase *str*. Por ejemplo, ¿cómo sabemos que un coche es un coche? ¿qué características tiene? Los coches tienen una marca, una cantidad de caballos, hay unos automáticos, otros no… De esta manera traducimos a lenguaje de máquina, a programación, un concepto que tenemos nosotros muy claro e interiorizado. Hasta ahora, hemos visto varias clases, por ejemplo la clase *str*. Cuando veiamos el tipo de dato, Python imprimía por pantalla `str`. Y al ser `str`, tenía unas propiedades que no tenían otros objetos, como las funciones .upper() o .lower().La sintaxis para crear una clase es:```Pythonclass NombreClase: Cosas de la clase```Normalmente para el nombre de la clase se usa *CamelCase*, que quiere decir que se define en minúscila, sin espacios ni guiones, y jugando con las mayúsculas para diferenciar palabras.Mira cómo es la [clase *built_in* de *String*](https://docs.python.org/3/library/stdtypes.htmlstr) ###Code class Coche: # cosas de la clase pass ###Output _____no_output_____ ###Markdown La sentencia `pass` se usa para forzar el fin de la clase *Coche*. La hemos declarado, pero no lleva nada. Python demanda una definición de la clase y podemos ignorar esa demanda mediante la sentencia `pass`. ###Code print(type("texto")) # print(type(str)) # print(type(Coche)) ###Output <class 'str'> ###Markdown Bien, coche es de tipo `type`, claro porque **no es un objeto como tal**, sino que es una clase. Cuando creemos coches, estos serán de clase *Coche*, es decir, de tipo *Coche*, por lo que tiene sentido que *Coche* sea de tipo `type`. Clase vs ObjetoLa clase se usa para definir algo. Al igual que con las funciones. Creamos el esqueleto de lo que será un objeto de esa clase. Por tanto, **una vez tengamos la clase definida, instanciaremos un objeto de esa clase**. Es como crear el concepto de coche, con todas sus características y funcionalidades. Después, a lo largo del programa, podremos crear objetos de tipo coche, que se ajusten a lo definido en la clase coche. Cada coche tendrá una marca, una potencia, etc… ###Code primer_coche = Coche() print(primer_coche) print(type(primer_coche)) ###Output <__main__.Coche object at 0x000001E08CB45508> <class '__main__.Coche'> ###Markdown Ahora sí tenemos un objeto de tipo Coche, que se llama `primer_coche`. 
Cuando imprimimos su tipo, vemos que es de tipo Coche, y cuando lo imprimes el objeto por pantalla, simplemente nos dice su tipo y un identificador.Podremos crear todos los coches que queramos ###Code citroen = Coche() seat = Coche() print(citroen) print(seat) # Dos objetos diferentes citroen == seat ###Output <__main__.Coche object at 0x000002130F007850> <__main__.Coche object at 0x000002130F007040> ###Markdown De momento todos nuestros coches son iguales, no hemos definido bien la clase, por lo que va a ser difícil diferenciar un coche de otro. Vamos a ver cómo lograr esa diferenciación. 2. AtributosSon las características que definen a los objetos de una clase. La marca, el color, potencia del coche. Estos son atributos, que se definen de manera genérica en la clase y luego cada objeto *Coche* tendrá un valor para cada uno de sus atributos.Los atributos los definimos tras la declaración de la clase. Y luego se accede a ellos mediante la sintaxis `objeto.atributo`Vamos a empezar a definir atributos en los coches. ###Code class Coche: puertas = 4 ruedas = 4 ###Output _____no_output_____ ###Markdown Ahora todos los coches que creamos, tendrán 4 puertas y 4 ruedas. ###Code citroen = Coche() print(citroen) print(citroen.puertas) print(citroen.ruedas) seat = Coche() print(seat.puertas) print(seat.ruedas) ###Output <__main__.Coche object at 0x000002130F0070D0> 4 4 4 4 ###Markdown También podemos modificar los atributos. Esto Python lo hace muy sencillo, los cambiamos directamente reasignando valores. En otros lenguajes de programación hay que implementar esto mediante métodos denominados `getters` y `setters`. ###Code citroen = Coche() citroen.puertas = 2 print(citroen.puertas) seat = Coche() print(seat.puertas) ###Output 2 4 ###Markdown ERRORES atributos que no existen ###Code seat = Coche() print(seat.motor) ###Output _____no_output_____ ###Markdown Seguimos sin poder diferenciar claramente un coche de otro, pero ya vamos definiendo sus características, que será posible ir modificándolas tanto en la inicialización del objeto, como después. De momento, tenemos características comunes a todos los coches... o no, ¿todos los coches tienen 4 puertas? 3. ConstructorCuando creamos un objeto de la clase *Coche*, tenemos que definirlo bien para diferenciarlo de otros coches. Esa definición inicial se realiza en el constructor de la clase. Son unos argumentos de entrada que nos pide el objeto, para definir esa instancia de otras instancias de la misma clase.**¿Cómo definimos esto?** Mediante la sentencia `__init__`, dentro de la clase. ###Code class Coche: ruedas = 4 def __init__(self, puertas_coche): self.puertas = puertas_coche ###Output _____no_output_____ ###Markdown En la declaración del constructor hemos metido la palabra `self`. Lo tendremos que poner siempre. Hace referencia a la propia instancia de coche, es decir, a cuando creemos coches nuevos.En este caso estamos diferenciando los atributos comunes de la clase *Coche*, de los atributos particulares de los coches, como por ejemplo, el número de puertas. Por eso el número de puertas va junto con `self`, porque no hace referencia a la clase genércia de coche, sino a cada coche que creemos. ###Code citroen = Coche(2) print(citroen) print(citroen.ruedas) print(citroen.puertas) ###Output <__main__.Coche object at 0x000002130F007F10> 4 2 ###Markdown ¿Y si queremos añadir más variables particulares de nuestro coche? 
Pues del mismo modo que lo hacíamos antes, será añadir un parámetro más al constructor y crear una nueva variable con el ```self.``` delante a la hora de hacer la asignación. ###Code class Coche: ruedas = 4 def __init__(self, marca, puertas): self.marca_coche = marca self.puertas = puertas ###Output _____no_output_____ ###Markdown Ejercicio. Crea tu clase cocheCrea tu propia clase coche a partir de la que acabamos de ver. La clase coche tiene que llevar un par de atributos comunes a todos los coches, y otros tres que los introduciremos mediante el constructor. ###Code class Coche: motor = True retrovisores = 3 def __init__(self, marca_coche, num_puertas, combustible = "diesel"): self.marca_coche = marca_coche self.num_puertas = num_puertas self.combustible = combustible audi = Coche("Audi", 2, "gasolina") print(audi.motor) print(audi.retrovisores) print(audi.marca_coche) print(audi.num_puertas) print(audi.combustible) ###Output True 3 Audi 2 gasolina ###Markdown 4. MétodosSon funciones que podemos definir dentro de las clases. Estas funciones cambiarán el estado de algún atributo o realizarán calculos que nos sirvan de output. Un ejemplo sencillo puede ser, un método de la clase coche que saque la potencia en kilovatios, en vez de en caballos. O si tiene un estado de mantenimiento (ITV pasada o no), que modifique ese estado.El constructor es un tipo de método. La diferencia con el resto de métodos radica en su nombre, `__init__`. La sintaxis para definir los métodos es como si fuese una función. Y luego para llamar al método se utiliza `objeto.metodo(argumentos_metodo)`. Esto ya lo hemos usado anteriormente, cuando haciamos un `string.lower()`, simplemente llamábamos al método `lower()`, que no requería de argumentos, de la clase *string*. ###Code class Coche: ruedas = 4 def __init__(self, marca, puertas): self.marca_coche = marca self.puertas = puertas def caracteriscticas(self): return "Marca: " + self.marca_coche + "; Num. Puertas: " + str(self.puertas) + "; Num. Ruedas: " + str(self.ruedas) audi = Coche("Audi", 2) audi.caracteriscticas() ###Output _____no_output_____ ###Markdown Fíjate que para llamar a las ruedas se usa `self`, a pesar de que no lo habíamos metido en el constructor. Así evitamos llamar a otra variable del programa que se llame *ruedas*. Nos aseguramos que son las ruedas de ese coche con el `self`. Ejercicio. Crea nuevos métodosCrea dos métodos nuevos en la clase coche. Introduce dos atributos nuevos en el constructor: Años desde su compra, y precio de compra. Crea un método nuevo que calcule su precio actual. Si el coche tiene 5 años o menos, su precio será del 50% del precio de compra, en caso de que sean más años, será de un 30% ###Code anios_compra = 555555 class Coche: ruedas = 4 def __init__(self, marca, puertas, anios_compra, precio): self.marca_coche = str(marca) self.puertas = puertas self.anios_compra = anios_compra self.precio = precio def caracteriscticas(self): return "Marca: " + self.marca_coche + "; Num. Puertas: " + str(self.puertas) def precio_actual(self): if self.anios_compra <=5: return 0.5 * self.precio else: return 0.3 * self.precio ###Output _____no_output_____ ###Markdown 5. DocumentaciónAl igual que con las funciones, en las clases también podemos documentar con el método *built-in* `__doc__`. Es un método de `class`. Por tanto, podremos poner al principio de la clase una documentación con todo lo que hace esta clase. Ocurre lo mismo con los métodos de la clase. 
Se recomienda dar una breve definición de las funcionalidades de las clases/métodos y describir cómo son las entradas y salidas de los métodos. Qué espera recibir y de qué tipo. ###Code class Coche: ''' Clase coche utilizada como ejemplo para la clase Parametros del constructor: marca_coche: distingue el fabricante del coche. String puertas: hay coches de 2 o 4 puertas. Int ''' ruedas = 4 def __init__(self, marca, puertas): """ Documentacion del init """ self.marca_coche = marca self.puertas = puertas def caracteriscticas(self): ''' comentario caracteristicas ''' return "Marca: " + self.marca_coche + "; Num. Puertas: " + str(self.puertas) print(Coche.__doc__) print(Coche.__init__.__doc__) print(Coche.caracteriscticas.__doc__) ###Output Clase coche utilizada como ejemplo para la clase Parametros del constructor: marca_coche: distingue el fabricante del coche. String puertas: hay coches de 2 o 4 puertas. Int Documentacion del init comentario caracteristicas ###Markdown 6. Resumen ###Code # Las clases se declaran con la siguiente sintaxis class Coche: # Estos son los atributos comunes a los objetos de esta clase ruedas = 4 # Constructor de la clase def __init__(self, marca_coche, num_puertas): # Atributos particulares de cada instancia self.marca_coche = marca_coche self.num_puertas = num_puertas # Metodo propio de esta clase def caracteristicas(self): return "Marca: " + self.marca_coche + ". Num Puertas: " + str(self.num_puertas) + ". Num Ruedas: " + str(self.ruedas) audi = Coche("Audi", 2) print(audi.ruedas) print(audi.marca_coche) print(audi.num_puertas) print(audi.caracteristicas()) ###Output 4 Audi 2 Marca: Audi. Num Puertas: 2. Num Ruedas: 4
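###Markdown Como se mencionó arriba, en otros lenguajes de programación el acceso y la modificación de atributos se controlan mediante métodos `getters` y `setters`. En Python se puede lograr un comportamiento similar con el decorador `property` cuando queremos validar los valores antes de asignarlos. El siguiente es un esquema mínimo e ilustrativo (la clase `CocheValidado` no forma parte del temario anterior, solo muestra la idea): ###Code
class CocheValidado:
    ruedas = 4

    def __init__(self, marca_coche, num_puertas):
        self.marca_coche = marca_coche
        # guardamos el valor "interno" con guion bajo
        self._num_puertas = num_puertas

    @property
    def num_puertas(self):
        # "getter": se accede como si fuera un atributo, sin paréntesis
        return self._num_puertas

    @num_puertas.setter
    def num_puertas(self, valor):
        # "setter": permite validar antes de asignar
        if valor not in (2, 4):
            raise ValueError("Un coche solo puede tener 2 o 4 puertas")
        self._num_puertas = valor

audi = CocheValidado("Audi", 2)
print(audi.num_puertas)
audi.num_puertas = 4      # pasa por el setter
print(audi.num_puertas) ###Output _____no_output_____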
Labs/第6章 逻辑斯谛回归/Labs/example.ipynb
###Markdown We evaluate the model with the confusion matrix: the results show that we have 66+24=90 correct predictions and 2+8=10 incorrect predictions. ###Code
# accuracy evaluation (clg, X_train and y_train are the fitted classifier and training data)
from sklearn.metrics import accuracy_score

accuracy_score(y_true=y_train, y_pred=clg.predict(X_train)) ###Output _____no_output_____
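###Markdown The cell below is a minimal sketch of where those counts come from; it assumes `clg`, `X_train` and `y_train` are the same fitted classifier and training data used above. `confusion_matrix` from scikit-learn returns a table whose diagonal entries are the correct predictions (66 and 24 here) and whose off-diagonal entries are the errors (2 and 8); the accuracy is the trace divided by the total. ###Code
# minimal sketch, assuming clg, X_train and y_train already exist
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_true=y_train, y_pred=clg.predict(X_train))
print(cm)                      # rows: true classes, columns: predicted classes
print(cm.trace() / cm.sum())   # same value reported by accuracy_score ###Output _____no_output_____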
Day 4/.ipynb_checkpoints/Matplotlib Task-checkpoint.ipynb
###Markdown Matplotlib Task ###Code import pandas as pd df = pd.read_csv("data/pandas_tasks.csv") ###Output _____no_output_____ ###Markdown Instructions * Using the loaded dataset, make a boxplot, a bar plot, a histogram, and a scatter plot. * Provide appropriate labels and titles where necessary. * In each plot, use all the parameters covered in the Matplotlib tutorial. * Optionally, add additional parameters to each plot for some extra points. * For each plot, choose appropriate columns from the dataset. The explanation of each variable is provided in a Word document called `Variable_Codebook.docx`. A minimal example sketch is given after the `df.head()` preview below. ###Code df.head() ###Output _____no_output_____
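###Markdown Below is a minimal sketch of the four requested plots. The column names used here (`gender`, `age`, `income`) are placeholders, since `Variable_Codebook.docx` is not included in this notebook; replace them with columns that actually exist in `pandas_tasks.csv`. ###Code
import matplotlib.pyplot as plt

# placeholder column names: replace with real columns from the codebook
cat_col, num_col1, num_col2 = "gender", "age", "income"

fig, axes = plt.subplots(2, 2, figsize=(10, 8))

# boxplot of a numeric column
axes[0, 0].boxplot(df[num_col1].dropna())
axes[0, 0].set_title(f"Boxplot of {num_col1}")
axes[0, 0].set_ylabel(num_col1)

# bar plot of category counts
counts = df[cat_col].value_counts()
axes[0, 1].bar(counts.index.astype(str), counts.values, color="steelblue")
axes[0, 1].set_title(f"Counts of {cat_col}")
axes[0, 1].set_xlabel(cat_col)
axes[0, 1].set_ylabel("count")

# histogram of a numeric column
axes[1, 0].hist(df[num_col2].dropna(), bins=20, color="darkorange", edgecolor="black")
axes[1, 0].set_title(f"Histogram of {num_col2}")
axes[1, 0].set_xlabel(num_col2)
axes[1, 0].set_ylabel("frequency")

# scatter plot of two numeric columns
axes[1, 1].scatter(df[num_col1], df[num_col2], s=15, alpha=0.6)
axes[1, 1].set_title(f"{num_col1} vs {num_col2}")
axes[1, 1].set_xlabel(num_col1)
axes[1, 1].set_ylabel(num_col2)

fig.tight_layout()
plt.show() ###Output _____no_output_____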
temas/IV.optimizacion_convexa_y_machine_learning/4.3.Minimos_cuadrados_R.ipynb
###Markdown **Notas para contenedor de docker:** Comando de docker para ejecución de la nota de forma local:nota: cambiar `` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker.```docker run --rm -v :/datos --name jupyterlab_r_kernel_tidyverse -p 8888:8888 -d palmoreck/jupyterlab_r_kernel_tidyverse:1.1.0```password para jupyterlab: `qwerty`Detener el contenedor de docker:```docker stop jupyterlab_r_kernel_tidyverse``` Documentación de la imagen de docker `palmoreck/jupyterlab_r_kernel_tidyverse:1.1.0` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/r_kernel_tidyverse). --- Nota generada a partir de [liga](https://www.dropbox.com/s/6isby5h1e5f2yzs/4.2.Problemas_de_optimizacion_convexa.pdf?dl=0) En esta nota revisamos a los **mínimos cuadrados lineales con y sin regularización**. La **regularización** que utilizamos es la de **[Tikhonov](https://en.wikipedia.org/wiki/Tikhonov_regularization)** también nombrada $\ell_2$ o ***ridge*** y la $\ell_1$ o también conocida como **[*lasso*](https://en.wikipedia.org/wiki/Lasso_(statistics)** (*least absolute shrinkage and selection operator*, Tibshirani, 1996). Se muestra el uso de **métodos de descenso** (ver [4.2.Algoritmos_para_UCO](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/IV.optimizacion_convexa_y_machine_learning/4.2.Algoritmos_para_UCO.ipynb)) para resolver los problemas de optimización que surgen en los modelos anteriores y **no se tiene por objetivo la interpretación de los coeficientes estimados**. Se comparan los resultados del paquete [glmnet stanford](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html), [glmnet cran](https://cran.r-project.org/web/packages/glmnet/index.html) de R con los obtenidos en la implementación hecha por el prof en [algoritmos/R](algoritmos/R), en específico [algoritmos/R/algorithms_for_uco.R](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/IV.optimizacion_convexa_y_machine_learning/algoritmos/R/algorithms_for_uco.R) para problemas tipo UCO (Unconstrained Convex Optimization). Mínimos cuadradosObsérvese que hay una gran cantidad de modelos por mínimos cuadrados, por ejemplo:* [Lineales](https://en.wikipedia.org/wiki/Linear_least_squares) u [ordinarios](https://en.wikipedia.org/wiki/Ordinary_least_squares) (nombre más usado en Estadística y Econometría).* [Generalizados](https://en.wikipedia.org/wiki/Generalized_least_squares), [ponderados](https://en.wikipedia.org/wiki/Weighted_least_squares).* [No lineales](https://en.wikipedia.org/wiki/Non-linear_least_squares).* [Totales](https://en.wikipedia.org/wiki/Total_least_squares) y [parciales](https://en.wikipedia.org/wiki/Partial_least_squares_regression).* [No negativos](https://en.wikipedia.org/wiki/Non-negative_least_squares).* [Rango reducido](https://epubs.siam.org/doi/abs/10.1137/1.9780898718867.ch7). Mínimos cuadrados lineales Cada uno de los modelos anteriores tienen diversas aplicaciones y propósitos. Los lineales son un caso particular del problema más general de **aproximación por normas**:$$\displaystyle \min_{x \in \mathbb{R}^n} ||Ax-b||$$donde: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ son datos del problema, $x \in \mathbb{R}^n$ es la variable de optimización y $|| \cdot||$ es una norma en $\mathbb{R}^m$. 
**Se asume en toda la nota que $m \geq n $ (más renglones que columnas en $A$)**.$x^* = \text{argmin}_{x \in \mathbb{R}^n} ||Ax-b||$ se le nombra **solución aproximada** de $Ax \approx b$ en la norma $|| \cdot ||$. El vector: $r(x) = Ax -b$ se le nombra **residual** del problema.**Comentario:** el problema de aproximación por normas también se le nombra **problema de regresión**. En este contexto, los vectores $a_1, a_2, \dots, a_n$ (columnas de $A$) son nombradas regresoras o vector de atributos y el vector $\displaystyle \sum_{j=1}^n x_j^*a_j$ con $x^*$ óptimo del problema es nombrado la **regresión de $b$ sobre las regresoras**. $b$ es la **respuesta.** Si en el problema de aproximación de normas anterior se utiliza la norma Euclidiana o norma $2$, $|| \cdot ||_2$, y se eleva al cuadrado la función objetivo se tiene:$$\displaystyle \min_{x \in \mathbb{R}^n} ||Ax-b||^2_2$$que es el modelo por mínimos cuadrados lineales cuyo objetivo es minimizar la suma de cuadrados de las componentes del residual $r(x)$. **A partir de aquí, la variable de optimización será $\beta$ y no $x$**: Supóngase que se han realizado mediciones de un fenómeno de interés en diferentes puntos $x_i$'s resultando en cantidades $y_i$'s $\forall i=0,1,\dots, m$ (se tienen $m+1$ puntos) y además las $y_i$'s contienen un ruido aleatorio causado por errores de medición: El objetivo de los mínimos cuadrados es construir una curva, $f(x|\beta)$ que "mejor" se ajuste a los datos $(x_i,y_i)$, $\forall i=0,1,\dots,m$. El término de "mejor" se refiere a que la suma: $$\displaystyle \sum_{i=0}^m (y_i -f(x_i|\beta))^2$$ sea lo más pequeña posible, esto es, a que la suma de las distancias verticales entre $y_i$ y $f(x_i|\beta)$ $\forall i=0,1,\dots,m$ al cuadrado sea mínima. Por ejemplo: **Obs:*** La notación $f(x|\beta)$ se utiliza para denotar que $\beta$ es un vector de parámetros a estimar, en específico $\beta_0, \beta_1, \dots \beta_n$, esto es: $n+1$ parámetros a estimar. Si $m=3$ y $A \in \mathbb{R}^{3 \times 2}$ geométricamente el problema de **mínimos cuadrados lineales** se puede visualizar con el siguiente dibujo: donde: $r(\beta) = y-A\beta$, el vector $y \in \mathbb{R}^m$ contiene las entradas $y_i$'s y la matriz $A \in \mathbb{R}^{m \times n}$ contiene a las entradas $x_i$'s o funciones de éstas $\forall i=0,1,\dots,m$.. Por el dibujo se tiene que cumplir que $A^Tr(\beta)=0$, esto es: las columnas de $A$ son ortogonales a $r(\beta)$. La condición anterior conduce a las **ecuaciones normales**: $$0=A^Tr(\beta)=A^T(y-A\beta)=A^Ty-A^TA\beta.$$ Finalmente, considerando la variable de optimización $\beta$ y al vector $y$ tenemos: $A^TA \beta = A^Ty$. * En los mínimos cuadrados lineales se supone: $f(x|\beta) = \displaystyle \sum_{j=0}^n\beta_j\phi_j(x)$ con $\phi_j: \mathbb{R} \rightarrow \mathbb{R}$ funciones conocidas por lo que se tiene una gran flexibilidad para el proceso de ajuste. Con las funciones $\phi_j (\cdot)$ se construye a la matriz $A$. * La función objetivo en los mínimos cuadrados lineales puede escribirse de las siguientes formas:$$f_o(\beta)=\displaystyle \sum_{i=1}^{20} (y_i -f_o(x_i|\beta))^2 = \displaystyle \sum_{i=1}^{20} (y_i - A[i,:]^T\beta)^2 = ||y - A \beta||_2^2= (y-A\beta)^T(y-A\beta) = y^Ty-2\beta^TA^Ty + \beta^TA^TA\beta$$ con $A[i,:]$ $i$-ésimo renglón de $A$ visto como un vector en $\mathbb{R}^n$. 
Es común dividir por $2$ la función objetivo para finalmente tener el problema:$$\displaystyle \min_{\beta \in \mathbb{R}^n} \quad \frac{1}{2}y^Ty-\beta^TA^Ty + \frac{1}{2}\beta^TA^TA\beta.$$En cualquier reescritura de la función $f_o$, el problema de aproximación con normas, o bien en su caso particular de mínimos cuadrados, es un problema de **optimización convexa** (ver [4.1.Optimizacion_numerica_y_machine_learning](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/IV.optimizacion_convexa_y_machine_learning/4.1.Optimizacion_numerica_y_machine_learning.ipynb)). Regularización En lo que sigue se utiliza una nomenclatura similar del paquete [glmnet](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html) de R. Una técnica muy utilizada en el contexto de *machine learning* es la regularización, la cual tiene diferentes efectos en la solución de los problemas que surgen en esta área (por ejemplo lidiar con multicolinealidad entre variables, ver [Multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity), o el sobre ajuste, ver [Overfitting](https://en.wikipedia.org/wiki/Overfitting)) . La regularización es un **caso particular** del problema más general de **optimización multicriterio, multiobjetivo, vectorial o también nombrada Pareto**, ver [Multi objective optimization](https://en.wikipedia.org/wiki/Multi-objective_optimization). Al añadir regularización al problema de aproximación por normas, se obtiene un problema de optimización *bi criterion* en el que además de minimizar la norma $||A\beta-y||$, se tiene que encontrar $\beta \in \mathbb{R}^n$ con norma $||\cdot||$ lo más pequeña posible. Esto es, se debe resolver el siguiente problema de **optimización convexa** con dos objetivos $||A\beta-y||$, $||\beta||$:$$\displaystyle \min (||A\beta-y||,||\beta||)$$respecto a $\mathbb{R}^2_+ = \{(u,v) \in \mathbb{R}^2 : u \geq 0, v \geq 0\}$.**Comentario:** en este problema se tiene el *tradeoff* entre tener $||\beta||$ mínima y $||A\beta-y||$ "grande" o mínima $||A\beta-y||$ y $||\beta||$ "grande". La regularización es una técnica para resolver el problema anterior pues se propone una función objetivo como una **suma ponderada** de los dos objetivos anteriores:$$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y|| + \lambda ||\beta||$$donde: $\lambda > 0 $ es un **parámetro** del problema. En esta formulación $\lambda$ varía en $(0, \infty)$ y permite realizar el *tradeoff* en el tamaño entre $||A\beta-y||$ vs $||\beta||$ descrito anteriormente. Entre las elecciones de norma más populares para el problema de regresión con regularización están:* La norma $2$ o $\ell_2$ o Euclidiana que da lugar a la regularización [Tikhonov](https://en.wikipedia.org/wiki/Tikhonov_regularization) o *ridge*: $$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2 + \lambda ||\beta||_2^2 = \beta^T(A^TA + \lambda I )\beta - 2\beta^TAy + y^Ty$$donde: $I$ es la matriz identidad. Este problema **siempre** tiene solución (aún si $A$ es de $rank$ incompleto) pues $A^TA + \lambda I$ es una matriz definida positiva para $\lambda >0$. La solución está dada por: $\beta^* = (A^TA + \lambda I)^{-1}A^Ty$. 
* La norma $1$ o $\ell_1$ o del [taxi](https://en.wikipedia.org/wiki/Taxicab_geometry) que produce la regularización conocida como **[*lasso*](https://en.wikipedia.org/wiki/Lasso_(statistics)** (*least absolute shrinkage and selection operator*, Tibshirani, 1996):$$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2 + \lambda ||\beta||_1$$ **Comentario:** es posible probar que los problemas anteriores son equivalentes a:$$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2$$ $$\text{sujeto a: } ||\beta||^2_2 \leq t$$ para el caso de *ridge* y: $$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2$$ $$\text{sujeto a: } ||\beta||_1 \leq t.$$ para el caso de *lasso* y con $t$ un parámetro que define la regularización y está relacionado con $\lambda$. Las formulaciones anteriores ayudan a visualizar lo que en el proceso de optimización se está buscando: en el dibujo anterior las curvas de nivel de la función objetivo (convexa) se representan como elipses y la variable de optimización es $\beta \in \mathbb{R}^2$. Del lado izquierdo tenemos la bola unitaria bajo la norma $1$ que corresponde a la regularización *lasso* y del lado derecho la bola unitaria bajo la norma $2$ que corresponde a la regularización *ridge*. En ambos dibujos se observa que la solución está dada por $\beta^*$ y que resulta de la intersección de la curva de nivel que toca a la bola unitaria respectiva. * [*Elastic net*](https://www.rdocumentation.org/packages/glmnet/versions/3.0-2/topics/glmnet):$$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2 + \lambda ((1-\alpha)||\beta||^2_2 + \alpha ||\beta||_1)$$para valores $\alpha \in [0,1]$. Obsérvese si $\alpha = 0$ se tiene la regularización *ridge* y si $\alpha=1$ se tiene la regularización *lasso*. Este tipo de regularización realiza un equilibrio entre ambas regularizaciones. 
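Como verificación rápida de la expresión cerrada para la regularización *ridge* mencionada arriba, basta igualar a cero el gradiente de la función objetivo (convexa y diferenciable):$$\nabla_\beta \left( ||A\beta-y||_2^2 + \lambda ||\beta||_2^2 \right) = 2A^T(A\beta-y) + 2\lambda\beta = 0 \quad \Longrightarrow \quad (A^TA + \lambda I)\beta = A^Ty,$$de donde $\beta^* = (A^TA + \lambda I)^{-1}A^Ty$, pues para $\lambda > 0$ la matriz $A^TA + \lambda I$ es simétrica definida positiva y por tanto no singular.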
Ejemplo sin regularización vía descenso en gradiente ###Code install.packages(c("latex2exp","glmnet"),lib="/usr/local/lib/R/site-library/", repos="https://cran.itam.mx/") ###Output also installing the dependencies ‘iterators’, ‘foreach’, ‘shape’ ###Markdown **En este primer ejemplo no usamos regularización, es un problema de mínimos cuadrados lineales.** ###Code #load numerical differentiation #load utils #load algorithms for unconstrained convex optimization #load line search dir_R="algoritmos/R" source(paste(dir_R,"/numerical_differentiation.R", sep="")) source(paste(dir_R,"/utils.R", sep="")) source(paste(dir_R,"/algorithms_for_uco.R", sep="")) source(paste(dir_R,"/line_search.R", sep="")) library(ggplot2) library(latex2exp) library(glmnet) library(magrittr) library(dplyr) ###Output Loading required package: Matrix Loaded glmnet 3.0-2 Attaching package: ‘dplyr’ The following objects are masked from ‘package:stats’: filter, lag The following objects are masked from ‘package:base’: intersect, setdiff, setequal, union ###Markdown Generamos puntos pseudo aleatorios: ###Code set.seed(1989) #para reproducibilidad mpoints <- 20 df <- data.frame(x=rnorm(mpoints)) y <- -3*df$x + rnorm(mpoints,2,1) df$y <- y gg <- ggplot(data=df, aes(x=x, y=y)) gg + geom_point(aes(x=x,y=y),size=2) ###Output _____no_output_____ ###Markdown Usamos la función [lm](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/lm) del paquete `stats` de R para ajustar un modelo de regresión lineal: ###Code linear_model <- lm(df$y~df$x) print(linear_model$coefficients) gg + geom_point(aes(x=x,y=y),size=2) + geom_smooth(method='lm',colour='red') ###Output _____no_output_____ ###Markdown **Aplicamos el método de descenso en gradiente para comparar con lo calculado vía `lm`** Recordamos que el problema de optimización es: $$\displaystyle \min_{\beta \in \mathbb{R}^n} \quad \frac{1}{2}y^Ty-\beta^TA^Ty + \frac{1}{2}\beta^TA^TA\beta$$ **Función objetivo:** ###Code cte <- sum(y*y) A <- matrix(c(rep(1,mpoints),df$x),nrow=mpoints) fo <-function(beta)1/2*cte - sum(beta*(t(A)%*%y)) + 1/2*sum(beta*(t(A)%*%(A%*%beta))) #obsérvese que no se realiza el producto A^TA ###Output _____no_output_____ ###Markdown **Punto inicial $\beta^{(0)}:$** ###Code beta_0 <- matrix(c(0,0),nrow=2) beta_ast <- c(linear_model$coefficients[1],linear_model$coefficients[2]) ###Output _____no_output_____ ###Markdown **$\beta^*$ (punto óptimo)**: ###Code print(beta_ast) ###Output (Intercept) df$x 1.565663 -2.810582 ###Markdown **$p^*$ (valor óptimo)**: ###Code p_ast <- fo(beta_ast) p_ast ###Output _____no_output_____ ###Markdown **argumentos para el método de descenso en gradiente:** ###Code tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 l<-gradient_descent(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) ###Output I Normagf Error x_ast Error p_ast line search 1 7.73e+01 1.00e+00 1.11e+01 --- 2 4.17e+01 5.17e-01 3.11e+00 6.25e-02 3 2.41e+01 2.95e-01 1.03e+00 6.25e-02 4 1.41e+01 1.73e-01 3.52e-01 6.25e-02 5 8.27e+00 1.01e-01 1.21e-01 6.25e-02 6 4.86e+00 5.95e-02 4.18e-02 6.25e-02 7 2.85e+00 3.49e-02 1.44e-02 6.25e-02 8 1.68e+00 2.05e-02 4.97e-03 6.25e-02 9 9.84e-01 1.20e-02 1.72e-03 6.25e-02 10 5.78e-01 7.07e-03 5.92e-04 6.25e-02 11 3.39e-01 4.15e-03 2.04e-04 6.25e-02 12 1.99e-01 2.44e-03 7.04e-05 6.25e-02 13 1.17e-01 1.43e-03 2.43e-05 6.25e-02 14 6.88e-02 8.42e-04 8.38e-06 6.25e-02 15 4.04e-02 4.94e-04 2.89e-06 6.25e-02 16 2.37e-02 2.90e-04 9.96e-07 6.25e-02 17 1.39e-02 1.70e-04 3.43e-07 6.25e-02 18 8.19e-03 1.00e-04 1.19e-07 6.25e-02 
19 4.81e-03 5.89e-05 4.10e-08 6.25e-02 20 2.82e-03 3.46e-05 1.41e-08 6.25e-02 21 1.65e-03 2.03e-05 4.87e-09 6.25e-02 22 9.67e-04 1.18e-05 1.66e-09 6.25e-02 23 5.69e-04 6.95e-06 5.71e-10 6.25e-02 24 3.34e-04 4.11e-06 1.99e-10 6.25e-02 25 1.95e-04 2.39e-06 6.76e-11 6.25e-02 26 1.15e-04 1.40e-06 2.30e-11 6.25e-02 27 6.98e-05 8.42e-07 8.39e-12 6.25e-02 28 3.99e-05 5.13e-07 3.11e-12 6.25e-02 29 2.45e-05 2.62e-07 8.09e-13 6.25e-02 Error of x with respect to x_ast: 2.62e-07 Approximate solution: [,1] [1,] 1.565663 [2,] -2.810581 [1] "Reached maximum number of iterations, check approximation" ###Markdown **Soluciones que están en la lista `l`:** ###Code beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de descenso en gradiente:** ###Code print(beta) ###Output [,1] [1,] 1.565663 [2,] -2.810581 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output (Intercept) df$x 1.565663 -2.810582 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $7$ dígitos de precisión.** **Secuencia de minimización $\beta^{(k)}$**: ###Code beta_plot total_of_iterations ###Output _____no_output_____ ###Markdown **Gráfica de error relativo:** ###Code gg <- ggplot() gg + geom_line(aes(x=25:total_of_iterations,y=Err_plot[25:length(Err_plot)])) + xlab('Iterations') + ylab(TeX('Error relativo entre f_o(x^k) y p^*')) ###Output _____no_output_____ ###Markdown Ejemplos con regularización En el paquete de R [glmnet package](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html) se tiene la función del mismo nombre [glmnet](https://www.rdocumentation.org/packages/glmnet/versions/3.0-2/topics/glmnet) en la que se ajustan [modelos lineales generalizados](https://en.wikipedia.org/wiki/Generalized_linear_model) con penalización *elastic net* (compromiso entre *lasso* y *ridge*). La **función objetivo** en el caso de regresión lineal (familia Gaussiana) utiliza una **pérdida cuadrática** con regularización *elastic net*:$$\displaystyle \min_{(\beta_0, \beta) \in \mathbb{R}^{n+1}} \frac{1}{2m} \sum_{i=1}^m(y_i -\beta_0 - x_i^T\beta)^2 + \lambda \left (\frac{(1-\alpha)}{2}||\beta||^2_2 + \alpha ||\beta||_1 \right )$$ donde: $x_i \in \mathbb{R}^n$. Véase el artículo [Regularization Paths for Generalized Linear Models via Coordinate Descent](https://web.stanford.edu/~hastie/Papers/glmnet.pdf) para esta formulación. **Obsérvese que no** se penaliza la variable $\beta_0$. **Comentarios:*** **Lo que continúa en la nota son comparaciones entre lo obtenido por el paquete de `glmnet` de R vs implementaciones simples de los métodos de descenso por coordenadas y Newton realizadas por el prof. La implementación realizada en el paquete es mucho más general y eficiente que lo realizado por el prof. No se pretende realizar comparaciones en tiempo, memoria ni generalidad en la solución de problemas. Aún así, lo presentado a continuación ayuda a entender el problema que se resuelve y la metodología utilizada.*** También en los ejemplos **no se realizará estandarización de variables (aunque es recomendable realizar esto para ejemplos reales...).** Regularización lasso vía método de Newton **En este segundo ejemplo utilizamos la regularización *lasso***. 
Obsérvese que para este caso la función objetivo en `glmnet` es de la forma:$$\displaystyle \min_{(\beta_0, \beta) \in \mathbb{R}^{n+1}} \frac{1}{2m} \sum_{i=1}^m(y_i -\beta_0 - x_i^T\beta)^2 + \lambda \alpha ||\beta||_1$$ **Comentario:** recuérdese que $||\beta||_1 = \displaystyle \sum_{i=1}^n |\beta_i|$ por lo que la función objetivo continúa siendo convexa pero no es diferenciable en el vector $\beta = 0$ (el valor absoluto es una función no diferenciable en el punto $0$). Ejemplo Simulamos algunos datos: **Obs:** se utilizarán los siguientes argumentos para la función de R: ###Code reg<-.5 #este es lambda * alpha, el parámetro de regularización #en la formulación de glmnet fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 V1 0.000000 V2 -2.416619 ###Markdown **Obsérvese** que $\beta^*_0$ es $0$ y por tanto se puede eliminar el intercepto del modelo. **Comentario:** **A continuación se elimina la primera columna de la matriz $A$** por la observación anterior. La primer columna de $A$ (columna de $1$s) implica considerar un modelo con intercepto: $\beta_0$. Además como se mencionó anteriormente, la implementación en `glmnet` para el caso de *lasso* es más general y tiende a hacer $0$s los coeficientes estimados. La implementación del prof **no está realizando esto** (pues el objetivo es mostrar el uso de métodos de descenso para resolver diferentes problemas, el objetivo no es obtener los mismos resultados que `glmnet`) por lo que los coeficientes que son $0$ no serán estimados correctamente en esta implementación. ###Code print(head(A)) ###Output [,1] [,2] [1,] 1 1.1025783 [2,] 1 1.1178965 [3,] 1 -1.8181019 [4,] 1 -0.1944140 [5,] 1 -0.6131956 [6,] 1 -0.3462673 ###Markdown Eliminamos la primer columna de $A$ que ayuda a la estimación del intercepto: ###Code A<-A[,-1] print(head(A)) ###Output [1] 1.1025783 1.1178965 -1.8181019 -0.1944140 -0.6131956 -0.3462673 ###Markdown **El objetivo entonces es estimar una sola $\beta$: $\beta_1$.** ###Code beta_ast<-beta_ast[2] print(beta_ast) ###Output [1] -2.416619 ###Markdown Usaremos el método de **descenso por coordenadas** y el método de Newton para aproximar a $\beta_1^*$. Ver [4.2.Descenso_por_coordenadas_R.ipynb](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/IV.optimizacion_convexa_y_machine_learning/4.2.Descenso_por_coordenadas_R.ipynb), [4.2.Metodo_de_Newton_Python](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/IV.optimizacion_convexa_y_machine_learning/4.2.Metodo_de_Newton_Python.ipynb). Además usaremos la siguiente función `quita_signo` que ayuda a aproximar la derivada de la función objetivo $f_o$ y así lidiar con la no diferenciabilidad en $0$. La función `quita_signo` realiza:$$\text{quita_signo}(x) = \begin{cases}x & \text{si } x > 0,\\-x & \text{si } x <0,\\\approx 2.22 \times 10^{-308} & \text{si } |x| \approx 0\end{cases}$$ **obsérvese que el valor elegido en el tercer caso es el más pequeño positivo normalizado en un sistema de punto flotante, ver [1.2.Sistema_de_punto_flotante](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/I.computo_cientifico/1.2.Sistema_de_punto_flotante.ipynb).** ###Code quita_signo<-function(beta){ beta<-sign(beta)*beta #la siguiente variable es un índice que localiza aquellas entradas del #vector beta en valor absoluto que son cercanas a 0. 
ind <- beta < .Machine$double.xmin & beta > -.Machine$double.xmin #se asigna a cada entrada localizada en ind el valor más pequeño normalizado #en un sistema de punto flotante beta[ind] <- .Machine$double.xmin beta } .Machine$double.xmin ###Output _____no_output_____ ###Markdown **Comentario:** la definición de la función `quita_signo` se basa en lo que se conoce como *subdifferential* que es un conjunto de [subderivatives](https://en.wikipedia.org/wiki/Subderivative), útiles para generalizar las derivadas para funciones convexas que no son diferenciables en puntos de su dominio. **Así, la función objetivo es:** ###Code fo <-function(beta)1/mpoints*(1/2*cte - sum(beta*(A*y)) + 1/2*sum(beta*(A*(A*beta)))) + reg*sum(quita_signo(beta)) ###Output _____no_output_____ ###Markdown **Valor óptimo:** ###Code p_ast <- fo(beta_ast) p_ast ###Output _____no_output_____ ###Markdown **Punto inicial $\beta^{(0)}:$** ###Code beta_0<-0 ###Output _____no_output_____ ###Markdown Solución vía descenso por coordenadas **Argumentos para el método de descenso por coordenadas:** ###Code tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 l<-coordinate_descent(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) ###Output I Normagf Error x_ast Error p_ast line search 1 4.05e+00 1.00e+00 1.19e+00 --- 2 4.79e-01 1.62e-01 2.93e-02 5.00e-01 3 1.29e-01 3.59e-02 2.08e-03 1.00e+00 4 3.47e-02 1.75e-02 1.08e-04 1.00e+00 5 9.36e-03 3.15e-03 3.50e-05 1.00e+00 6 2.52e-03 7.02e-03 4.54e-05 1.00e+00 7 6.79e-04 5.98e-03 4.62e-05 1.00e+00 8 1.83e-04 6.26e-03 4.62e-05 1.00e+00 9 4.92e-05 6.18e-03 4.62e-05 1.00e+00 10 1.32e-05 6.20e-03 4.62e-05 1.00e+00 11 3.51e-06 6.20e-03 4.62e-05 1.00e+00 12 8.88e-07 6.20e-03 4.62e-05 1.00e+00 13 2.22e-07 6.20e-03 4.62e-05 1.00e+00 14 8.88e-08 6.20e-03 4.62e-05 1.00e+00 15 0.00e+00 6.20e-03 4.62e-05 1.00e+00 Error of x with respect to x_ast: 6.20e-03 Approximate solution:[1] -2.401638 ###Markdown **Soluciones que están en la lista `l`:** ###Code beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de descenso por coordenadas:** ###Code print(beta) ###Output [1] -2.401638 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output [1] -2.416619 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $2$ dígitos de precisión**. 
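Para precisar el comentario anterior sobre el *subdifferential*: la función valor absoluto no es diferenciable en $0$ y su subdiferencial es$$\partial |\beta_i| = \begin{cases} \{1\} & \text{si } \beta_i > 0,\\ \{-1\} & \text{si } \beta_i < 0,\\ [-1,1] & \text{si } \beta_i = 0, \end{cases}$$por lo que cualquier valor en $[-1,1]$ es una pendiente válida en $\beta_i = 0$; la función `quita_signo` únicamente evita evaluar la diferenciación numérica exactamente en ese punto problemático.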
**Secuencia de minimización $\beta^{(k)}$**: ###Code print(beta_plot) gg + geom_point(aes(x=beta_plot,y=0),size=2) + annotate(geom='text', x=0, y=0, label=TeX("x^{(0)}", output='character'), parse=TRUE) + xlab('x') + ylab('y') + ggtitle(TeX('Iter del método de descenso por coordenadas para $f_o$')) ###Output _____no_output_____ ###Markdown Solución vía el método de Newton ###Code l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) ###Output I Normgf Newton Decrement Error x_ast Error p_ast line search condHf 1 4.05e+00 1.29e+01 1.00e+00 1.19e+00 --- 1.00e+00 2 1.00e+00 7.92e-01 3.21e-01 1.29e-01 1.00e+00 1.00e+00 3 6.24e-04 3.07e-07 6.00e-03 4.62e-05 1.00e+00 1.00e+00 4 1.78e-07 2.48e-14 6.20e-03 4.62e-05 1.00e+00 1.00e+00 Error of x with respect to x_ast: 6.20e-03 Approximate solution:[1] -2.401638 ###Markdown **Soluciones que están en la lista `l`:** ###Code beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] -2.401638 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output [1] -2.416619 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $2$ dígitos de precisión.** **Secuencia de minimización $\beta^{(k)}$**: ###Code print(beta_plot) gg + geom_point(aes(x=beta_plot,y=0),size=2) + annotate(geom='text', x=0, y=0, label=TeX("x^{(0)}", output='character'), parse=TRUE) + xlab('x') + ylab('y') + ggtitle(TeX('Iter del método de Newton para $f_o$')) ###Output _____no_output_____ ###Markdown **Comentario:** En ambos métodos se aproxima de forma correcta a $\beta_1^*$. Otro ejemplo: modelo sin intercepto Simulamos otros datos: ###Code set.seed(1989) #para reproducibilidad mpoints <- 50 x1 <- rnorm(mpoints) x2 <- rnorm(mpoints,2,1) y <- 3*x1 -.5*x2 A<-cbind(x1,x2) print(head(A)) print(head(y)) ###Output [1] 2.270786 2.645456 -6.162110 -1.982280 -3.555726 -1.810662 ###Markdown **Reconstruímos a la función objetivo con el nuevo valor de la constante y el número de puntos:** ###Code cte <- sum(y*y) mpoints<-nrow(A) cte mpoints fo <-function(beta)1/mpoints*(1/2*cte - sum(beta*(t(A)%*%y)) + 1/2*sum(beta*(t(A)%*%(A%*%beta)))) + reg*sum(quita_signo(beta)) ###Output _____no_output_____ ###Markdown **Solución vía `glmnet` sin intercepto:** ###Code fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) print(beta_ast) ###Output s0 x1 2.6122015 x2 -0.3794804 ###Markdown **Solución vía método de Newton:** ###Code beta_0<-c(1,1) tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 p_ast <- fo(beta_ast) l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] 2.6122013 -0.3794804 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 x1 2.6122015 x2 -0.3794804 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown Otro ejemplo: dataset [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.6.2/topics/mtcars) de R ###Code print(head(mtcars)) ###Output mpg cyl disp hp drat wt qsec vs am gear carb Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4 Mazda RX4 Wag 21.0 6 160 110 3.90 
2.875 17.02 0 1 4 4 Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1 Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1 Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2 Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1 ###Markdown **Utilizamos las variables numéricas `disp` y `drat`:** ###Code y <- mtcars %>% select(mpg) %>% as.matrix() X <- mtcars %>% select(-mpg) %>% as.matrix() A<-X[,c(2,4)] print(head(A)) ###Output disp drat Mazda RX4 160 3.90 Mazda RX4 Wag 160 3.90 Datsun 710 108 3.85 Hornet 4 Drive 258 3.08 Hornet Sportabout 360 3.15 Valiant 225 2.76 ###Markdown **Ajuste vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*$**: ###Code print(beta_ast) ###Output s0 disp -0.01682177 drat 6.59053287 ###Markdown **Solución vía método de Newton** **Reconstruímos a la función objetivo con el nuevo valor de la constante y el número de puntos:** ###Code cte <- sum(y*y) mpoints<-nrow(A) cte mpoints beta_0<-c(1,1) tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 p_ast <- fo(beta_ast) l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] -0.01682385 6.59064139 ###Markdown **$\beta^*$**: ###Code print(beta_ast) ###Output s0 disp -0.01682177 drat 6.59053287 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $5$ dígitos de precisión.** Probamos cambiar el parámetro de regularización ###Code reg<-0.2 ###Output _____no_output_____ ###Markdown **Ajuste vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) ###Output _____no_output_____ ###Markdown **$\beta^*:$** ###Code beta_ast <- as.matrix(fit$beta) print(beta_ast) ###Output s0 disp -0.01766132 drat 6.66307033 ###Markdown **Solución vía método de Newton:** ###Code beta_0<-c(1,1) tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 p_ast <- fo(beta_ast) l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] -0.01766474 6.66329488 ###Markdown **$\beta^*:$** ###Code print(beta_ast) ###Output s0 disp -0.01766132 drat 6.66307033 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $5$ dígitos de precisión.** **Solución vía descenso por coordenadas:** ###Code beta_0<-c(0,0) l<-coordinate_descent(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de descenso por coordenadas:** ###Code print(beta) ###Output [1] -0.01756723 6.65444873 ###Markdown **$\beta^*:$** ###Code print(beta_ast) ###Output s0 disp -0.01766132 drat 6.66307033 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $2$ dígitos de precisión.** **Secuencia de minimización:** ###Code beta_plot gg + 
geom_point(aes(x=beta_plot[1,],y=beta_plot[2,]),size=2) + annotate(geom='text', x=0, y=-0.1, label=TeX("x^{(0)}", output='character'), parse=TRUE) + xlab('x') + ylab('y') + ggtitle(TeX('Iter del método de descenso por coordenadas para $f_o$')) ###Output _____no_output_____ ###Markdown Comparación con descenso en gradiente ###Code l<-gradient_descent(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de descenso en gradiente:** ###Code print(beta) ###Output [1] 0.05714807 0.15508541 ###Markdown **$\beta^*:$** ###Code print(beta_ast) ###Output s0 disp -0.01766132 drat 6.66307033 ###Markdown **Error relativo:** ###Code compute_error(beta_ast,beta) ###Output _____no_output_____ ###Markdown **Tenemos un error del $97\%$!**. ###Code beta_plot gg + geom_point(aes(x=beta_plot[1,],y=beta_plot[2,]),size=2) + annotate(geom='text', x=0, y=-0.01, label=TeX("x^{(0)}", output='character'), parse=TRUE) + xlab('x') + ylab('y') + ggtitle(TeX('Iter del método de descenso en gradiente para $f_o$')) ###Output _____no_output_____ ###Markdown Otro ejemplo: aumentamos número de columnas para el mismo dataset de [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.6.2/topics/mtcars) de R ###Code A<-X[,c(2,4,5,6)] print(head(A)) ###Output disp drat wt qsec Mazda RX4 160 3.90 2.620 16.46 Mazda RX4 Wag 160 3.90 2.875 17.02 Datsun 710 108 3.85 2.320 18.61 Hornet 4 Drive 258 3.08 3.215 19.44 Hornet Sportabout 360 3.15 3.440 17.02 Valiant 225 2.76 3.460 20.22 ###Markdown **Solución vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) ###Output _____no_output_____ ###Markdown **$\beta^*$**: ###Code beta_ast <- as.matrix(fit$beta) print(beta_ast) ###Output s0 disp 0.0006973293 drat 2.6120443627 wt -3.6222009972 qsec 1.2403485581 ###Markdown **Solución vía método de Newton:** ###Code beta_0<-c(1,1,1,1) tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 p_ast <- fo(beta_ast) ###Output _____no_output_____ ###Markdown **Newtons method** ###Code l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] 0.0006261966 2.6223707978 -3.6105443552 1.2371047935 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 disp 0.0006973293 drat 2.6120443627 wt -3.6222009972 qsec 1.2403485581 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $3$ dígitos de precisión.** **Secuencia de minimización:** ###Code beta_plot ###Output _____no_output_____ ###Markdown Regularización ridge vía SVD Función objetivo en `glmnet`:$$\displaystyle \min_{(\beta_0, \beta) \in \mathbb{R}^{n+1}} \frac{1}{2m} \sum_{i=1}^m(y_i -\beta_0 - x_i^T\beta)^2 + \lambda \left (\frac{(1-\alpha)}{2}||\beta||^2_2 + \alpha ||\beta||_1 \right )$$ Obsérvese que para este caso la función objetivo en `glmnet` es de la forma ($\alpha=0$):$$\displaystyle \min_{(\beta_0, \beta) \in \mathbb{R}^{n+1}} \frac{1}{2m} \sum_{i=1}^m(y_i -\beta_0 - x_i^T\beta)^2 + \frac{\lambda}{2} ||\beta||^2_2$$ **Comentarios:** * A diferencia del caso de *lasso*, la función objetivo para la regularización *ridge* sí es diferenciable en cualquier 
punto de su dominio. Recuérdese que $||\beta||^2_2 = \displaystyle \sum_{i=1}^n \beta_i^2$ y por tanto la función objetivo continúa siendo convexa.* También en este caso a diferencia de *lasso* la solución está dada por la expresión vía la **descomposición en valores singulares de la matriz** $A$, ver [3.3.d.SVD.ipynb](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/III.computo_matricial/3.3.d.SVD.ipynb): $$\begin{eqnarray}\beta^* &=& (A^TA + \lambda I)^{-1} A^Ty \nonumber \\&=& (V (\Sigma^{T})^2 V^T + \lambda I)^{-1} V \Sigma^TU^Ty \nonumber \\&=& V((\Sigma^{T})^2 + \lambda I)^{-1} V^T V \Sigma^T U^Ty \nonumber \\&=& V D U^Ty \nonumber\end{eqnarray}$$donde: $D$ es una matriz diagonal con entradas $\frac{\sigma_i}{\sigma_i^2 + \lambda}$ para $i=1,\dots , n$. * Por la forma de la matriz del sistema de ecuaciones lineales: $A^TA + \lambda I$ aplicar el método de descenso por dirección de Newton equivale a resolver tal sistema con métodos o algoritmos para sistemas de ecuaciones lineales como los revisados en [3.3.Solucion_de_SEL_y_FM](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/III.computo_matricial/3.3.Solucion_de_SEL_y_FM.ipynb). Por esta razón se elige a la SVD como método para resolver el problema de mínimos cuadrados lineales con regularización *ridge*. Usamos el siguiente parámetro de regularización: ###Code reg<-.5 #este es lambda, el parámetro de regularización #en la formulación de glmnet ###Output _____no_output_____ ###Markdown En un primer ejemplo utilizamos las variables numéricas `disp` y `drat` ###Code y <- mtcars %>% select(mpg) %>% as.matrix() X <- mtcars %>% select(-mpg) %>% as.matrix() A<-X[,c(2,4)] print(head(A)) ###Output disp drat Mazda RX4 160 3.90 Mazda RX4 Wag 160 3.90 Datsun 710 108 3.85 Hornet 4 Drive 258 3.08 Hornet Sportabout 360 3.15 Valiant 225 2.76 ###Markdown **Solución vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=0,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*:$** ###Code print(beta_ast) ###Output s0 disp -0.01669418 drat 6.57883316 ###Markdown **Solución vía SVD:** ###Code #svd of A singular_value_decomposition <- svd(A) s <- singular_value_decomposition$d u <- singular_value_decomposition$u v <- singular_value_decomposition$v cte_svd <- s/(s^2+reg)*(t(u)%*%y) beta_ridge <- v%*%cte_svd print(beta_ridge) compute_error(beta_ast,beta_ridge) ###Output _____no_output_____ ###Markdown **Tenemos aproximadamente dos dígitos de precisión.** Añadimos más columnas ###Code A<-X[,c(2,4,5,6)] print(head(A)) ###Output disp drat wt qsec Mazda RX4 160 3.90 2.620 16.46 Mazda RX4 Wag 160 3.90 2.875 17.02 Datsun 710 108 3.85 2.320 18.61 Hornet 4 Drive 258 3.08 3.215 19.44 Hornet Sportabout 360 3.15 3.440 17.02 Valiant 225 2.76 3.460 20.22 ###Markdown **Solución vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=0,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 disp -0.002266897 drat 2.574343046 wt -3.235364158 qsec 1.216575201 ###Markdown **Solución vía SVD:** ###Code #svd of A singular_value_decomposition <- svd(A) s <- singular_value_decomposition$d u <- singular_value_decomposition$u v <- singular_value_decomposition$v cte_svd <- s/(s^2+reg)*(t(u)%*%y) beta_ridge <- v%*%cte_svd print(beta_ridge) ###Output mpg [1,] 0.007263538 [2,] 2.819440010 [3,] -4.490056911 [4,] 
1.271431770 ###Markdown **Error relativo:** ###Code compute_error(beta_ast,beta_ridge) ###Output _____no_output_____ ###Markdown **Tenemos $29\%$ de error**. La razón de la mala estimación es debido al **mal condicionamiento** de la matriz $A^TA$: ###Code kappa(t(A)%*%A,exact=TRUE) ###Output _____no_output_____ ###Markdown El mal condicionamiento de $A$ en Estadística lo relacionan con problemas de multicolinealidad, véase [Multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity): ###Code cor(A) ###Output _____no_output_____ ###Markdown Entonces tenemos alta correlación entre la variable `disp` y `wt`. Seleccionamos otras columnas de $A$ de modo que el número de condición de $A^TA$ no sea como el ejemplo anterior ###Code A<-X[,c(4,5,6)] kappa(t(A)%*%A, exact=TRUE) cor(A) ###Output _____no_output_____ ###Markdown **Solución vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=0,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 drat 2.568637 wt -3.478654 qsec 1.232553 ###Markdown **Solución vía SVD:** ###Code #svd of A singular_value_decomposition <- svd(A) s <- singular_value_decomposition$d u <- singular_value_decomposition$u v <- singular_value_decomposition$v cte_svd <- s/(s^2+reg)*(t(u)%*%y) beta_ridge <- v%*%cte_svd print(beta_ridge) ###Output mpg [1,] 2.892362 [2,] -3.643442 [3,] 1.197395 ###Markdown **Error relativo:** ###Code compute_error(beta_ast,beta_ridge) ###Output _____no_output_____
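###Markdown Una observación final que conecta el mal condicionamiento con la regularización *ridge*: si $A = U\Sigma V^T$ es la SVD de $A$ con valores singulares $\sigma_{\max} \geq \dots \geq \sigma_{\min} > 0$, entonces$$\kappa(A^TA) = \left( \frac{\sigma_{\max}}{\sigma_{\min}} \right)^2 = \kappa(A)^2, \quad \quad \kappa(A^TA + \lambda I) = \frac{\sigma_{\max}^2 + \lambda}{\sigma_{\min}^2 + \lambda} \leq \kappa(A^TA) \quad \forall \lambda \geq 0,$$es decir, la matriz regularizada está mejor condicionada que $A^TA$, lo que ayuda a explicar por qué *ridge* se comporta mejor cuando hay columnas de $A$ altamente correlacionadas.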
Se comparan los resultados del paquete [glmnet stanford](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html), [glmnet cran](https://cran.r-project.org/web/packages/glmnet/index.html) de R con los obtenidos en la implementación hecha por el prof en [algoritmos/R](algoritmos/R), en específico [algoritmos/R/algorithms_for_uco.R](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/IV.optimizacion_convexa_y_machine_learning/algoritmos/R/algorithms_for_uco.R) para problemas tipo UCO (Unconstrained Convex Optimization). Mínimos cuadradosObsérvese que hay una gran cantidad de modelos por mínimos cuadrados, por ejemplo:* [Lineales](https://en.wikipedia.org/wiki/Linear_least_squares) u [ordinarios](https://en.wikipedia.org/wiki/Ordinary_least_squares) (nombre más usado en Estadística y Econometría).* [Generalizados](https://en.wikipedia.org/wiki/Generalized_least_squares), [ponderados](https://en.wikipedia.org/wiki/Weighted_least_squares).* [No lineales](https://en.wikipedia.org/wiki/Non-linear_least_squares).* [Totales](https://en.wikipedia.org/wiki/Total_least_squares) y [parciales](https://en.wikipedia.org/wiki/Partial_least_squares_regression).* [No negativos](https://en.wikipedia.org/wiki/Non-negative_least_squares).* [Rango reducido](https://epubs.siam.org/doi/abs/10.1137/1.9780898718867.ch7). Mínimos cuadrados lineales Cada uno de los modelos anteriores tienen diversas aplicaciones y propósitos. Los lineales son un caso particular del problema más general de **aproximación por normas**:$$\displaystyle \min_{x \in \mathbb{R}^n} ||Ax-b||$$donde: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ son datos del problema, $x \in \mathbb{R}^n$ es la variable de optimización y $|| \cdot||$ es una norma en $\mathbb{R}^m$. **Se asume en toda la nota que $m \geq n $ (más renglones que columnas en $A$)**.$x^* = \text{argmin}_{x \in \mathbb{R}^n} ||Ax-b||$ se le nombra **solución aproximada** de $Ax \approx b$ en la norma $|| \cdot ||$. El vector: $r(x) = Ax -b$ se le nombra **residual** del problema.**Comentario:** el problema de aproximación por normas también se le nombra **problema de regresión**. En este contexto, los vectores $a_1, a_2, \dots, a_n$ (columnas de $A$) son nombradas regresoras o vector de atributos y el vector $\displaystyle \sum_{j=1}^n x_j^*a_j$ con $x^*$ óptimo del problema es nombrado la **regresión de $b$ sobre las regresoras**. $b$ es la **respuesta.** Si en el problema de aproximación de normas anterior se utiliza la norma Euclidiana o norma $2$, $|| \cdot ||_2$, y se eleva al cuadrado la función objetivo se tiene:$$\displaystyle \min_{x \in \mathbb{R}^n} ||Ax-b||^2_2$$que es el modelo por mínimos cuadrados lineales cuyo objetivo es minimizar la suma de cuadrados de las componentes del residual $r(x)$. **A partir de aquí, la variable de optimización será $\beta$ y no $x$**: Supóngase que se han realizado mediciones de un fenómeno de interés en diferentes puntos $x_i$'s resultando en cantidades $y_i$'s $\forall i=0,1,\dots, m$ (se tienen $m+1$ puntos) y además las $y_i$'s contienen un ruido aleatorio causado por errores de medición: El objetivo de los mínimos cuadrados es construir una curva, $f(x|\beta)$ que "mejor" se ajuste a los datos $(x_i,y_i)$, $\forall i=0,1,\dots,m$. El término de "mejor" se refiere a que la suma: $$\displaystyle \sum_{i=0}^m (y_i -f(x_i|\beta))^2$$ sea lo más pequeña posible, esto es, a que la suma de las distancias verticales entre $y_i$ y $f(x_i|\beta)$ $\forall i=0,1,\dots,m$ al cuadrado sea mínima. 
**Obs:** * La notación $f(x|\beta)$ se utiliza para denotar que $\beta$ es un vector de parámetros a estimar, en específico $\beta_0, \beta_1, \dots, \beta_n$, esto es: $n+1$ parámetros a estimar. Si $m=3$ y $A \in \mathbb{R}^{3 \times 2}$ geométricamente el problema de **mínimos cuadrados lineales** se puede visualizar con el siguiente dibujo: donde: $r(\beta) = y-A\beta$, el vector $y \in \mathbb{R}^m$ contiene las entradas $y_i$'s y la matriz $A \in \mathbb{R}^{m \times n}$ contiene a las entradas $x_i$'s o funciones de éstas $\forall i=0,1,\dots,m$. Por el dibujo se tiene que cumplir que $A^Tr(\beta)=0$, esto es: las columnas de $A$ son ortogonales a $r(\beta)$. La condición anterior conduce a las **ecuaciones normales**: $$0=A^Tr(\beta)=A^T(y-A\beta)=A^Ty-A^TA\beta.$$ Finalmente, considerando la variable de optimización $\beta$ y al vector $y$ tenemos: $A^TA \beta = A^Ty$. * En los mínimos cuadrados lineales se supone: $f(x|\beta) = \displaystyle \sum_{j=0}^n\beta_j\phi_j(x)$ con $\phi_j: \mathbb{R} \rightarrow \mathbb{R}$ funciones conocidas por lo que se tiene una gran flexibilidad para el proceso de ajuste. Con las funciones $\phi_j (\cdot)$ se construye a la matriz $A$. * La función objetivo en los mínimos cuadrados lineales puede escribirse de las siguientes formas:$$f_o(\beta)=\displaystyle \sum_{i=1}^{m} (y_i -f(x_i|\beta))^2 = \displaystyle \sum_{i=1}^{m} (y_i - A[i,:]^T\beta)^2 = ||y - A \beta||_2^2= (y-A\beta)^T(y-A\beta) = y^Ty-2\beta^TA^Ty + \beta^TA^TA\beta$$ con $A[i,:]$ $i$-ésimo renglón de $A$ visto como un vector en $\mathbb{R}^n$. Es común dividir por $2$ la función objetivo para finalmente tener el problema:$$\displaystyle \min_{\beta \in \mathbb{R}^n} \quad \frac{1}{2}y^Ty-\beta^TA^Ty + \frac{1}{2}\beta^TA^TA\beta.$$En cualquier reescritura de la función $f_o$, el problema de aproximación por normas, o bien en su caso particular de mínimos cuadrados, es un problema de **optimización convexa** (ver [4.1.Optimizacion_numerica_y_machine_learning](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/IV.optimizacion_convexa_y_machine_learning/4.1.Optimizacion_numerica_y_machine_learning.ipynb)). Regularización En lo que sigue se utiliza una nomenclatura similar a la del paquete [glmnet](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html) de R. Una técnica muy utilizada en el contexto de *machine learning* es la regularización, la cual tiene diferentes efectos en la solución de los problemas que surgen en esta área (por ejemplo lidiar con multicolinealidad entre variables, ver [Multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity), o el sobreajuste, ver [Overfitting](https://en.wikipedia.org/wiki/Overfitting)). La regularización es un **caso particular** del problema más general de **optimización multicriterio, multiobjetivo, vectorial o también nombrada Pareto**, ver [Multi objective optimization](https://en.wikipedia.org/wiki/Multi-objective_optimization). Al añadir regularización al problema de aproximación por normas, se obtiene un problema de optimización *bi criterion* en el que además de minimizar la norma $||A\beta-y||$, se tiene que encontrar $\beta \in \mathbb{R}^n$ con norma $||\cdot||$ lo más pequeña posible. 
Esto es, se debe resolver el siguiente problema de **optimización convexa** con dos objetivos $||A\beta-y||$, $||\beta||$:$$\displaystyle \min (||A\beta-y||,||\beta||)$$respecto a $\mathbb{R}^2_+ = \{(u,v) \in \mathbb{R}^2 : u \geq 0, v \geq 0\}$.**Comentario:** en este problema se tiene el *tradeoff* entre tener $||\beta||$ mínima y $||A\beta-y||$ "grande" o mínima $||A\beta-y||$ y $||\beta||$ "grande". La regularización es una técnica para resolver el problema anterior pues se propone una función objetivo como una **suma ponderada** de los dos objetivos anteriores:$$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y|| + \lambda ||\beta||$$donde: $\lambda > 0 $ es un **parámetro** del problema. En esta formulación $\lambda$ varía en $(0, \infty)$ y permite realizar el *tradeoff* en el tamaño entre $||A\beta-y||$ vs $||\beta||$ descrito anteriormente. Entre las elecciones de norma más populares para el problema de regresión con regularización están:* La norma $2$ o $\ell_2$ o Euclidiana que da lugar a la regularización [Tikhonov](https://en.wikipedia.org/wiki/Tikhonov_regularization) o *ridge*: $$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2 + \lambda ||\beta||_2^2 = \beta^T(A^TA + \lambda I )\beta - 2\beta^TAy + y^Ty$$donde: $I$ es la matriz identidad. Este problema **siempre** tiene solución (aún si $A$ es de $rank$ incompleto) pues $A^TA + \lambda I$ es una matriz definida positiva para $\lambda >0$. La solución está dada por: $\beta^* = (A^TA + \lambda I)^{-1}A^Ty$. * La norma $1$ o $\ell_1$ o del [taxi](https://en.wikipedia.org/wiki/Taxicab_geometry) que produce la regularización conocida como **[*lasso*](https://en.wikipedia.org/wiki/Lasso_(statistics)** (*least absolute shrinkage and selection operator*, Tibshirani, 1996):$$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2 + \lambda ||\beta||_1$$ **Comentario:** es posible probar que los problemas anteriores son equivalentes a:$$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2$$ $$\text{sujeto a: } ||\beta||^2_2 \leq t$$ para el caso de *ridge* y: $$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2$$ $$\text{sujeto a: } ||\beta||_1 \leq t.$$ para el caso de *lasso* y con $t$ un parámetro que define la regularización y está relacionado con $\lambda$. Las formulaciones anteriores ayudan a visualizar lo que en el proceso de optimización se está buscando: en el dibujo anterior las curvas de nivel de la función objetivo (convexa) se representan como elipses y la variable de optimización es $\beta \in \mathbb{R}^2$. Del lado izquierdo tenemos la bola unitaria bajo la norma $1$ que corresponde a la regularización *lasso* y del lado derecho la bola unitaria bajo la norma $2$ que corresponde a la regularización *ridge*. En ambos dibujos se observa que la solución está dada por $\beta^*$ y que resulta de la intersección de la curva de nivel que toca a la bola unitaria respectiva. * [*Elastic net*](https://www.rdocumentation.org/packages/glmnet/versions/3.0-2/topics/glmnet):$$\displaystyle \min_{\beta \in \mathbb{R}^n} ||A\beta-y||_2^2 + \lambda ((1-\alpha)||\beta||^2_2 + \alpha ||\beta||_1)$$para valores $\alpha \in [0,1]$. Obsérvese si $\alpha = 0$ se tiene la regularización *ridge* y si $\alpha=1$ se tiene la regularización *lasso*. Este tipo de regularización realiza un equilibrio entre ambas regularizaciones. 
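La fórmula cerrada de la regularización *ridge*, $\beta^* = (A^TA + \lambda I)^{-1}A^Ty$, puede verificarse numéricamente; el siguiente bosquejo (con datos simulados y nombres hipotéticos, sólo ilustrativo) la calcula resolviendo el sistema lineal asociado: ###Code
# bosquejo ilustrativo: solución cerrada de ridge con datos simulados
set.seed(1)
A_sim <- matrix(rnorm(20 * 3), nrow = 20)
y_sim <- A_sim %*% c(1, -2, 3) + rnorm(20, 0, 0.1)
lambda_sim <- 0.5
# se resuelve (A^T A + lambda I) beta = A^T y
beta_ridge_sim <- solve(t(A_sim) %*% A_sim + lambda_sim * diag(3), t(A_sim) %*% y_sim)
print(beta_ridge_sim)
###Output _____no_output_____
###Markdown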
Ejemplo sin regularización vía descenso en gradiente ###Code install.packages(c("latex2exp","glmnet"),lib="/usr/local/lib/R/site-library/", repos="https://cran.itam.mx/") ###Output _____no_output_____ ###Markdown **En este primer ejemplo no usamos regularización, es un problema de mínimos cuadrados lineales.** ###Code #load numerical differentiation #load utils #load algorithms for unconstrained convex optimization #load line search dir_R="algoritmos/R" source(paste(dir_R,"/numerical_differentiation.R", sep="")) source(paste(dir_R,"/utils.R", sep="")) source(paste(dir_R,"/algorithms_for_uco.R", sep="")) source(paste(dir_R,"/line_search.R", sep="")) library(ggplot2) library(latex2exp) library(glmnet) library(magrittr) library(dplyr) ###Output Loading required package: Matrix Loaded glmnet 4.0 Attaching package: ‘dplyr’ The following objects are masked from ‘package:stats’: filter, lag The following objects are masked from ‘package:base’: intersect, setdiff, setequal, union ###Markdown Generamos puntos pseudo aleatorios: ###Code set.seed(1989) #para reproducibilidad mpoints <- 20 df <- data.frame(x=rnorm(mpoints)) y <- -3*df$x + rnorm(mpoints,2,1) df$y <- y gg <- ggplot(data=df, aes(x=x, y=y)) gg + geom_point(aes(x=x,y=y),size=2) ###Output _____no_output_____ ###Markdown Usamos la función [lm](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/lm) del paquete `stats` de R para ajustar un modelo de regresión lineal: ###Code linear_model <- lm(df$y~df$x) print(linear_model$coefficients) gg + geom_point(aes(x=x,y=y),size=2) + geom_smooth(method='lm',colour='red') ###Output _____no_output_____ ###Markdown **Aplicamos el método de descenso en gradiente para comparar con lo calculado vía `lm`** Recordamos que el problema de optimización es: $$\displaystyle \min_{\beta \in \mathbb{R}^n} \quad \frac{1}{2}y^Ty-\beta^TA^Ty + \frac{1}{2}\beta^TA^TA\beta$$ **Función objetivo:** ###Code cte <- sum(y*y) A <- matrix(c(rep(1,mpoints),df$x),nrow=mpoints) fo <-function(beta)1/2*cte - sum(beta*(t(A)%*%y)) + 1/2*sum(beta*(t(A)%*%(A%*%beta))) #obsérvese que no se realiza el producto A^TA ###Output _____no_output_____ ###Markdown **Punto inicial $\beta^{(0)}:$** ###Code beta_0 <- matrix(c(0,0),nrow=2) beta_ast <- c(linear_model$coefficients[1],linear_model$coefficients[2]) ###Output _____no_output_____ ###Markdown **$\beta^*$ (punto óptimo)**: ###Code print(beta_ast) ###Output (Intercept) df$x 1.565663 -2.810582 ###Markdown **$p^*$ (valor óptimo)**: ###Code p_ast <- fo(beta_ast) p_ast ###Output _____no_output_____ ###Markdown **argumentos para el método de descenso en gradiente:** ###Code tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 l<-gradient_descent(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) ###Output I Normagf Error x_ast Error p_ast line search 1 7.73e+01 1.00e+00 1.11e+01 --- 2 4.17e+01 5.17e-01 3.11e+00 6.25e-02 3 2.41e+01 2.95e-01 1.03e+00 6.25e-02 4 1.41e+01 1.73e-01 3.52e-01 6.25e-02 5 8.27e+00 1.01e-01 1.21e-01 6.25e-02 6 4.86e+00 5.95e-02 4.18e-02 6.25e-02 7 2.85e+00 3.49e-02 1.44e-02 6.25e-02 8 1.68e+00 2.05e-02 4.97e-03 6.25e-02 9 9.84e-01 1.20e-02 1.72e-03 6.25e-02 10 5.78e-01 7.07e-03 5.92e-04 6.25e-02 11 3.39e-01 4.15e-03 2.04e-04 6.25e-02 12 1.99e-01 2.44e-03 7.04e-05 6.25e-02 13 1.17e-01 1.43e-03 2.43e-05 6.25e-02 14 6.88e-02 8.42e-04 8.38e-06 6.25e-02 15 4.04e-02 4.94e-04 2.89e-06 6.25e-02 16 2.37e-02 2.90e-04 9.96e-07 6.25e-02 17 1.39e-02 1.70e-04 3.43e-07 6.25e-02 18 8.19e-03 1.00e-04 1.19e-07 6.25e-02 19 4.81e-03 5.89e-05 4.10e-08 6.25e-02 20 
2.82e-03 3.46e-05 1.41e-08 6.25e-02 21 1.65e-03 2.03e-05 4.87e-09 6.25e-02 22 9.67e-04 1.18e-05 1.66e-09 6.25e-02 23 5.69e-04 6.95e-06 5.71e-10 6.25e-02 24 3.34e-04 4.11e-06 1.99e-10 6.25e-02 25 1.95e-04 2.39e-06 6.76e-11 6.25e-02 26 1.15e-04 1.40e-06 2.30e-11 6.25e-02 27 6.98e-05 8.42e-07 8.39e-12 6.25e-02 28 3.99e-05 5.13e-07 3.11e-12 6.25e-02 29 2.45e-05 2.62e-07 8.09e-13 6.25e-02 Error of x with respect to x_ast: 2.62e-07 Approximate solution: [,1] [1,] 1.565663 [2,] -2.810581 [1] "Reached maximum number of iterations, check approximation" ###Markdown **Soluciones que están en la lista `l`:** ###Code beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de descenso en gradiente:** ###Code print(beta) ###Output [,1] [1,] 1.565663 [2,] -2.810581 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output (Intercept) df$x 1.565663 -2.810582 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $7$ dígitos de precisión.** **Secuencia de minimización $\beta^{(k)}$**: ###Code beta_plot total_of_iterations ###Output _____no_output_____ ###Markdown **Gráfica de error relativo:** ###Code gg <- ggplot() gg + geom_line(aes(x=25:total_of_iterations,y=Err_plot[25:length(Err_plot)])) + xlab('Iterations') + ylab(TeX('Error relativo entre f_o(x^k) y p^*')) ###Output _____no_output_____ ###Markdown Ejemplos con regularización En el paquete de R [glmnet package](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html) se tiene la función del mismo nombre [glmnet](https://www.rdocumentation.org/packages/glmnet/versions/3.0-2/topics/glmnet) en la que se ajustan [modelos lineales generalizados](https://en.wikipedia.org/wiki/Generalized_linear_model) con penalización *elastic net* (compromiso entre *lasso* y *ridge*). La **función objetivo** en el caso de regresión lineal (familia Gaussiana) utiliza una **pérdida cuadrática** con regularización *elastic net*:$$\displaystyle \min_{(\beta_0, \beta) \in \mathbb{R}^{n+1}} \frac{1}{2m} \sum_{i=1}^m(y_i -\beta_0 - x_i^T\beta)^2 + \lambda \left (\frac{(1-\alpha)}{2}||\beta||^2_2 + \alpha ||\beta||_1 \right )$$ donde: $x_i \in \mathbb{R}^n$. Véase el artículo [Regularization Paths for Generalized Linear Models via Coordinate Descent](https://web.stanford.edu/~hastie/Papers/glmnet.pdf) para esta formulación. **Obsérvese que no** se penaliza la variable $\beta_0$. **Comentarios:*** **Lo que continúa en la nota son comparaciones entre lo obtenido por el paquete de `glmnet` de R vs implementaciones simples de los métodos de descenso por coordenadas y Newton realizadas por el prof. La implementación realizada en el paquete es mucho más general y eficiente que lo realizado por el prof. No se pretende realizar comparaciones en tiempo, memoria ni generalidad en la solución de problemas. Aún así, lo presentado a continuación ayuda a entender el problema que se resuelve y la metodología utilizada.*** También en los ejemplos **no se realizará estandarización de variables (aunque es recomendable realizar esto para ejemplos reales...).** Regularización lasso vía método de Newton **En este segundo ejemplo utilizamos la regularización *lasso***. 
Obsérvese que para este caso la función objetivo en `glmnet` es de la forma:$$\displaystyle \min_{(\beta_0, \beta) \in \mathbb{R}^{n+1}} \frac{1}{2m} \sum_{i=1}^m(y_i -\beta_0 - x_i^T\beta)^2 + \lambda \alpha ||\beta||_1$$ **Comentario:** recuérdese que $||\beta||_1 = \displaystyle \sum_{i=1}^n |\beta_i|$ por lo que la función objetivo continúa siendo convexa pero no es diferenciable en el vector $\beta = 0$ (el valor absoluto es una función no diferenciable en el punto $0$). Ejemplo Simulamos algunos datos: **Obs:** se utilizarán los siguientes argumentos para la función de R: ###Code reg<-.5 #este es lambda * alpha, el parámetro de regularización #en la formulación de glmnet fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 V1 0.000000 V2 -2.416619 ###Markdown **Obsérvese** que $\beta^*_0$ es $0$ y por tanto se puede eliminar el intercepto del modelo. **Comentario:** **A continuación se elimina la primera columna de la matriz $A$** por la observación anterior. La primer columna de $A$ (columna de $1$s) implica considerar un modelo con intercepto: $\beta_0$. Además como se mencionó anteriormente, la implementación en `glmnet` para el caso de *lasso* es más general y tiende a hacer $0$s los coeficientes estimados. La implementación del prof **no está realizando esto** (pues el objetivo es mostrar el uso de métodos de descenso para resolver diferentes problemas, el objetivo no es obtener los mismos resultados que `glmnet`) por lo que los coeficientes que son $0$ no serán estimados correctamente en esta implementación. ###Code print(head(A)) ###Output [,1] [,2] [1,] 1 1.1025783 [2,] 1 1.1178965 [3,] 1 -1.8181019 [4,] 1 -0.1944140 [5,] 1 -0.6131956 [6,] 1 -0.3462673 ###Markdown Eliminamos la primer columna de $A$ que ayuda a la estimación del intercepto: ###Code A<-A[,-1] print(head(A)) ###Output [1] 1.1025783 1.1178965 -1.8181019 -0.1944140 -0.6131956 -0.3462673 ###Markdown **El objetivo entonces es estimar una sola $\beta$: $\beta_1$.** ###Code beta_ast<-beta_ast[2] print(beta_ast) ###Output [1] -2.416619 ###Markdown Usaremos el método de **descenso por coordenadas** y el método de Newton para aproximar a $\beta_1^*$. Ver [4.2.Descenso_por_coordenadas_R.ipynb](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/IV.optimizacion_convexa_y_machine_learning/4.2.Descenso_por_coordenadas_R.ipynb), [4.2.Metodo_de_Newton_Python](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/IV.optimizacion_convexa_y_machine_learning/4.2.Metodo_de_Newton_Python.ipynb). Además usaremos la siguiente función `quita_signo` que ayuda a aproximar la derivada de la función objetivo $f_o$ y así lidiar con la no diferenciabilidad en $0$. La función `quita_signo` realiza:$$\text{quita_signo}(x) = \begin{cases}x & \text{si } x > 0,\\-x & \text{si } x <0,\\\approx 2.22 \times 10^{-308} & \text{si } |x| \approx 0\end{cases}$$ **obsérvese que el valor elegido en el tercer caso es el más pequeño positivo normalizado en un sistema de punto flotante, ver [1.2.Sistema_de_punto_flotante](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/I.computo_cientifico/1.2.Sistema_de_punto_flotante.ipynb).** ###Code quita_signo<-function(beta){ beta<-sign(beta)*beta #la siguiente variable es un índice que localiza aquellas entradas del #vector beta en valor absoluto que son cercanas a 0. 
ind <- beta < .Machine$double.xmin & beta > -.Machine$double.xmin #se asigna a cada entrada localizada en ind el valor más pequeño normalizado #en un sistema de punto flotante beta[ind] <- .Machine$double.xmin beta } .Machine$double.xmin ###Output _____no_output_____ ###Markdown **Comentario:** la definición de la función `quita_signo` se basa en lo que se conoce como *subdifferential* que es un conjunto de [subderivatives](https://en.wikipedia.org/wiki/Subderivative), útiles para generalizar las derivadas para funciones convexas que no son diferenciables en puntos de su dominio. **Así, la función objetivo es:** ###Code fo <-function(beta)1/mpoints*(1/2*cte - sum(beta*(A*y)) + 1/2*sum(beta*(A*(A*beta)))) + reg*sum(quita_signo(beta)) ###Output _____no_output_____ ###Markdown **Valor óptimo:** ###Code p_ast <- fo(beta_ast) p_ast ###Output _____no_output_____ ###Markdown **Punto inicial $\beta^{(0)}:$** ###Code beta_0<-0 ###Output _____no_output_____ ###Markdown Solución vía descenso por coordenadas **Argumentos para el método de descenso por coordenadas:** ###Code tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 l<-coordinate_descent(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) ###Output I Normagf Error x_ast Error p_ast line search 1 4.05e+00 1.00e+00 1.19e+00 --- 2 4.79e-01 1.62e-01 2.93e-02 5.00e-01 3 1.29e-01 3.59e-02 2.08e-03 1.00e+00 4 3.47e-02 1.75e-02 1.08e-04 1.00e+00 5 9.36e-03 3.15e-03 3.50e-05 1.00e+00 6 2.52e-03 7.02e-03 4.54e-05 1.00e+00 7 6.79e-04 5.98e-03 4.62e-05 1.00e+00 8 1.83e-04 6.26e-03 4.62e-05 1.00e+00 9 4.92e-05 6.18e-03 4.62e-05 1.00e+00 10 1.32e-05 6.20e-03 4.62e-05 1.00e+00 11 3.51e-06 6.20e-03 4.62e-05 1.00e+00 12 8.88e-07 6.20e-03 4.62e-05 1.00e+00 13 2.22e-07 6.20e-03 4.62e-05 1.00e+00 14 8.88e-08 6.20e-03 4.62e-05 1.00e+00 15 0.00e+00 6.20e-03 4.62e-05 1.00e+00 Error of x with respect to x_ast: 6.20e-03 Approximate solution:[1] -2.401638 ###Markdown **Soluciones que están en la lista `l`:** ###Code beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de descenso por coordenadas:** ###Code print(beta) ###Output [1] -2.401638 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output [1] -2.416619 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $2$ dígitos de precisión**. 
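Para este caso con una sola variable y sin intercepto, la función objetivo anterior tiene además un minimizador exacto vía el operador de *soft-thresholding*; el siguiente bosquejo lo calcula como verificación y debería coincidir aproximadamente con lo obtenido por descenso por coordenadas y por el método de Newton: ###Code
# bosquejo: minimizador exacto de (1/(2m))||y - A*beta||^2 + lambda*|beta| con beta escalar
# beta = S(A^T y, m*lambda) / (A^T A), donde S(z, t) = sign(z) * max(|z| - t, 0)
z <- sum(A * y)
beta_soft <- sign(z) * max(abs(z) - mpoints * reg, 0) / sum(A * A)
print(beta_soft)
###Output _____no_output_____
###Markdown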
**Secuencia de minimización $\beta^{(k)}$**: ###Code print(beta_plot) gg + geom_point(aes(x=beta_plot,y=0),size=2) + annotate(geom='text', x=0, y=0, label=TeX("x^{(0)}", output='character'), parse=TRUE) + xlab('x') + ylab('y') + ggtitle(TeX('Iter del método de descenso por coordenadas para $f_o$')) ###Output _____no_output_____ ###Markdown Solución vía el método de Newton ###Code l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) ###Output I Normgf Newton Decrement Error x_ast Error p_ast line search condHf 1 4.05e+00 1.29e+01 1.00e+00 1.19e+00 --- 1.00e+00 2 1.00e+00 7.92e-01 3.21e-01 1.29e-01 1.00e+00 1.00e+00 3 6.24e-04 3.07e-07 6.00e-03 4.62e-05 1.00e+00 1.00e+00 4 1.78e-07 2.48e-14 6.20e-03 4.62e-05 1.00e+00 1.00e+00 Error of x with respect to x_ast: 6.20e-03 Approximate solution:[1] -2.401638 ###Markdown **Soluciones que están en la lista `l`:** ###Code beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] -2.401638 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output [1] -2.416619 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $2$ dígitos de precisión.** **Secuencia de minimización $\beta^{(k)}$**: ###Code print(beta_plot) gg + geom_point(aes(x=beta_plot,y=0),size=2) + annotate(geom='text', x=0, y=0, label=TeX("x^{(0)}", output='character'), parse=TRUE) + xlab('x') + ylab('y') + ggtitle(TeX('Iter del método de Newton para $f_o$')) ###Output _____no_output_____ ###Markdown **Comentario:** En ambos métodos se aproxima de forma correcta a $\beta_1^*$. Otro ejemplo: modelo sin intercepto Simulamos otros datos: ###Code set.seed(1989) #para reproducibilidad mpoints <- 50 x1 <- rnorm(mpoints) x2 <- rnorm(mpoints,2,1) y <- 3*x1 -.5*x2 A<-cbind(x1,x2) print(head(A)) print(head(y)) ###Output [1] 2.270786 2.645456 -6.162110 -1.982280 -3.555726 -1.810662 ###Markdown **Reconstruímos a la función objetivo con el nuevo valor de la constante y el número de puntos:** ###Code cte <- sum(y*y) mpoints<-nrow(A) cte mpoints fo <-function(beta)1/mpoints*(1/2*cte - sum(beta*(t(A)%*%y)) + 1/2*sum(beta*(t(A)%*%(A%*%beta)))) + reg*sum(quita_signo(beta)) ###Output _____no_output_____ ###Markdown **Solución vía `glmnet` sin intercepto:** ###Code fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) print(beta_ast) ###Output s0 x1 2.6122015 x2 -0.3794804 ###Markdown **Solución vía método de Newton:** ###Code beta_0<-c(1,1) tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 p_ast <- fo(beta_ast) l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] 2.6122013 -0.3794804 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 x1 2.6122015 x2 -0.3794804 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown Otro ejemplo: dataset [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.6.2/topics/mtcars) de R ###Code print(head(mtcars)) ###Output mpg cyl disp hp drat wt qsec vs am gear carb Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4 Mazda RX4 Wag 21.0 6 160 110 3.90 
2.875 17.02 0 1 4 4 Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1 Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1 Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2 Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1 ###Markdown **Utilizamos las variables numéricas `disp` y `drat`:** ###Code y <- mtcars %>% select(mpg) %>% as.matrix() X <- mtcars %>% select(-mpg) %>% as.matrix() A<-X[,c(2,4)] print(head(A)) ###Output disp drat Mazda RX4 160 3.90 Mazda RX4 Wag 160 3.90 Datsun 710 108 3.85 Hornet 4 Drive 258 3.08 Hornet Sportabout 360 3.15 Valiant 225 2.76 ###Markdown **Ajuste vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*$**: ###Code print(beta_ast) ###Output s0 disp -0.01681622 drat 6.59020339 ###Markdown **Solución vía método de Newton** **Reconstruímos a la función objetivo con el nuevo valor de la constante y el número de puntos:** ###Code cte <- sum(y*y) mpoints<-nrow(A) cte mpoints beta_0<-c(1,1) tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 p_ast <- fo(beta_ast) l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] -0.01682385 6.59064139 ###Markdown **$\beta^*$**: ###Code print(beta_ast) ###Output s0 disp -0.01681622 drat 6.59020339 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $5$ dígitos de precisión.** Probamos cambiar el parámetro de regularización ###Code reg<-0.2 ###Output _____no_output_____ ###Markdown **Ajuste vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) ###Output _____no_output_____ ###Markdown **$\beta^*:$** ###Code beta_ast <- as.matrix(fit$beta) print(beta_ast) ###Output s0 disp -0.0176557 drat 6.6627369 ###Markdown **Solución vía método de Newton:** ###Code beta_0<-c(1,1) tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 p_ast <- fo(beta_ast) l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] -0.01766474 6.66329488 ###Markdown **$\beta^*:$** ###Code print(beta_ast) ###Output s0 disp -0.0176557 drat 6.6627369 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $5$ dígitos de precisión.** **Solución vía descenso por coordenadas:** ###Code beta_0<-c(0,0) l<-coordinate_descent(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de descenso por coordenadas:** ###Code print(beta) ###Output [1] -0.01756723 6.65444873 ###Markdown **$\beta^*:$** ###Code print(beta_ast) ###Output s0 disp -0.0176557 drat 6.6627369 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $2$ dígitos de precisión.** **Secuencia de minimización:** ###Code beta_plot gg + 
geom_point(aes(x=beta_plot[1,],y=beta_plot[2,]),size=2) + annotate(geom='text', x=0, y=-0.1, label=TeX("x^{(0)}", output='character'), parse=TRUE) + xlab('x') + ylab('y') + ggtitle(TeX('Iter del método de descenso por coordenadas para $f_o$')) ###Output _____no_output_____ ###Markdown Comparación con descenso en gradiente ###Code l<-gradient_descent(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de descenso en gradiente:** ###Code print(beta) ###Output [1] 0.05714807 0.15508541 ###Markdown **$\beta^*:$** ###Code print(beta_ast) ###Output s0 disp -0.0176557 drat 6.6627369 ###Markdown **Error relativo:** ###Code compute_error(beta_ast,beta) ###Output _____no_output_____ ###Markdown **Tenemos un error del $97\%$!**. ###Code beta_plot gg + geom_point(aes(x=beta_plot[1,],y=beta_plot[2,]),size=2) + annotate(geom='text', x=0, y=-0.01, label=TeX("x^{(0)}", output='character'), parse=TRUE) + xlab('x') + ylab('y') + ggtitle(TeX('Iter del método de descenso en gradiente para $f_o$')) ###Output _____no_output_____ ###Markdown Otro ejemplo: aumentamos número de columnas para el mismo dataset de [mtcars](https://www.rdocumentation.org/packages/datasets/versions/3.6.2/topics/mtcars) de R ###Code A<-X[,c(2,4,5,6)] print(head(A)) ###Output disp drat wt qsec Mazda RX4 160 3.90 2.620 16.46 Mazda RX4 Wag 160 3.90 2.875 17.02 Datsun 710 108 3.85 2.320 18.61 Hornet 4 Drive 258 3.08 3.215 19.44 Hornet Sportabout 360 3.15 3.440 17.02 Valiant 225 2.76 3.460 20.22 ###Markdown **Solución vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=1,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) ###Output _____no_output_____ ###Markdown **$\beta^*$**: ###Code beta_ast <- as.matrix(fit$beta) print(beta_ast) ###Output s0 disp 0.0005464236 drat 2.6004590349 wt -3.6058002328 qsec 1.2416391573 ###Markdown **Solución vía método de Newton:** ###Code beta_0<-c(1,1,1,1) tol <- 1e-8 tol_backtracking <- 1e-14 maxiter <- 30 p_ast <- fo(beta_ast) ###Output _____no_output_____ ###Markdown **Newtons method** ###Code l<-Newtons_method(fo, beta_0, tol, tol_backtracking, beta_ast, p_ast, maxiter) beta <- l[[1]] total_of_iterations <- l[[2]] Err_plot <- l[[3]] beta_plot <- l[[4]] ###Output _____no_output_____ ###Markdown **$\beta$ aproximada por el método de Newton:** ###Code print(beta) ###Output [1] 0.0006261966 2.6223707978 -3.6105443552 1.2371047935 ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 disp 0.0005464236 drat 2.6004590349 wt -3.6058002328 qsec 1.2416391573 ###Markdown **Error relativo:** ###Code compute_error(beta_ast, beta) ###Output _____no_output_____ ###Markdown **Tenemos alrededor de $3$ dígitos de precisión.** **Secuencia de minimización:** ###Code beta_plot ###Output _____no_output_____ ###Markdown Regularización ridge vía SVD Función objetivo en `glmnet`:$$\displaystyle \min_{(\beta_0, \beta) \in \mathbb{R}^{n+1}} \frac{1}{2m} \sum_{i=1}^m(y_i -\beta_0 - x_i^T\beta)^2 + \lambda \left (\frac{(1-\alpha)}{2}||\beta||^2_2 + \alpha ||\beta||_1 \right )$$ Obsérvese que para este caso la función objetivo en `glmnet` es de la forma ($\alpha=0$):$$\displaystyle \min_{(\beta_0, \beta) \in \mathbb{R}^{n+1}} \frac{1}{2m} \sum_{i=1}^m(y_i -\beta_0 - x_i^T\beta)^2 + \frac{\lambda}{2} ||\beta||^2_2$$ **Comentarios:** * A diferencia del caso de *lasso*, la función objetivo para la regularización *ridge* sí es diferenciable en cualquier punto 
de su dominio. Recuérdese que $||\beta||^2_2 = \displaystyle \sum_{i=1}^n \beta_i^2$ y por tanto la función objetivo continúa siendo convexa.* También en este caso a diferencia de *lasso* la solución está dada por la expresión vía la **descomposición en valores singulares de la matriz** $A$, ver [3.3.d.SVD.ipynb](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/III.computo_matricial/3.3.d.SVD.ipynb): $$\begin{eqnarray}\beta^* &=& (A^TA + \lambda I)^{-1} A^Ty \nonumber \\&=& (V (\Sigma^{T})^2 V^T + \lambda I)^{-1} V \Sigma^TU^Ty \nonumber \\&=& V((\Sigma^{T})^2 + \lambda I)^{-1} V^T V \Sigma^T U^Ty \nonumber \\&=& V D U^Ty \nonumber\end{eqnarray}$$donde: $D$ es una matriz diagonal con entradas $\frac{\sigma_i}{\sigma_i^2 + \lambda}$ para $i=1,\dots , n$. * Por la forma de la matriz del sistema de ecuaciones lineales: $A^TA + \lambda I$ aplicar el método de descenso por dirección de Newton equivale a resolver tal sistema con métodos o algoritmos para sistemas de ecuaciones lineales como los revisados en [3.3.Solucion_de_SEL_y_FM](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/III.computo_matricial/3.3.Solucion_de_SEL_y_FM.ipynb). Por esta razón se elige a la SVD como método para resolver el problema de mínimos cuadrados lineales con regularización *ridge*. Usamos el siguiente parámetro de regularización: ###Code reg<-.5 #este es lambda, el parámetro de regularización #en la formulación de glmnet ###Output _____no_output_____ ###Markdown En un primer ejemplo utilizamos las variables numéricas `disp` y `drat` ###Code y <- mtcars %>% select(mpg) %>% as.matrix() X <- mtcars %>% select(-mpg) %>% as.matrix() A<-X[,c(2,4)] print(head(A)) ###Output disp drat Mazda RX4 160 3.90 Mazda RX4 Wag 160 3.90 Datsun 710 108 3.85 Hornet 4 Drive 258 3.08 Hornet Sportabout 360 3.15 Valiant 225 2.76 ###Markdown **Solución vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=0,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*:$** ###Code print(beta_ast) ###Output s0 disp -0.01777365 drat 6.67282053 ###Markdown **Solución vía SVD:** ###Code #svd of A singular_value_decomposition <- svd(A) s <- singular_value_decomposition$d u <- singular_value_decomposition$u v <- singular_value_decomposition$v cte_svd <- s/(s^2+reg)*(t(u)%*%y) beta_ridge <- v%*%cte_svd print(beta_ridge) compute_error(beta_ast,beta_ridge) ###Output _____no_output_____ ###Markdown **Tenemos aproximadamente dos dígitos de precisión.** Añadimos más columnas ###Code A<-X[,c(2,4,5,6)] print(head(A)) ###Output disp drat wt qsec Mazda RX4 160 3.90 2.620 16.46 Mazda RX4 Wag 160 3.90 2.875 17.02 Datsun 710 108 3.85 2.320 18.61 Hornet 4 Drive 258 3.08 3.215 19.44 Hornet Sportabout 360 3.15 3.440 17.02 Valiant 225 2.76 3.460 20.22 ###Markdown **Solución vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=0,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 disp 0.005687125 drat 2.769306293 wt -4.287106707 qsec 1.265099835 ###Markdown **Solución vía SVD:** ###Code #svd of A singular_value_decomposition <- svd(A) s <- singular_value_decomposition$d u <- singular_value_decomposition$u v <- singular_value_decomposition$v cte_svd <- s/(s^2+reg)*(t(u)%*%y) beta_ridge <- v%*%cte_svd print(beta_ridge) ###Output mpg [1,] 0.007263538 [2,] 2.819440010 [3,] -4.490056911 [4,] 1.271431770 
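###Markdown Como se señaló arriba, para *ridge* la dirección de Newton equivale a resolver el sistema lineal $(A^TA+\lambda I)\beta = A^Ty$, que es el mismo sistema que se resolvió vía la SVD; como chequeo ilustrativo, resolverlo directamente debería dar esencialmente el mismo resultado que `beta_ridge`: ###Code
# bosquejo: resolver directamente (A^T A + lambda I) beta = A^T y
# debería coincidir (salvo redondeo) con beta_ridge obtenida vía SVD
beta_directo <- solve(t(A) %*% A + reg * diag(ncol(A)), t(A) %*% y)
print(beta_directo)
###Output _____no_output_____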
###Markdown **Error relativo:** ###Code compute_error(beta_ast,beta_ridge) ###Output _____no_output_____ ###Markdown **Tenemos $3\%$ de error**. La razón de la mala estimación es debido al **mal condicionamiento** de la matriz $A^TA$: ###Code kappa(t(A)%*%A,exact=TRUE) ###Output _____no_output_____ ###Markdown El mal condicionamiento de $A$ en Estadística lo relacionan con problemas de multicolinealidad, véase [Multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity): ###Code cor(A) ###Output _____no_output_____ ###Markdown Entonces tenemos alta correlación entre la variable `disp` y `wt`. Seleccionamos otras columnas de $A$ de modo que el número de condición de $A^TA$ no sea como el ejemplo anterior ###Code A<-X[,c(4,5,6)] kappa(t(A)%*%A, exact=TRUE) cor(A) ###Output _____no_output_____ ###Markdown **Solución vía `glmnet`:** ###Code fit <- glmnet(A,y,alpha=0,lambda=reg,standardize=F,nlambda=1,intercept=F,thresh=1e-8) beta_ast <- as.matrix(fit$beta) ###Output _____no_output_____ ###Markdown **$\beta^*$:** ###Code print(beta_ast) ###Output s0 drat 2.864098 wt -3.615436 qsec 1.198038 ###Markdown **Solución vía SVD:** ###Code #svd of A singular_value_decomposition <- svd(A) s <- singular_value_decomposition$d u <- singular_value_decomposition$u v <- singular_value_decomposition$v cte_svd <- s/(s^2+reg)*(t(u)%*%y) beta_ridge <- v%*%cte_svd print(beta_ridge) ###Output mpg [1,] 2.892362 [2,] -3.643442 [3,] 1.197395 ###Markdown **Error relativo:** ###Code compute_error(beta_ast,beta_ridge) ###Output _____no_output_____
ch09/09.ipynb
###Markdown 9.1 모듈 설정 모듈 설치 ###Code !pip install pykrx ###Output _____no_output_____ ###Markdown 모듈 업데이트 ###Code !pip install -U pykrx import pykrx print(pykrx.__version__) ###Output 1.0.19 ###Markdown 9.3 티커 `2019-02-25`의 티커 목록을 구합니다. ###Code from pykrx import stock tickers = stock.get_market_ticker_list("20190225") print(tickers) tickers = stock.get_market_ticker_list(market="KOSDAQ") print(tickers) t0 = stock.get_market_ticker_list("19950502") t1 = stock.get_market_ticker_list("20210930") print(len(t0), len(t1)) intersection = set(t0) & set(t1) print(len(intersection)) complement = set(t1) - set(t0) print(len(complement)) tickers = stock.get_index_ticker_list("20190225") print(tickers) for t in tickers: name = stock.get_index_ticker_name(t) print(t, name) ###Output 1001 코스피 1002 코스피 대형주 1003 코스피 중형주 1004 코스피 소형주 1005 음식료품 1006 섬유의복 1007 종이목재 1008 화학 1009 의약품 1010 비금속광물 1011 철강금속 1012 기계 1013 전기전자 1014 의료정밀 1015 운수장비 1016 유통업 1017 전기가스업 1018 건설업 1019 운수창고업 1020 통신업 1021 금융업 1022 은행 1024 증권 1025 보험 1026 서비스업 1027 제조업 1028 코스피 200 1034 코스피 100 1035 코스피 50 1150 코스피 200 커뮤니케이션서비스 1151 코스피 200 건설 1152 코스피 200 중공업 1153 코스피 200 철강/소재 1154 코스피 200 에너지/화학 1155 코스피 200 정보기술 1156 코스피 200 금융 1157 코스피 200 생활소비재 1158 코스피 200 경기소비재 1159 코스피 200 산업재 1160 코스피 200 헬스케어 1167 코스피 200 중소형주 1182 코스피 200 초대형제외 지수 1224 코스피 200 비중상한 30% 1227 코스피 200 비중상한 25% 1232 코스피 200 비중상한 20% 1244 코스피200제외 코스피지수 1894 코스피 200 TOP 10 ###Markdown 9.4 OHLCV 조회 ###Code from pykrx import stock df = stock.get_market_ohlcv("20190501", "20190531", "005930") df.head() df = stock.get_market_ohlcv("20180810", "20181212", "005930", "m") df.head() df = stock.get_market_ohlcv("20180401", "20181212", "005930", adjusted=False) df.head() df = stock.get_market_ohlcv("20210104") df.head() df = stock.get_market_ohlcv("20210101", prev=True) df.head() import time tickers = stock.get_market_ticker_list() for t in tickers: df = stock.get_market_ohlcv("20210101", "20210131", t) df.to_excel(f"{t}.xlsx") time.sleep(1) import time tickers = stock.get_index_ticker_list() for t in tickers: df = stock.get_index_ohlcv("20210101", "20210131", t) df.to_excel(f"{t}.xlsx") time.sleep(1) ###Output _____no_output_____ ###Markdown 9.5 BPS/PER/PBR/EPS/DIV/DPS ###Code df = stock.get_market_fundamental("20210108") df.head() df.sort_values("PER") df.query("PER != 0").sort_values("PER") df = stock.get_market_fundamental("20210104", "20210108", "005930") df.head() ###Output _____no_output_____ ###Markdown 9.6 다양한 PYKRX의 함수 ###Code from pykrx import stock df = stock.get_market_price_change("20190101", "20191231") df.head( ) df = stock.get_market_cap("20190102") df.head() df = stock.get_market_cap("20190101") df.head() df = stock.get_market_cap("20190101", prev=True) df.head() import pandas as pd df = stock.get_market_cap("20000104") s0 = df['시가총액'].sort_values(ascending=False).iloc[:5] df = stock.get_market_cap("20190102") s1 = df['시가총액'].sort_values(ascending=False).iloc[:5] data = [s0.reset_index(), s1.reset_index()] df = pd.concat(data, keys=["2000년도", "2019년도"], axis=1) df.head() ###Output _____no_output_____ ###Markdown 9.7 FinanceDataReader ###Code !pip install -U finance-datareader import FinanceDataReader as fdr df = fdr.DataReader(symbol="005930") df df = fdr.DataReader(symbol="AAPL", start="2017") df kospi = fdr.DataReader(symbol="KS11", start="2000") kosdaq = fdr.DataReader(symbol="KQ11", start="2000") kospi kosdaq ###Output _____no_output_____
code/explore_housing.ipynb
###Markdown Explore Housing Price DataThe dataset included in this repository is from [Zillow.com](https://www.zillow.com/research/data/) -- it represents the median price of a 3-bedroom house in every state on a monthly basis since 1996. We're going to use it to practice getting data from the file, compute averages, look for outliers, and explore.First, we import the data and inspect it ###Code # This cell imports python modules that make it easier to access the data import pandas as pd import matplotlib.pyplot as plt import numpy as np # This cell imports the data file and stores it in the variable df df = pd.read_csv('../data/State_Zhvi_3bedroom.csv', index_col='RegionName') ###Output _____no_output_____ ###Markdown Inspect by printing the names of the data columns and the whole date frame. You can also inspect this by opening the data file using the file browser to the left. ###Code df.columns df ###Output _____no_output_____ ###Markdown What do we notice?You should see that the file is organized on a monthly basis, with the rows ordered by the population of the state (Wyoming is last, California is first).You should also notice that there are blanks or "NaN" values for North Dakota between 1996 and 2005 (NaN stands for "Not a Number"). Why? I don't know for sure, but this presumably means that North Dakota didn't provide data to Zillow for that time period, or they didn't have it. This is going to be an issue because some of the analysis we will want to do will be confused by the missing entries.People who do data analysis for a living say that they end up spending **a large percentage of their time** dealing with issues like this -- cleaning up and preparing data in a useable format, even before they are able to do any actual analysis.In this case, since it's only the one state with any missing data, we're just going to ignore that entire row, using the following: ###Code df.dropna(inplace=True) ###Output _____no_output_____ ###Markdown "dropna" is a command that "drops" the "na" values. the **inplace=True** part does this in a way that stores the result in the same variable as before, so we don't have to make a copy. ###Code months = df.columns[2:] months ###Output _____no_output_____ ###Markdown **months** is now a variable holding all the months in the data table, and you can see that it started in April of 1996 and ended in October of 2018. You can now refer to the data in one of two ways, using the month or the state name: ###Code df.loc['Washington'] df['2018-10'] ###Output _____no_output_____ ###Markdown Or you can use the two methods together to access a specific state in a specific month: ###Code print('Apr 1996:', df.loc['Washington']['1996-04']) print('Oct 2018:', df.loc['Washington']['2018-10']) ###Output Apr 1996: 137200.0 Oct 2018: 369300.0 ###Markdown First taskYour first task is to calculate the average nationwide price in October of 2018 using the **accumulator** pattern we used for the rainfall problem, which means you need to **initialize a variable** to hold the total of all the home prices during October 2018, then use a **loop** to accumulate the values, and divide by the number of data points to get the average.The cell below is setup with a loop that will access each state in turn -- right now it prints out the state name. 
You will need to add some code before the loop to initialize the variables, and some code in the loop to access the home price for every state during the month of October 2018 and add it to the running total: ###Code total_price = 0 for state in df.index: state_price = df.loc[state]['2018-10'] total_price = total_price + state_price #print(state) #print(df.loc[state]['2018-10']) #print(total_price) average = (total_price/len(df.index)) average = round(average, 2) print('Average:', '$', average) #max_value = max(df.loc[df.index]['2018-10']) #print('Maximum value:', '$', max_value) #min_value = min(df.loc[df.index]['2018-10']) #print('Minimum value:', '$', min_value) ###Output Average: $ 239131.91 ###Markdown Second Task * How would you modify or use the code above to compute the national average for a different month? * How would you modify the code above to find the maximum value in a given month? The minimum? * How could you modify or use this code to quickly and easily compute the average price for every month in the data set? * Complete at least one of these tasks and make a markdown cell to explain what you did. To compute the national average for a different month, you would just change the month referenced in the loop from '2018-10' to another month, for example '2000-01'. To find the maximum value in a given month, I would add a line at the end to create a new variable that sets itself to the maximum value in a given month using the max() command. I would do the same to find the minimum, but instead use the min() command.To compute the average price for every month in the data set, I would add a second for loop that executes the code above for every month. ###Code total_price = 0 for state in df.index: state_price = df.loc[state]['2018-09'] total_price = total_price + state_price #print(state) #print(df.loc[state]['2018-10']) #print(total_price) average = (total_price/len(df.index)) average = round(average, 2) print('Average:', '$', average) ###Output Average: $ 237878.72 ###Markdown To compute the national average for a different month, I changed the month referenced when the for loop defines the state_price variable. 
###Code for month in months: total_price = 0 for state in df.index: state_price = df.loc[state][month] total_price = total_price + state_price monthly_average = (total_price/len(df.index)) monthly_average = round(monthly_average, 2) print(month, 'Average:', '$', monthly_average) ###Output 1996-04 Average: $ 102910.64 1996-05 Average: $ 102982.98 1996-06 Average: $ 103065.96 1996-07 Average: $ 103138.3 1996-08 Average: $ 103261.7 1996-09 Average: $ 103389.36 1996-10 Average: $ 103551.06 1996-11 Average: $ 103757.45 1996-12 Average: $ 104025.53 1997-01 Average: $ 104351.06 1997-02 Average: $ 104685.11 1997-03 Average: $ 104929.79 1997-04 Average: $ 105193.62 1997-05 Average: $ 105446.81 1997-06 Average: $ 105695.74 1997-07 Average: $ 105948.94 1997-08 Average: $ 106223.4 1997-09 Average: $ 106525.53 1997-10 Average: $ 106876.6 1997-11 Average: $ 107255.32 1997-12 Average: $ 107695.74 1998-01 Average: $ 108174.47 1998-02 Average: $ 108629.79 1998-03 Average: $ 109027.66 1998-04 Average: $ 109412.77 1998-05 Average: $ 109800.0 1998-06 Average: $ 110197.87 1998-07 Average: $ 110614.89 1998-08 Average: $ 111042.55 1998-09 Average: $ 111465.96 1998-10 Average: $ 111893.62 1998-11 Average: $ 112329.79 1998-12 Average: $ 112797.87 1999-01 Average: $ 113336.17 1999-02 Average: $ 113848.94 1999-03 Average: $ 114336.17 1999-04 Average: $ 114855.32 1999-05 Average: $ 115404.26 1999-06 Average: $ 115972.34 1999-07 Average: $ 116540.43 1999-08 Average: $ 117114.89 1999-09 Average: $ 117702.13 1999-10 Average: $ 118317.02 1999-11 Average: $ 118934.04 1999-12 Average: $ 119621.28 2000-01 Average: $ 120359.57 2000-02 Average: $ 121089.36 2000-03 Average: $ 121763.83 2000-04 Average: $ 122429.79 2000-05 Average: $ 123063.83 2000-06 Average: $ 123676.6 2000-07 Average: $ 124268.09 2000-08 Average: $ 124855.32 2000-09 Average: $ 125425.53 2000-10 Average: $ 126046.81 2000-11 Average: $ 126731.91 2000-12 Average: $ 127510.64 2001-01 Average: $ 128372.34 2001-02 Average: $ 129240.43 2001-03 Average: $ 130100.0 2001-04 Average: $ 130961.7 2001-05 Average: $ 131806.38 2001-06 Average: $ 132644.68 2001-07 Average: $ 133442.55 2001-08 Average: $ 134227.66 2001-09 Average: $ 134982.98 2001-10 Average: $ 135708.51 2001-11 Average: $ 136442.55 2001-12 Average: $ 137206.38 2002-01 Average: $ 138000.0 2002-02 Average: $ 138806.38 2002-03 Average: $ 139612.77 2002-04 Average: $ 140391.49 2002-05 Average: $ 141159.57 2002-06 Average: $ 141963.83 2002-07 Average: $ 142806.38 2002-08 Average: $ 143714.89 2002-09 Average: $ 144685.11 2002-10 Average: $ 145717.02 2002-11 Average: $ 146831.91 2002-12 Average: $ 147972.34 2003-01 Average: $ 149040.43 2003-02 Average: $ 149995.74 2003-03 Average: $ 151000.0 2003-04 Average: $ 152044.68 2003-05 Average: $ 153119.15 2003-06 Average: $ 154257.45 2003-07 Average: $ 155448.94 2003-08 Average: $ 156648.94 2003-09 Average: $ 157891.49 2003-10 Average: $ 159140.43 2003-11 Average: $ 160400.0 2003-12 Average: $ 161678.72 2004-01 Average: $ 163014.89 2004-02 Average: $ 164459.57 2004-03 Average: $ 166040.43 2004-04 Average: $ 167738.3 2004-05 Average: $ 169531.91 2004-06 Average: $ 171423.4 2004-07 Average: $ 173357.45 2004-08 Average: $ 175293.62 2004-09 Average: $ 177214.89 2004-10 Average: $ 179153.19 2004-11 Average: $ 181002.13 2004-12 Average: $ 182806.38 2005-01 Average: $ 184510.64 2005-02 Average: $ 186087.23 2005-03 Average: $ 187621.28 2005-04 Average: $ 189212.77 2005-05 Average: $ 190806.38 2005-06 Average: $ 192429.79 2005-07 Average: $ 194070.21 2005-08 Average: $ 195687.23 
2005-09 Average: $ 197244.68 2005-10 Average: $ 198746.81 2005-11 Average: $ 200155.32 2005-12 Average: $ 201455.32 2006-01 Average: $ 202653.19 2006-02 Average: $ 203731.91 2006-03 Average: $ 204793.62 2006-04 Average: $ 205825.53 2006-05 Average: $ 206746.81 2006-06 Average: $ 207546.81 2006-07 Average: $ 208208.51 2006-08 Average: $ 208700.0 2006-09 Average: $ 209014.89 2006-10 Average: $ 209185.11 2006-11 Average: $ 209234.04 2006-12 Average: $ 209276.6 2007-01 Average: $ 209361.7 2007-02 Average: $ 209434.04 2007-03 Average: $ 209491.49 2007-04 Average: $ 209517.02 2007-05 Average: $ 209387.23 2007-06 Average: $ 209117.02 2007-07 Average: $ 208751.06 2007-08 Average: $ 208282.98 2007-09 Average: $ 207729.79 2007-10 Average: $ 207085.11 2007-11 Average: $ 206295.74 2007-12 Average: $ 205385.11 2008-01 Average: $ 204393.62 2008-02 Average: $ 203302.13 2008-03 Average: $ 202174.47 2008-04 Average: $ 200974.47 2008-05 Average: $ 199627.66 2008-06 Average: $ 198138.3 2008-07 Average: $ 196625.53 2008-08 Average: $ 195117.02 2008-09 Average: $ 193702.13 2008-10 Average: $ 192363.83 2008-11 Average: $ 191085.11 2008-12 Average: $ 189863.83 2009-01 Average: $ 188738.3 2009-02 Average: $ 187614.89 2009-03 Average: $ 186436.17 2009-04 Average: $ 185104.26 2009-05 Average: $ 183608.51 2009-06 Average: $ 182121.28 2009-07 Average: $ 180936.17 2009-08 Average: $ 180076.6 2009-09 Average: $ 179455.32 2009-10 Average: $ 179034.04 2009-11 Average: $ 178765.96 2009-12 Average: $ 178700.0 2010-01 Average: $ 178778.72 2010-02 Average: $ 178959.57 2010-03 Average: $ 178757.45 2010-04 Average: $ 178274.47 2010-05 Average: $ 178012.77 2010-06 Average: $ 177921.28 2010-07 Average: $ 177474.47 2010-08 Average: $ 176763.83 2010-09 Average: $ 175982.98 2010-10 Average: $ 175221.28 2010-11 Average: $ 174414.89 2010-12 Average: $ 173480.85 2011-01 Average: $ 172491.49 2011-02 Average: $ 171510.64 2011-03 Average: $ 170597.87 2011-04 Average: $ 169744.68 2011-05 Average: $ 169029.79 2011-06 Average: $ 168344.68 2011-07 Average: $ 167776.6 2011-08 Average: $ 167293.62 2011-09 Average: $ 166793.62 2011-10 Average: $ 166302.13 2011-11 Average: $ 165876.6 2011-12 Average: $ 165551.06 2012-01 Average: $ 165244.68 2012-02 Average: $ 165065.96 2012-03 Average: $ 165153.19 2012-04 Average: $ 165440.43 2012-05 Average: $ 165817.02 2012-06 Average: $ 166234.04 2012-07 Average: $ 166691.49 2012-08 Average: $ 167165.96 2012-09 Average: $ 167619.15 2012-10 Average: $ 168100.0 2012-11 Average: $ 168736.17 2012-12 Average: $ 169425.53 2013-01 Average: $ 170070.21 2013-02 Average: $ 170770.21 2013-03 Average: $ 171663.83 2013-04 Average: $ 172742.55 2013-05 Average: $ 173865.96 2013-06 Average: $ 174968.09 2013-07 Average: $ 176095.74 2013-08 Average: $ 177210.64 2013-09 Average: $ 178189.36 2013-10 Average: $ 179019.15 2013-11 Average: $ 179797.87 2013-12 Average: $ 180536.17 2014-01 Average: $ 181236.17 2014-02 Average: $ 181848.94 2014-03 Average: $ 182485.11 2014-04 Average: $ 183193.62 2014-05 Average: $ 183887.23 2014-06 Average: $ 184489.36 2014-07 Average: $ 185136.17 2014-08 Average: $ 185804.26 2014-09 Average: $ 186438.3 2014-10 Average: $ 187031.91 2014-11 Average: $ 187693.62 2014-12 Average: $ 188448.94 2015-01 Average: $ 189165.96 2015-02 Average: $ 189934.04 2015-03 Average: $ 190774.47 2015-04 Average: $ 191640.43 2015-05 Average: $ 192478.72 2015-06 Average: $ 193314.89 2015-07 Average: $ 194165.96 2015-08 Average: $ 195065.96 2015-09 Average: $ 195991.49 2015-10 Average: $ 196902.13 2015-11 Average: $ 
197772.34 2015-12 Average: $ 198670.21 2016-01 Average: $ 199617.02 2016-02 Average: $ 200631.91 2016-03 Average: $ 201753.19 2016-04 Average: $ 202906.38 2016-05 Average: $ 203991.49 2016-06 Average: $ 205097.87 2016-07 Average: $ 206310.64 2016-08 Average: $ 207468.09 2016-09 Average: $ 208568.09 2016-10 Average: $ 209746.81 2016-11 Average: $ 211025.53 2016-12 Average: $ 212389.36 2017-01 Average: $ 213668.09 2017-02 Average: $ 214819.15 2017-03 Average: $ 216027.66 2017-04 Average: $ 217272.34 2017-05 Average: $ 218431.91 2017-06 Average: $ 219372.34 2017-07 Average: $ 220240.43 2017-08 Average: $ 221308.51 2017-09 Average: $ 222427.66 2017-10 Average: $ 223610.64 2017-11 Average: $ 224972.34 2017-12 Average: $ 226342.55 2018-01 Average: $ 227902.13 2018-02 Average: $ 229640.43 2018-03 Average: $ 231012.77 2018-04 Average: $ 232008.51 2018-05 Average: $ 233176.6 2018-06 Average: $ 234425.53 2018-07 Average: $ 235559.57 2018-08 Average: $ 236587.23 2018-09 Average: $ 237878.72 2018-10 Average: $ 239131.91 ###Markdown To find the monthly average for every month, I added a second for loop that executed the for loop from before for each month, then printed that out. I also had to make it set the total_price back to 0 after each month, or the values would get progressively larger instead of being a true average. ###Code total_price = 0 maximum_price = 0 minimum_price = df.iloc[0]['2018-10'] for state in df.index: state_price = df.loc[state]['2018-10'] total_price = total_price + state_price if state_price > maximum_price: maximum_price = state_price if state_price < minimum_price: minimum_price = state_price #print(state) #print(df.loc[state]['2018-10']) #print(total_price) average = (total_price/len(df.index)) average = round(average, 2) print('Average:', '$', average) print("") print('Maximum price:', '$', int(maximum_price)) print('Minimum price:', '$', int(minimum_price)) print("") max_value = max(df.loc[df.index]['2018-10']) print('Maximum value:', '$', max_value) min_value = min(df.loc[df.index]['2018-10']) print('Minimum value:', '$', min_value) ###Output Average: $ 239131.91 Maximum price: $ 656900 Minimum price: $ 109100 Maximum value: $ 656900 Minimum value: $ 109100 ###Markdown This cell finds the maximum price of any state by first setting the variable to 0, and then using the for loop to check whether each new state price found is greater than the previous maximum value. If it is, it sets the maximum to be the new price, and if it isn't, it keeps it the same. It finds the minimum in much the same way, but instead of setting the initial minimum_price variable to be 0 (since no state price would ever be less than that) it sets it to the price of the first state in the list. Interestingly, the for loop method of calculating the maximum and minimum adds a decimal point to the values (converts them to a float) whereas the max and min operators method keeps them as integers. Third Task * Which state changed the most in the 22 years? The least? * How could you find the median value in a given month? * The most common value? * Is there a way to identify states whose pricing is very different from the national average or trends? * Complete at least the first of these (determine which states changed the most and the least in the 22 years) and explain how you did it. 
###Code greatest_change = 0 least_change = (df.iloc[0]['2018-10'] - df.iloc[0]['1996-04']) greatest_change_state = '' least_change_state = '' for state in df.index: price_difference = (df.loc[state]['2018-10'] - df.loc[state]['1996-04']) if price_difference > greatest_change: greatest_change = price_difference greatest_change_state = state if price_difference < least_change: least_change = price_difference least_change_state = state print('State with greatest change:', greatest_change_state, '.', 'Change: $', int(greatest_change)) print('State with least change:', least_change_state, '.', 'Change: $', int(least_change)) ###Output State with greatest change: District of Columbia . Change: $ 525200 State with least change: Indiana . Change: $ 48800 ###Markdown This cell finds the greatest change of any state over the 22 year range by defining a variable called greatest_change, and then using the for loop to calculate the price difference of each state between October 2018 and April 1996, then checking if that new value is greater than the previous greatest change. If it is, it sets greatest_change to be equal to that price_difference and sets the greatest_change_state variable to be equal to that state, and if not, it keeps the value the same. It works much the same to find the least change, but sets the initial value for least_change to be equal to the price difference for the first state in the list, since no difference would ever be smaller than 0. ###Code reduced_list = df.index[0:] state_prices = [] month = '2018-10' median_state = [] for state in reduced_list: price = int(df.loc[state][month]) state_prices.append(price) state_prices.sort() print('State prices:', state_prices) state_count = len(state_prices) print('Number of states:', state_count) if state_count % 2 != 0: print('Odd') median_value = state_prices[(round(state_count / 2)) - 1] else: print('Even') median_value = (state_prices[int(state_count / 2)] + state_prices[(int(state_count / 2)) - 1]) / 2 for state in reduced_list: if state_count % 2 != 0: if df.loc[state][month] == median_value: median_state.append(state) else: if (df.loc[state][month] == state_prices[int(state_count / 2)]) or (df.loc[state][month] == state_prices[(int(state_count / 2)) - 1]): median_state.append(state) #state_prices.index(median_value) print('Median value: $', median_value) print('State(s) with median value:', median_state) #median_price = median(state_prices) #print('Median price: $', median_price) ###Output State prices: [109100, 124400, 130500, 131200, 135200, 141300, 142900, 146800, 152400, 156300, 158200, 164600, 164700, 167000, 167200, 169100, 169300, 170700, 177000, 179100, 181100, 187700, 187900, 192700, 200000, 220100, 224000, 230300, 233700, 236700, 238700, 241600, 244000, 246200, 277000, 278700, 281300, 316700, 320000, 331200, 366000, 369300, 378300, 383000, 518200, 640900, 656900] Number of states: 47 Odd Median value: $ 192700 State(s) with median value: ['Wisconsin'] ###Markdown This cell finds the median value of all states for a given month, and then searches the list of states to find which state corresponds to that value. For an odd number of items in the list, it finds the median value by looking at the middle term (in the ordered list), then uses a for loop to match that to the correct state. However, having an even number of items in the list creates an added challenge, since there is no one median state: the median value is the average of the two center values. 
For this reason, it calculates the median value by averaging the two center items in the list, and then finds the two states that correspond to the center values. ###Code state_prices = [] month = '2018-10' interval_count = 5 for state in reduced_list: price = int(df.loc[state][month]) state_prices.append(price) state_prices.sort() print('State prices:', state_prices) state_count = len(state_prices) print('Number of states:', state_count) state_range = state_prices[(state_count-1)] - state_prices[0] print("Range:", state_range) interval_size = int(state_range / interval_count) print("Interval size:", interval_size) bottom_value = state_prices[0] interval_one = [] interval_two = [] interval_three = [] interval_four = [] interval_five = [] interval_frequency = [] for i in range(interval_count): interval_frequency.append(0) #print("Interval Frequency:", interval_frequency) for i in range(interval_count): for state in df.index: if (int(df.loc[state][month]) >= (bottom_value + (interval_size * i))) and (int(df.loc[state][month]) < (bottom_value + (interval_size * (i + 1)))): interval_frequency[i] = interval_frequency[i] + 1 elif i == (interval_count - 1) and (int(df.loc[state][month]) >= (bottom_value + (interval_size * i))) and (int(df.loc[state][month]) <= (bottom_value + (interval_size * (i + 1)))): interval_frequency[i] = interval_frequency[i] + 1 for state in df.index: if (int(df.loc[state][month]) >= bottom_value) and (int(df.loc[state][month]) < (bottom_value + interval_size)): interval_one.append(state) #interval_frequency[0] = interval_frequency[0] + 1 elif (int(df.loc[state][month]) >= (bottom_value + interval_size)) and (int(df.loc[state][month]) < (bottom_value + 2 * interval_size)): interval_two.append(state) elif (int(df.loc[state][month]) >= (bottom_value + 2 * interval_size)) and (int(df.loc[state][month]) < (bottom_value + 3 * interval_size)): interval_three.append(state) elif (int(df.loc[state][month]) >= (bottom_value + 3 * interval_size)) and (int(df.loc[state][month]) < (bottom_value + 4 * interval_size)): interval_four.append(state) elif (int(df.loc[state][month]) >= (bottom_value + 4 * interval_size)) and (int(df.loc[state][month]) <= (bottom_value + 5 * interval_size)): interval_five.append(state) print("Interval Frequency:", interval_frequency) print("Interval 1:", interval_one) print("Interval 2:", interval_two) print("Interval 3:", interval_three) print("Interval 4:", interval_four) print("Interval 5:", interval_five) ###Output State prices: [109100, 124400, 130500, 131200, 135200, 141300, 142900, 146800, 152400, 156300, 158200, 164600, 164700, 167000, 167200, 169100, 169300, 170700, 177000, 179100, 181100, 187700, 187900, 192700, 200000, 220100, 224000, 230300, 233700, 236700, 238700, 241600, 244000, 246200, 277000, 278700, 281300, 316700, 320000, 331200, 366000, 369300, 378300, 383000, 518200, 640900, 656900] Number of states: 47 Range: 547800 Interval size: 109560 Interval Frequency: [25, 14, 5, 1, 2] Interval 1: ['Texas', 'New York', 'Illinois', 'Pennsylvania', 'Ohio', 'Michigan', 'Georgia', 'North Carolina', 'Indiana', 'Arizona', 'Tennessee', 'Missouri', 'Wisconsin', 'Alabama', 'South Carolina', 'Kentucky', 'Oklahoma', 'Iowa', 'Mississippi', 'Arkansas', 'Kansas', 'New Mexico', 'West Virginia', 'Nebraska', 'South Dakota'] Interval 2: ['Florida', 'Virginia', 'Minnesota', 'Connecticut', 'Utah', 'Nevada', 'Idaho', 'Maine', 'New Hampshire', 'Rhode Island', 'Montana', 'Delaware', 'Alaska', 'Wyoming'] Interval 3: ['New Jersey', 'Washington', 'Massachusetts', 'Colorado', 
'Oregon'] Interval 4: ['California'] Interval 5: ['Hawaii', 'District of Columbia'] ###Markdown This cell creates a histrogram distribution for all state prices in a given month. To do this, it creates a list containing all of the prices and sorts it, then calculates the range. It then uses the range to calculate the size of the intervals based on the modifiable value of the number of intervals. Using a for loop, it adds 1 to the interval that each state fits into, providing an overall tally. ###Code state_prices = [] month = '2018-10' total_price = 0 national_average = 0 for state in df.index: price = int(df.loc[state][month]) state_prices.append(price) total_price = total_price + price state_prices.sort() #del state_prices[46] #del state_prices[45] #del state_prices[44] print('State prices:', state_prices) print("Total price: $", total_price) national_average = int((total_price) / len(state_prices)) print("National Average: $", national_average) outlier_states = [] for state in df.index: if df.loc[state][month] > (national_average + (2 * df[month].std())) or df.loc[state][month] < (national_average - (2 * df[month].std())): outlier_states.append(state) print("Standard deviaton: $", int(df[month].std())) print("Outlier states:", outlier_states) ###Output State prices: [109100, 124400, 130500, 131200, 135200, 141300, 142900, 146800, 152400, 156300, 158200, 164600, 164700, 167000, 167200, 169100, 169300, 170700, 177000, 179100, 181100, 187700, 187900, 192700, 200000, 220100, 224000, 230300, 233700, 236700, 238700, 241600, 244000, 246200, 277000, 278700, 281300, 316700, 320000, 331200, 366000, 369300, 378300, 383000, 518200, 640900, 656900] Total price: $ 11239200 National Average: $ 239131 Standard deviaton: $ 121750 Outlier states: ['California', 'Hawaii', 'District of Columbia'] ###Markdown Explore Housing Price DataThe dataset included in this repository is from [Zillow.com](https://www.zillow.com/research/data/) -- it represents the median price of a 3-bedroom house in every state on a monthly basis since 1996. We're going to use it to practice getting data from the file, compute averages, look for outliers, and explore.First, we import the data and inspect it ###Code # This cell imports python modules that make it easier to access the data import pandas as pd import matplotlib.pyplot as plt import numpy as np # This cell imports the data file and stores it in the variable df df = pd.read_csv('../data/State_Zhvi_3bedroom.csv', index_col='RegionName') ###Output _____no_output_____ ###Markdown Inspect by printing the names of the data columns and the whole date frame. You can also inspect this by opening the data file using the file browser to the left. ###Code df.columns df ###Output _____no_output_____ ###Markdown What do we notice?You should see that the file is organized on a monthly basis, with the rows ordered by the population of the state (Wyoming is last, California is first).You should also notice that there are blanks or "NaN" values for North Dakota between 1996 and 2005 (NaN stands for "Not a Number"). Why? I don't know for sure, but this presumably means that North Dakota didn't provide data to Zillow for that time period, or they didn't have it. 
This is going to be an issue because some of the analysis we will want to do will be confused by the missing entries.People who do data analysis for a living say that they end up spending **a large percentage of their time** dealing with issues like this -- cleaning up and preparing data in a useable format, even before they are able to do any actual analysis.In this case, since it's only the one state with any missing data, we're just going to ignore that entire row, using the following: ###Code df.dropna(inplace=True) ###Output _____no_output_____ ###Markdown "dropna" is a command that "drops" the "na" values. the **inplace=True** part does this in a way that stores the result in the same variable as before, so we don't have to make a copy. ###Code months = df.columns[2:] months ###Output _____no_output_____ ###Markdown **months** is now a variable holding all the months in the data table, and you can see that it started in April of 1996 and ended in October of 2018. You can now refer to the data in one of two ways, using the month or the state name: ###Code df.loc['Washington'] df['2018-10'] ###Output _____no_output_____ ###Markdown Or you can use the two methods together to access a specific state in a specific month: ###Code print('Apr 1996:', df.loc['Washington']['1996-04']) print('Oct 2018:', df.loc['Washington']['2018-10']) ###Output _____no_output_____ ###Markdown First taskYour first task is to calculate the average nationwide price in October of 2018 using the **accumulator** pattern we used for the rainfall problem, which means you need to **initialize a variable** to hold the total of all the home prices during October 2018, then use a **loop** to accumulate the values, and divide by the number of data points to get the average.The cell below is setup with a loop that will access each state in turn -- right now it prints out the state name. You will need to add some code before the loop to initialize the variables, and some code in the loop to access the home price for every state during the month of October 2018 and add it to the running total: ###Code # Add some lines before the loop to initialize the accumulator variables for state in df.index: # add some lines in the loop to access the price during October 2018 # and add it to the running total print(state) #add some lines after the loop to compute the average and print it ###Output _____no_output_____
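###Markdown For reference, below is a minimal sketch of the accumulator pattern described above. It simply mirrors the average calculation shown earlier in this notebook, so try your own version before reading it. ###Code
# Sketch: accumulator pattern for the October 2018 nationwide average
total_price = 0
for state in df.index:
    # add this state's October 2018 price to the running total
    total_price = total_price + df.loc[state]['2018-10']

average = round(total_price / len(df.index), 2)
print('Average:', '$', average)
###Output _____no_output_____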
notebooks/challenge.ipynb
###Markdown Workshop challenge Package installing and data import ###Code # standard library imports import os import sys from collections import Counter # pandas, seaborn etc. import seaborn as sns import sklearn import matplotlib.pyplot as plt %matplotlib inline import pandas as pd import numpy as np # sklearn outlier models from sklearn.neighbors import NearestNeighbors # from sklearn.neighbors import LocalOutlierFactor # from sklearn.ensemble import IsolationForest from sklearn.cluster import DBSCAN from sklearn.mixture import GaussianMixture # other sklearn functions from sklearn.decomposition import PCA from sklearn.covariance import MinCovDet, EmpiricalCovariance from sklearn.metrics import roc_auc_score from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import scale as preproc_scale from sklearn.manifold import TSNE # pyod import pyod from pyod.models.auto_encoder import AutoEncoder from pyod.models.knn import KNN from pyod.models.lof import LOF # from pyod.models.pca import PCA as pyod_PCA from pyod.models.iforest import IForest sys.path.append("..") #to enable importing from outlierutils from outlierutils import plot_top_N, plot_outlier_scores, LabelSubmitter url = "https://unsupervised-label-api-pg.herokuapp.com/" # Link to the API ###Output _____no_output_____ ###Markdown Data Imports ###Code data_path = '../data' x_kdd = pd.read_pickle(os.path.join(data_path, 'x_kdd.pkl')) x_kdd = x_kdd.drop_duplicates() if x_kdd.index.max() > len(x_kdd): x_kdd = x_kdd.reset_index() print(f'Data set size: {x_kdd.shape}') ###Output _____no_output_____ ###Markdown Challenge DescriptionYou just imported a data set, `x_kdd`, with 48K rows. The dataset was collected by by MIT Lincoln Labs in 1999, by operating a LAN-network as usual, and additionally carrying out various attacks. This specific dataset (which is a subset of the original dataset) has "normal" traffic as inlier class, and several attacks (buffer_overflow, ftp_write, imap, ...) as outlier class. Although this data does not represent payment fraud, it is relevant because of the mixed data type. There are no labels available, there is therefore also no split in train and test. The target is to predict as many true positives as possible (each positive gets you a positive score), and as few false positives as possible (each false positive subtracts a small score). So only submit points that may likely be positives!!Be selective, just submitting all points, or random points, will not get you a good score :)- Each true positive found yields **500** points- Each false positive costs **25** points**Hints**- The fraction of positives is less than 1%. Random guessing to gather labels is therefore unlikely to pay off. - When sufficiently many positive labels are available, this information may be used to further tune unsupervised algorithms, or to train a supervised classifier First clean up the data: convert categorical columns to one-hot encoded, and MinMax-scale all features. Do not remove any rows! ###Code # clean-up code here ###Output _____no_output_____ ###Markdown Outlier detection: your code! ###Code def get_top_N_indices(scores, N=100): """ Helper function. Returns the indices of the points with the top N highest outlier scores """ return np.argsort(scores)[::-1][:N] get_top_N_indices(np.array([5, 4, 3, 2, 1, 0]), N=2) ###Output _____no_output_____ ###Markdown API submissionSubmit your predictions to the API with a LabelSubmitter object. 
This object has a `.post_predictions()` method to submit predictions, and a `.get_labels()` method to retrieve the labels (positives and negatives) of all previous submissions. Use the parameter `endpoint='kdd'` option for this challenge. ###Code username='xxx' password='xxx' if not ('ls' in locals() and ls.jwt_token): #only if no labelsubmitter with .jwt_token is available ls = LabelSubmitter(username=username, password=password, url=url) ls.post_predictions(idx=xxx, endpoint='kdd') # ls.get_labels(endpoint='kdd') ###Output _____no_output_____ ###Markdown Workshop challenge Package installing and data import ###Code # standard library imports import os import sys from collections import Counter # pandas, seaborn etc. import seaborn as sns import sklearn import matplotlib.pyplot as plt %matplotlib inline import pandas as pd import numpy as np # sklearn outlier models from sklearn.neighbors import NearestNeighbors # from sklearn.neighbors import LocalOutlierFactor # from sklearn.ensemble import IsolationForest from sklearn.cluster import DBSCAN from sklearn.mixture import GaussianMixture # other sklearn functions from sklearn.decomposition import PCA from sklearn.covariance import MinCovDet, EmpiricalCovariance from sklearn.metrics import roc_auc_score from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import scale as preproc_scale from sklearn.manifold import TSNE # pyod import pyod from pyod.models.auto_encoder import AutoEncoder from pyod.models.knn import KNN from pyod.models.lof import LOF from pyod.models.pca import PCA from pyod.models.iforest import IForest sys.path.append("..") #to enable importing from outlierutils from outlierutils import plot_top_N, plot_outlier_scores, LabelSubmitter url = "https://unsupervised-label-api-pg.herokuapp.com/" # Link to the API ###Output _____no_output_____ ###Markdown Data Imports ###Code data_path = '../data' x_kdd = pd.read_pickle(os.path.join(data_path, 'x_kdd.pkl')) x_kdd = x_kdd.drop_duplicates() print(f'Data set size: {x_kdd.shape}') ###Output Data set size: (48113, 41) ###Markdown Challenge DescriptionYou just imported a data set, `x_kdd`, with 48K rows. The dataset was collected by by MIT Lincoln Labs in 1999, by operating a LAN-network as usual, and additionally carrying out various attacks. This specific dataset (which is a subset of the original dataset) has "normal" traffic as inlier class, and several attacks (buffer_overflow, ftp_write, imap, ...) as outlier class. Although this data does not represent payment fraud, it is relevant because of the mixed data type. There are no labels available, there is therefore also no split in train and test. The target is to predict as many true positives as possible (each positive gets you a positive score), and as few false positives as possible (each false positive subtracts a small score). So only submit points that may likely be positives!!Be selective, just submitting all points, or random points, will not get you a good score :)- Each true positive found yields **500** points- Each false positive costs **10** points**Hints**- The fraction of positives is less than 1%. Random guessing to gather labels is therefore unlikely to pay off. - As a start, try out several unsupervised algorithms, and see how they perform against eachother. 
Two strongly correlating predictions are likely to be both good - When sufficiently many positive labels are available, this information may be used to further tune unsupervised algorithms, or to train a supervised classifier First clean up the data: convert categorical columns to one-hot encoded, and MinMax-scale all features. Do not remove any rows! ###Code # clean-up code here ###Output _____no_output_____ ###Markdown Outlier detection: your code! ###Code def get_top_N_indices(scores, N=100): """ Helper function. Returns the indices of the points with the top N highest outlier scores """ return np.argsort(scores)[::-1][:N] get_top_N_indices(np.array([5, 4, 3, 2, 1, 0]), N=2) ###Output _____no_output_____ ###Markdown API submissionSubmit your predictions to the API with a LabelSubmitter object. This object has a `.post_predictions()` method to submit predictions, and a `.get_labels()` method to retrieve the labels (positives and negatives) of all previous submissions. Use the parameter `endpoint='kdd'` option for this challenge. ###Code username='xxx' password='xxx' if not ('ls' in locals() and ls.jwt_token): #only if no labelsubmitter with .jwt_token is available ls = LabelSubmitter(username=username, password=password, url=url) # ls.post_predictions(idx=predictions, endpoint='kdd') # ls.get_labels(endpoint='kdd') ###Output _____no_output_____ ###Markdown Workshop challenge Package installing and data import ###Code # load the required files... if 'google.colab' in str(get_ipython()): print('Running on CoLab, need to get data and install libraries..') data_path = './' !curl -O https://raw.githubusercontent.com/DonErnesto/amld2021-unsupervised/master/notebooks/outlierutils.py !curl -O https://raw.githubusercontent.com/DonErnesto/amld2021-unsupervised/master/data/x_kdd.csv !curl -O https://raw.githubusercontent.com/DonErnesto/amld2021-unsupervised/master/data/x_kdd_prepared.csv !pip install --upgrade pyod else: print('Not running on CoLab, data and libraries are already present') data_path = '../data' # standard library imports import os import sys from collections import Counter import getpass # pandas, seaborn etc. import seaborn as sns import sklearn import matplotlib.pyplot as plt %matplotlib inline import pandas as pd import numpy as np # sklearn outlier models from sklearn.neighbors import NearestNeighbors from sklearn.ensemble import IsolationForest from sklearn.cluster import DBSCAN from sklearn.mixture import GaussianMixture # other sklearn functions from sklearn.decomposition import PCA from sklearn.covariance import MinCovDet, EmpiricalCovariance from sklearn.metrics import roc_auc_score from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import scale as preproc_scale from sklearn.manifold import TSNE # pyod import pyod from pyod.models.auto_encoder import AutoEncoder from pyod.models.knn import KNN from pyod.models.lof import LOF # from pyod.models.pca import PCA as pyod_PCA from pyod.models.iforest import IForest from outlierutils import plot_top_N, plot_outlier_scores, LabelSubmitter, API_URL ###Output _____no_output_____ ###Markdown Dataset Import ###Code dataset_path = 'x_kdd_prepared.csv' x = pd.read_csv(os.path.join(data_path, dataset_path)) print(f'Data set size: {x.shape}') ###Output _____no_output_____ ###Markdown Challenge DescriptionYou have just imported a data set, `x_kdd`, with 48K rows. 
The dataset was collected by by MIT Lincoln Labs in 1999, by operating a LAN-network as usual and additionally carrying out various attacks. This specific dataset (which is a subset of the original dataset) has "normal" traffic as inlier class, and several attacks (buffer_overflow, ftp_write, imap, ...) as outlier class. Although this data does not represent payment fraud, it is relevant because of the mixed data type.The goal of the challenge is for you to tell which rows are the outliers, i.e. which rows correspond to network attacks.There are no labels available. The target is to predict as many true positives as possible and as few false positives as possible, with the following weights:- Each true positive reported yields **500** points- Each false positive reported costs **25** pointsYou submit your prediction (the indices of the rows that you think are outliers) to a server by means of of some code discussed below. The server will provide feedback: it will tell you which rows are actually outliers and which ones are not.**Hints**- proceed iteratively! Submit a few points, learn based on the feedback of the server, then submit a few more points, etc..- only submit points that you think are positives!! Just submitting all points, or random points, will not get you a good score :)- the fraction of positives is less than 1%. Random guessing to gather labels is therefore unlikely to pay off. - given the limited time available for the workshop, we have already cleaned the data for you. If you rather do the cleaning yourself, set `dataset_path = 'x_kdd.csv'` in the cell above Your Outlier detection code ###Code # Your code goes here!! # by using one or more than one method, you will estimate a vector of scores, like this from pyod.models.iforest import IForest ifo = IForest(n_estimators=1000, max_samples=1024, random_state=1, contamination=0.01, behaviour='new') ifo.fit(x) # get the outlier scores of the data scores_ifo = ifo.decision_scores_ # raw outlier scores plt.plot(sorted(scores_ifo)[::-1]) ###Output _____no_output_____ ###Markdown Example of a submission processGiven the `scores` array, you may want to submit for example the indices of the N points that have the highest score. You can use this helper function to calculate these indices: ###Code def get_top_N_indices(scores, N=100): """ Helper function. Returns the indices of the points with the top N highest outlier scores """ return np.argsort(scores)[::-1][:N] indices_submission = get_top_N_indices(scores_ifo, N=40) indices_submission ###Output _____no_output_____ ###Markdown For the example `score` vector `scores_example = np.array([5.23, 4.12, 1.45, 7.23, 19.2, 2.23])`, the N=2 highest scoring points are at indices 4 and 3 in the original table, captured in the `indices_submission` vector above API submissionSubmit your predictions to the API with the `LabelSubmitter` class. This class has two useful methods:- with `.post_predictions()` you submit the indices of the estimated outliers. Submitting more than once the same index has no additional effect on your score - with `.get_labels()` you retrieve the label (1 for outliers and 0 for inliers) of all previously submitted indices ###Code username='your_user_name' password = getpass.getpass() if not ('server' in locals() and server.jwt_token): #only if no labelsubmitter with .jwt_token is available server = LabelSubmitter(username=username, password=password, url=API_URL) ###Output _____no_output_____ ###Markdown Use the parameter `endpoint='kdd'` option for this challenge. 
###Code server.post_predictions(idx=indices_submission, endpoint='kdd') labels = server.get_labels(endpoint='kdd') labels ###Output _____no_output_____ ###Markdown Challenge Mácio Matheus Santos de ArrudaObjetivo: previsão do IPCA (Índice Nacional de Preços ao Consumidor Amplo) do próximo mês.Prêmio: a metodologia que obtiver o menor MAE para as previsões de Jan/2017 a Set/2018 (base de dados de teste) ganhará um livro de Inteligência Artificial! Rode o seu método 30 vezes e obtenha a média e o desvio-padrão dos resultados. Dependências ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt from pandas.plotting import lag_plot import warnings from sklearn.metrics import mean_absolute_error warnings.filterwarnings('ignore') %matplotlib inline series = pd.read_csv('../dataset/dataset-desafio.csv', index_col='Month', header=0) dataframe = pd.read_csv('../dataset/dataset-desafio.csv', usecols=[0,1], engine='python', skipfooter=3) series.plot() series.plot.hist() lag_plot(series) ###Output _____no_output_____ ###Markdown LSTM for IPCA forecast (Experiment executed 30 x)![title](https://developer.nvidia.com/sites/default/files/pictures/2018/lstm.png) ###Code def train_lstm(dataframe): # LSTM for IPCA Problem import numpy import math from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error # create dataset matrix def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return numpy.array(dataX), numpy.array(dataY) dataframe.sort_values(by='Month', ascending=True, inplace=True) dataset = dataframe['indice'].to_frame() dataset = dataset.astype('float32') # normalize scaler = MinMaxScaler(feature_range=(0, 1)) dataset = scaler.fit_transform(dataset) # Split train / test(Jan/2017 a Set/2018) train_size = 156 test_size = len(dataset) - train_size train = dataset[0:train_size] test = dataset[train_size:len(dataset)] # reshape into X=t and Y=t+1 look_back = 3 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) # reshape input to be [samples, time steps, features] trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1])) # create and fit the LSTM network model = Sequential() model.add(LSTM(4, input_shape=(1, look_back))) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='sgd') model.fit(trainX, trainY, epochs=200, batch_size=1, verbose=2) model.summary() # make predictions train_predict = model.predict(trainX) test_predict = model.predict(testX) # invert predictions train_predict = scaler.inverse_transform(train_predict) trainY = scaler.inverse_transform([trainY]) test_predict = scaler.inverse_transform(test_predict) testY = scaler.inverse_transform([testY]) # calculate root mean squared error train_score = math.sqrt(mean_squared_error(trainY[0], train_predict[:,0])) print('Train RMSE: %.2f' % (train_score)) test_score = math.sqrt(mean_squared_error(testY[0], test_predict[:,0])) print('Test RMSE: %.2f' % (test_score)) train_score = mean_absolute_error(trainY[0], train_predict[:,0]) print('Train MAE: %.2f' % (train_score)) test_score = mean_absolute_error(testY[0], test_predict[:,0]) print('Test MAE: %.2f' % (test_score)) # shift train predictions for plotting 
train_predict_plot = numpy.empty_like(dataset) train_predict_plot[:, :] = numpy.nan train_predict_plot[look_back:len(train_predict) + look_back, :] = train_predict # shift test predictions for plotting test_predict_plot = numpy.empty_like(dataset) test_predict_plot[:, :] = numpy.nan test_predict_plot[len(train_predict) + (look_back*2)+1:len(dataset)-1, :] = test_predict baseline = scaler.inverse_transform(dataset) mae_train = mean_absolute_error(trainY[0], train_predict[:,0]) mae_test = mean_absolute_error(testY[0], test_predict[:,0]) return baseline, train_predict_plot, test_predict_plot, mae_train, mae_test experiments = [] for i in range(30): print(f'Start experiment {(i + 1)}') experiments.append(train_lstm(dataframe)) print(f'End experiment {(i + 1)}') ###Output Start experiment 1 ###Markdown Result metrics ###Code test_values_mae = [i[4] for i in experiments] #variancia v = np.var(test_values_mae) #desvio padrão d = np.sqrt(v) #media m = np.mean(test_values_mae) print(f'Mean of MAE: {m}') print(f'Standard deviation: {d}') print(f'Variance: {v}') ###Output Mean of MAE: 0.21505316556209608 Standard deviation: 0.011281130460127181 Variance: 0.0001272639044584093 ###Markdown Plot experiment 1 ###Code plt.plot(experiments[0][0], color='blue', label='baseline') plt.plot(experiments[0][1], color='red', label='train predict') plt.plot(experiments[0][2], color='green', label='test predict') plt.legend(loc='upper rigth') plt.show() ###Output _____no_output_____ ###Markdown RandomForest Regressor ###Code def train_rfregressor(dataframe): # Random Forest Regressor for IPCA Problem import numpy import matplotlib.pyplot as plt from pandas import read_csv import math from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error from sklearn.ensemble import RandomForestRegressor from sklearn.datasets import make_regression # create dataset matrix def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return numpy.array(dataX), numpy.array(dataY) dataframe.sort_values(by='Month', ascending=True, inplace=True) dataset = dataframe['indice'].to_frame() dataset = dataset.astype('float32') # normalize scaler = MinMaxScaler(feature_range=(0, 1)) dataset = scaler.fit_transform(dataset) # Split train / test(Jan/2017 a Set/2018) train_size = 156 test_size = len(dataset) - train_size train = dataset[0:train_size] test = dataset[train_size:len(dataset)] # reshape into X=t and Y=t+1 look_back = 3 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) # reshape input to be [samples, time steps, features] regr = RandomForestRegressor(max_depth=50, n_estimators=200, criterion='mae') # max_depth=10, random_state=0, n_estimators=100 regr.fit(trainX, trainY) # make predictions train_predict = regr.predict(trainX) test_predict = regr.predict(testX) # calculate root mean squared error train_score = math.sqrt(mean_squared_error(trainY, train_predict)) print('Train RMSE: %.5f' % (train_score)) test_score = math.sqrt(mean_squared_error(testY, test_predict)) print('Test RMSE: %.5f' % (test_score)) train_score = mean_absolute_error(trainY, train_predict) print('Train MAE: %.5f' % (train_score)) test_score = mean_absolute_error(testY, test_predict) print('Test MAE: %.5f' % (test_score)) baseline = scaler.inverse_transform(dataset) return train_score, test_score experiments = [] for i in range(30):#30 print(f'------ 
Start experiment ------ {(i + 1)}') experiments.append(train_rfregressor(dataframe)) print(f'End experiment {(i + 1)}') def train_svm(dataframe): # Random Forest Regressor for IPCA Problem import numpy import matplotlib.pyplot as plt from pandas import read_csv import math from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error from sklearn.svm import SVR # create dataset matrix def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return numpy.array(dataX), numpy.array(dataY) dataframe.sort_values(by='Month', ascending=True, inplace=True) dataset = dataframe['indice'].to_frame() dataset = dataset.astype('float32') # normalize scaler = MinMaxScaler(feature_range=(0, 1)) dataset = scaler.fit_transform(dataset) # Split train / test(Jan/2017 a Set/2018) train_size = 156 test_size = len(dataset) - train_size train = dataset[0:train_size] test = dataset[train_size:len(dataset)] # reshape into X=t and Y=t+1 look_back = 12 trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) # reshape input to be [samples, time steps, features] regr = SVR() # max_depth=10, random_state=0, n_estimators=100 regr.fit(trainX, trainY) # make predictions train_predict = regr.predict(trainX) test_predict = regr.predict(testX) # calculate root mean squared error test_score = math.sqrt(mean_squared_error(testY, test_predict)) print('RMSE: %.5f' % (test_score)) test_score = mean_absolute_error(testY, test_predict) print('Test MAE: %.5f' % (test_score)) baseline = scaler.inverse_transform(dataset) return test_score for i in range(30): print(f'------ Inicio ------ {(i + 1)}') train_svm(dataframe) print(f'Final {(i + 1)}') ###Output ------ Inicio ------ 1 RMSE: 0.22582 Test MAE: 0.22461 Final 1 ------ Inicio ------ 2 RMSE: 0.22582 Test MAE: 0.22461 Final 2 ------ Inicio ------ 3 RMSE: 0.22582 Test MAE: 0.22461 Final 3 ------ Inicio ------ 4 RMSE: 0.22582 Test MAE: 0.22461 Final 4 ------ Inicio ------ 5 RMSE: 0.22582 Test MAE: 0.22461 Final 5 ------ Inicio ------ 6 RMSE: 0.22582 Test MAE: 0.22461 Final 6 ------ Inicio ------ 7 RMSE: 0.22582 Test MAE: 0.22461 Final 7 ------ Inicio ------ 8 RMSE: 0.22582 Test MAE: 0.22461 Final 8 ------ Inicio ------ 9 RMSE: 0.22582 Test MAE: 0.22461 Final 9 ------ Inicio ------ 10 RMSE: 0.22582 Test MAE: 0.22461 Final 10 ------ Inicio ------ 11 RMSE: 0.22582 Test MAE: 0.22461 Final 11 ------ Inicio ------ 12 RMSE: 0.22582 Test MAE: 0.22461 Final 12 ------ Inicio ------ 13 RMSE: 0.22582 Test MAE: 0.22461 Final 13 ------ Inicio ------ 14 RMSE: 0.22582 Test MAE: 0.22461 Final 14 ------ Inicio ------ 15 RMSE: 0.22582 Test MAE: 0.22461 Final 15 ------ Inicio ------ 16 RMSE: 0.22582 Test MAE: 0.22461 Final 16 ------ Inicio ------ 17 RMSE: 0.22582 Test MAE: 0.22461 Final 17 ------ Inicio ------ 18 RMSE: 0.22582 Test MAE: 0.22461 Final 18 ------ Inicio ------ 19 RMSE: 0.22582 Test MAE: 0.22461 Final 19 ------ Inicio ------ 20 RMSE: 0.22582 Test MAE: 0.22461 Final 20 ------ Inicio ------ 21 RMSE: 0.22582 Test MAE: 0.22461 Final 21 ------ Inicio ------ 22 RMSE: 0.22582 Test MAE: 0.22461 Final 22 ------ Inicio ------ 23 RMSE: 0.22582 Test MAE: 0.22461 Final 23 ------ Inicio ------ 24 RMSE: 0.22582 Test MAE: 0.22461 Final 24 ------ Inicio ------ 25 RMSE: 0.22582 Test MAE: 0.22461 Final 25 ------ Inicio ------ 26 RMSE: 0.22582 Test MAE: 0.22461 Final 26 ------ Inicio ------ 27 RMSE: 
0.22582 Test MAE: 0.22461 Final 27 ------ Inicio ------ 28 RMSE: 0.22582 Test MAE: 0.22461 Final 28 ------ Inicio ------ 29 RMSE: 0.22582 Test MAE: 0.22461 Final 29 ------ Inicio ------ 30 RMSE: 0.22582 Test MAE: 0.22461 Final 30 ###Markdown Metrics ###Code
test_values_mae = [i[1] for i in experiments]

# variance
v = np.var(test_values_mae)
# standard deviation
d = np.sqrt(v)
# mean
m = np.mean(test_values_mae)

print(f'Mean of MAE: {m}')
print(f'Standard deviation: {d}')
print(f'Variance: {v}')
###Output Mean of MAE: 0.12421896387683362 Standard deviation: 0.0030893803868903267 Variance: 9.544271174902625e-06
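###Markdown For a side-by-side view of the three approaches, the mean test MAE values reported in the outputs above can be collected in one small table. This is only an illustrative summary (values copied from the printed results and rounded to four decimals), not part of the original 30-run experiments; on these runs the RandomForestRegressor gives the lowest mean MAE. ###Code
import pandas as pd

# Mean test MAE per model, copied from the outputs reported above (illustrative summary)
summary = pd.DataFrame({
    'model': ['LSTM', 'RandomForestRegressor', 'SVR'],
    'mean_test_MAE': [0.2151, 0.1242, 0.2246],
})
print(summary.sort_values('mean_test_MAE'))
###Output _____no_output_____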
stack/create_queue.ipynb
###Markdown Create a Queue ###Code
class Queue(object):
    def __init__(self):
        # Backing store for the queue contents
        self.items = []

    def is_empty(self):
        # True when the queue holds no items
        return self.items == []

    def enqueue(self, item):
        # Add a new item at the front of the list
        self.items.insert(0, item)

    def dequeue(self):
        # Remove and return the oldest item (at the end of the list)
        return self.items.pop()

    def size(self):
        # Number of items currently in the queue
        return len(self.items)

q = Queue()
q.size()
q.is_empty()
q.enqueue(1)
q.enqueue(2)
q.size()
q.dequeue()
q.size()
###Output _____no_output_____
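###Markdown One design note: `list.insert(0, item)` shifts every existing element, so enqueue is O(n). For larger queues, `collections.deque` gives O(1) appends and pops at both ends. Below is a minimal equivalent sketch (the class name `DequeQueue` is just an illustration, not part of the original notebook). ###Code
from collections import deque

class DequeQueue(object):
    """Same interface as Queue above, backed by collections.deque."""
    def __init__(self):
        self.items = deque()

    def is_empty(self):
        return len(self.items) == 0

    def enqueue(self, item):
        # O(1) append on the right
        self.items.append(item)

    def dequeue(self):
        # O(1) pop from the left, preserving FIFO order
        return self.items.popleft()

    def size(self):
        return len(self.items)

q2 = DequeQueue()
q2.enqueue(1)
q2.enqueue(2)
q2.dequeue()   # returns 1
###Output _____no_output_____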
preprocessing/trs_analysis/aux_2_trs_analysis.ipynb
###Markdown IntroductionNotebook to analyse the PyBossa taskruns from the Auxiliary App - Version 2. This app is an app for tests. Load Libraries and Data ###Code from mod_finder_util import mod_finder_util mod_finder_util.add_modules_origin_search_path() import pandas as pd import seaborn as sns import modules.utils.firefox_dataset_p2 as fd taskruns = fd.TaskRuns.read_aux2_taskruns_df() ###Output _____no_output_____ ###Markdown Comparing Answers of Aux App 2 with Answers of Expert App 2 ###Code from mod_finder_util import mod_finder_util mod_finder_util.add_modules_origin_search_path() import modules.utils.firefox_dataset_p2 as fd from sklearn.metrics import cohen_kappa_score bug_ids = [1248267,1270274,1271607,1277937,1278388,1287384,1287687,1287823]#,1289832] features = fd.Datasets.read_features_df() bugreports = fd.Datasets.read_selected_bugreports_df() df_2 = pd.DataFrame(columns=features.feat_name.values, index=bugreports[bugreports.Bug_Number.isin(bug_ids)].Bug_Number) taskruns = taskruns[taskruns.bug_id.isin(bug_ids)] display(taskruns[['bug_id','answers']]) for idx,row in taskruns.iterrows(): ans = row.answers.split(" ") for i in range(len(ans)): feat_name = df_2.columns[i] df_2.at[row.bug_id, feat_name] = int(ans[i]) br_2_feat_expert_matrix = fd.Feat_BR_Oracles.read_feat_br_expert_2_df() br_2_feat_expert_matrix = br_2_feat_expert_matrix[br_2_feat_expert_matrix.index.isin(bug_ids)] br_2_feat_expert_matrix.sort_index(inplace=True) df_2.sort_index(inplace=True) print(set(br_2_feat_expert_matrix.index.values) - set(df_2.index.values)) print(str(br_2_feat_expert_matrix.index.values[0:len(df_2)])) print(str(df_2.index.values)) print(str(br_2_feat_expert_matrix.index.values[0:len(df_2)]) == str(df_2.index.values)) a1,a2 = [],[] for idx,row in br_2_feat_expert_matrix.iterrows(): for col in br_2_feat_expert_matrix.columns: a1.append(df_2.at[idx,col]) a2.append(br_2_feat_expert_matrix.at[idx,col]) print(a1) print(a2) print(cohen_kappa_score(a1, a2, labels=[0,1])) display(br_2_feat_expert_matrix.head(10)) display(df_2.head(10)) assert " ".join([str(val) for val in df_2.iloc[0,:].values]) == "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" assert " ".join([str(val) for val in df_2.iloc[1,:].values]) == "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" assert " ".join([str(val) for val in df_2.iloc[2,:].values]) == "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" assert " ".join([str(val) for val in df_2.iloc[3,:].values]) == "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" assert " ".join([str(val) for val in df_2.iloc[4,:].values]) == "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" assert " ".join([str(val) for val in df_2.iloc[5,:].values]) == "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" assert " ".join([str(val) for val in df_2.iloc[6,:].values]) == "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" assert " ".join([str(val) for val in df_2.iloc[7,:].values]) == "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" assert " ".join([str(val) for val in df_2.iloc[8,:].values]) == "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0" ###Output _____no_output_____
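###Markdown The agreement score used above is Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement between the two sets of answers and p_e is the agreement expected by chance from the label marginals. Below is a small illustrative check of that formula against scikit-learn, using toy binary vectors rather than the task runs above. ###Code
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Toy answer vectors (illustrative only)
a1 = [0, 0, 1, 1, 0, 1]
a2 = [0, 1, 1, 1, 0, 0]

# Observed agreement: fraction of positions where the two raters agree
p_o = np.mean(np.array(a1) == np.array(a2))
# Chance agreement: product of label marginals, summed over the two classes
p_e = sum(np.mean(np.array(a1) == c) * np.mean(np.array(a2) == c) for c in (0, 1))

kappa_manual = (p_o - p_e) / (1 - p_e)
assert np.isclose(kappa_manual, cohen_kappa_score(a1, a2))
print(round(kappa_manual, 4))
###Output _____no_output_____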
Model3_SARIMA_GoogleMobility.ipynb
###Markdown SARIMA Model - Predicting Google Mobility IndexThis notebook demonstrates how I developed the SARIMA model for predicting google mobility index in each county and made a forecast for the year of 2022. Please read my data cleaning notebook for data cleaning, descriptive statistics, and EDA. Contents of this notebook: 1. Dickey-Fuller Test 2. Modeling (Grid Search)3. Model Validation 3. Forecasting ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from statsmodels.tsa.stattools import grangercausalitytests, adfuller from statsmodels.tsa.statespace.sarimax import SARIMAX import statsmodels.api as sm from sklearn.model_selection import TimeSeriesSplit from sklearn.metrics import mean_squared_error import itertools from matplotlib.pylab import rcParams import warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown First, I read the data frame and I change the date to the datetime format and set date and zip code in indexes. ###Code # read data DF = pd.read_csv('Data/mobility.csv') # set date into datetime frame DF.date = pd.to_datetime(DF.date) # set date and countyfips as index DF = DF.set_index(['countyfips', 'date']).sort_index() DF print('Mobility data covers from', DF.index.unique(level='date').min(), 'to' , DF.index.unique(level='date').max()) print('It includes', len(DF.index.unique(level='countyfips')), 'counties in Washington DC metro area.' ) ###Output Mobility data covers from 2020-02-24 00:00:00 to 2022-01-09 00:00:00 It includes 16 counties in Washington DC metro area. ###Markdown Before start analysis, I split my data set into train and test set. I use the 2021-11-01 as a cutoff date. As a consequence, train set has 47 observation and test set has 7 observations. ###Code # use 2021-11-01 as the cutoff point train = DF[DF.index.get_level_values('date') < '2021-11-01'] test = DF[DF.index.get_level_values('date') >= '2021-11-01'] print('Number of observation in train set:', len(train.index.unique(level='date'))) print('Number of observation in test set for each county:', len(test.index.unique(level='date'))) ###Output Number of observation in train set: 616 Number of observation in test set for each county: 70 ###Markdown 1. Dickey-Fuller TestI performed the Dickey-Fuller test for stationarity check of a time series. Below, I conducted the ADFuller test three times. The first test was with an original time series (i.e., no differencing, no moving average). And if a county fails this first test, I take a difference of a time series and redo the Dickey-Fuller test. If failed again, I take the second difference and try the ADFuller test. And any counties which failed the test with a second difference were dropped from this analysis. To run the test throughout all counties in my sample, first, I run the test and store the result in the data frame. And then, I evaluate the p-value and filter the zip code that failed the test. Zip codes that passed the first test are stored in 'diff0' list. Zip codes which need a second test with difference terms, I stored in 'diff1'. 
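###Markdown The loop in the next cell keeps only element `[1]` of the tuple returned by `adfuller`, which is the p-value. As a quick illustrative aside (it assumes the `train` DataFrame defined above and just takes the first available county id), the full return value can be unpacked like this: ###Code
from statsmodels.tsa.stattools import adfuller

# Illustrative: run the test on a single county and unpack the full result
sample_fips = train.index.unique(level='countyfips')[0]
stat, pvalue, usedlag, nobs, crit_values, icbest = adfuller(train.loc[(sample_fips, ), ]['commercial'])

print(f'ADF statistic: {stat:.3f}  p-value: {pvalue:.4f}')
print('Critical values:', crit_values)
# p-value <= 0.05 -> reject the unit-root null hypothesis, i.e. treat the series as stationary
###Output _____no_output_____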
###Code # Check Dickey-Fuller test and report the p-values for each series # To store the test result from each county dtest = pd.DataFrame() df_p = [] Fips = [] fips = train.index.unique(level='countyfips') for x in fips: p_val_1 = adfuller(train.loc[(x, ),]['commercial'])[1] # extract p-value df_p.append(p_val_1) Fips.append(x) dtest['countyfips'] = Fips dtest['Dickey_Fuller_p'] = df_p dtest.head() ###Output _____no_output_____ ###Markdown The cell below filters out the zip codes by p-value. If it's smaller than 0.05, the zipcode is stored in 'diff0'. If it's larger than 0.05, the zipcode is sotred in 'diff1'. ###Code # Create a list of county IDs (conutyfips) which passed the test as well as which did not pass the test. #List of zipcode which pass Dickey-Fuller test without taking difference diff0 = list(dtest[dtest.Dickey_Fuller_p <=0.05].countyfips) # List a zipcode which failed Dickey-Fuller test, thus need to take difference diff1 = list(dtest[dtest.Dickey_Fuller_p > 0.05].countyfips) print(f'{len(diff0)} counties pass. {len(diff1)} counties does not pass the test, so redo the test after taking a difference.' ) ###Output 13 counties pass. 3 counties does not pass the test, so redo the test after taking a difference. ###Markdown For zip codes which failed the first dickey fuller test, I take a difference and redo the test. ###Code # Check Dickey-Fuller test and report the p-values for each county # To store the test result from each county dtest = pd.DataFrame() df_p = [] Fips = [] # now I do test only for the counties which failed the earlier test fips = diff1 for x in diff1: p_val_1 = adfuller(train.diff().dropna().loc[(x, ),]['commercial'])[1] # extract p-value df_p.append(p_val_1) Fips.append(x) dtest['countyfips'] = Fips dtest['Dickey_Fuller_p'] = df_p dtest.head() ###Output _____no_output_____ ###Markdown In below, I filter the zip codes which failed the second DF test. ###Code # Create a list of counties which passed the test as well as which did not pass the test. #List of zipcode which pass Dickey-Fuller test without taking difference diff1_1 = list(dtest[dtest.Dickey_Fuller_p <=0.05].countyfips) # List a zipcode which failed Dickey-Fuller test, thus need to take difference diff2 = list(dtest[dtest.Dickey_Fuller_p > 0.05].countyfips) print(f'{len(diff1_1)} counties passed the test after differencing. {len(diff2)} counties does not pass the test.' ) ###Output 3 counties passed the test after differencing. 0 counties does not pass the test. ###Markdown Now, all counties passed the Dickey-Fuller stationary test with original series or one-time differenced series. Next, I start modeling. 2. Modeling Rationality of using SARIMAX (seasonal ARIMA with exogenous variable)I chose SARIMAX model with the COVID-19 daily new cases as exogenous variable. As shows in EDA section, the google mobility data is clearly influenced by the spread of COVID-19 cases in a region. So I picked SARIMAX model with the COVID-19 daily cases as exogenous variable. Pre-processing to train the model individually for each county I train a model for each county individually. To streamline the modeling process, I create a list of a data frame. Each data frame is for one county. In this way, I can use for loop to run grid search, model validation, and forecasting for allcounties at once. I separate a list of a data frame by how many times I took the difference of a time series. Counties that passed the DF test without differencing, I stored in (diff0). 
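###Markdown That roll-back rests on a simple identity: adding the cumulative sum of the first differences back to the last observed value recovers the original scale. Below is a tiny self-contained check of the idea on a toy series (illustrative only, not the mobility data). ###Code
import numpy as np
import pandas as pd

# Toy series: differencing then cumulative-summing recovers the original values
s = pd.Series([10.0, 12.0, 9.0, 15.0, 14.0])
d = s.diff().dropna()                 # first differences
rebuilt = s.iloc[0] + d.cumsum()      # add the accumulated changes back to the first observed value
assert np.allclose(rebuilt.values, s.values[1:])
print(rebuilt.values)
###Output _____no_output_____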
Counties that took the first difference are in (diff1), and counties that took two times are in (diff2). This grouping helps me later when I roll back the differenced time series. ###Code # Make a list of data frame for each county which takes original scale # To store the data frame for train set, test set, and total periods. train_diff0 = [] test_diff0 = [] all_diff0 = [] # Split the data by county and store in a list for x in diff0: train_diff0.append(pd.DataFrame(train.loc[(x, ),])) test_diff0.append(pd.DataFrame(test.loc[(x, ),])) all_diff0.append(pd.DataFrame(DF.loc[(x, ),])) # Make a list of data frame for each county which takes a difference # To store original data frame (no differencing) train_orig1 = [] test_orig1 = [] all_orig1 = [] # To store differenced data frame train_diff1 = [] test_diff1 = [] all_diff1 = [] for x in diff1_1: train_orig1.append(pd.DataFrame(train.loc[(x, ),])) test_orig1.append(pd.DataFrame(test.loc[(x, ),])) all_orig1.append(pd.DataFrame(DF.loc[(x, ),])) train_diff1.append(pd.DataFrame(train.loc[(x, ),].diff().dropna())) test_diff1.append(pd.DataFrame(test.loc[(x, ),].diff().dropna())) all_diff1.append(pd.DataFrame(DF.loc[(x, ),].diff().dropna())) # rename diff1 = diff1_1 ###Output _____no_output_____ ###Markdown Grid search of SARIMA parameters In a Grid search, I try all possible combination of p,d,q and P,D,Q in below. The seasonal frequency was set to 7, because this is the daily time series. ###Code # Define the p, d and q parameters to take any value between 0 and 2 p = d = q = range(0, 2) # Generate all different combinations of p, q and q triplets pdq = list(itertools.product(p, d, q)) # Generate all different combinations of seasonal p, d and q triplets (use 7 for frequency) pdqs = [(x[0], x[1], x[2], 7) for x in list(itertools.product(p, d, q))] ###Output _____no_output_____ ###Markdown I run grid search with p,d,q and P,D,Q parameters defined above. For the model selection, I use AIC scores.I run the grid search separately for counties in diff0 and diff1. ###Code # Grid search for counties in diff0 list. # Initialize an empty list to store results ans = [] # For counties in diff0 run the following grid search. for df, name in zip(train_diff0, diff0): # Iterate through all the paramaters in pdq with parameters in seasonal pdq (nested loop ) to create a grid for comb in pdq: for combs in pdqs: try: #Fit traindata into SARIMAX from statsmodels mod = SARIMAX(df['commercial'], exog=df['new_case_count'], order=comb, seasonal_order=combs, enforce_stationarity=False, enforce_invertibility=False) #Get the results results = mod.fit() # Store the county name, pdq, PDQs, and AIC ans.append([name, comb, combs, results.aic]) except: continue ans # Store all results to a data frame result_diff0 = pd.DataFrame(ans, columns = ['name','pdq','pdqs','AIC']) ###Output _____no_output_____ ###Markdown In below, I sort the data frame by lowest AIC, and store the best parameters for each county in best_para_diff0. ###Code # Sort by lowest AIC best_para_diff0 = result_diff0.loc[result_diff0.groupby("name")["AIC"].idxmin()] best_para_diff0 ###Output _____no_output_____ ###Markdown The table above report the the optimal set of parameters with the lowest AIC for each county in diff0 list. I do the same grid search for counties in diff1 list. ###Code # Grid search for counties in diff1. # Initialize an empty list to store results ans_1 = [] # Run the grid search for diff1. train_diff1 is one differenced values. 
for df, name in zip(train_diff1, diff1): # Iterate through all the combination of pdq and PDQs for comb in pdq: for combs in pdqs: try: #Fit train data into SARIMAX from statsmodels mod = SARIMAX(df['commercial'], exog=df['new_case_count'], order=comb, seasonal_order=combs, enforce_stationarity=False, enforce_invertibility=False) #Get the results results = mod.fit() # Store the county name, pdq, PDQs, and AIC ans_1.append([name, comb, combs, results.aic]) except: continue # Store all results to a data frame result_diff1 = pd.DataFrame(ans_1, columns = ['name','pdq','pdqs','AIC']) ###Output _____no_output_____ ###Markdown In below, I sort the data frame by lowest AIC, and store the best parameters for each county. ###Code # Sort by lowest AIC best_para_diff1 = result_diff1.loc[result_diff1.groupby("name")["AIC"].idxmin()] best_para_diff1 ###Output _____no_output_____ ###Markdown The table above reports the optimal set of parameters with the lowest AIC for each county in diff1 list.Next, I set the optimal set of parameters on SARIMAX model for each county and validate the model with a test set. 3. Model Validation with test time seriesI check the model's prediction accuracy by comparing the prediction with the test time series. I use Root Mean Squared Error (RMSE) for the accuracy score.In the cell below, I (1) fit the training data into the tuned model, (2) get prediction and confidence intervals and store the results in a data frame. ###Code # Fit and predict function for diff 0 def fitpredict(train, test, zip_df, best_para): # training data (train_diff0-2), list of zipcodes(diff_df0-2), VAR order(order0-2) """ Input of this function: takes train data, test data, list of county ids, and parameters from grid search as inputs. Output of this function: predicted values and confirence intervals in one data frame """ # To store the prediction prediction = [] for train_df, name in zip(train, zip_df): # 1. Fit the training data into the model # Get the optimal parameter from the grid serach results order = list(best_para.loc[best_para.name==name, 'pdq'].values)[0] seasonal_order = list(best_para.loc[best_para.name==name, 'pdqs'].values)[0] # Plug the optimal parameter values into a SARIMAX model sarimax = SARIMAX(train_df['commercial'], exog=train_df['new_case_count'], order=order, seasonal_order=seasonal_order, enforce_stationarity=False, enforce_invertibility=False) # Fit the model and print results output = sarimax.fit() # 2. Predict using the train data # prediction period day = pd.date_range('2021-11-1','2022-01-09', freq='D') # Prediction period start= len(train_df) i = zip_df.index(name) end= len(train_df)+len(test[i])-1 # input of exogenous variables exog_forecast = test[i][['new_case_count']] # plug in data for prediction and get the prediction pred = output.get_prediction(start=start, end=end, dynamic=True, exog= exog_forecast) # Get the confidence intervals for all predictions pred_conf = pred.conf_int() # Store prediction and confirence interval in one data frame to store. df_pred = pd.DataFrame(pred.predicted_mean) #, index=test[i].index df_conf = pd.DataFrame(pred_conf) #, index=test[i].index df_forecast = pd.concat([df_conf,df_pred], axis=1) df_forecast.reset_index(inplace=True) df_forecast.drop(['index'], axis=1, inplace=True) df_forecast.index=day prediction.append(df_forecast ) return prediction # Input train data, test data, zipcode, and bet parameters for each county into fitprediction function defined above. 
predict0 = fitpredict(train_diff0, test_diff0, diff0, best_para_diff0) ###Output _____no_output_____ ###Markdown In below, I do the same thing for counties in diff 1. ###Code def fitpredict_1(train, test, zip_df, best_para): # training data (train_diff0-2), list of zipcodes(diff_df0-2), VAR order(order0-2) """ Input of this function: takes train data, test data, list of county ids, and parameters from grid search as inputs. Output of this function: predicted values and confirence intervals in one data frame """ # To store the prediction prediction = [] for train_df, name in zip(train, zip_df): # 1. Fit the training data into the model # Get the optimal parameter from the grid serach results order = list(best_para.loc[best_para.name==name, 'pdq'].values)[0] seasonal_order = list(best_para.loc[best_para.name==name, 'pdqs'].values)[0] # Plug the optimal parameter values into a SARIMAX model sarimax = SARIMAX(train_df['commercial'], exog=train_df['new_case_count'], order=order, seasonal_order=seasonal_order, enforce_stationarity=False, enforce_invertibility=False) # Fit the model and print results output = sarimax.fit() # 2. Predict using the train data # prediction period day = pd.date_range('2021-11-2','2022-01-09', freq='D') # Prediction period start= len(train_df) #pd.to_datetime('2021-11-01') i = zip_df.index(name) end= len(train_df)+len(test[i])-1 #pd.to_datetime('2022-01-09') # input of exogenous variables exog_forecast = test[i][['new_case_count']] # plug in data for prediction and get the prediction pred = output.get_prediction(start=start, end=end, dynamic=True, exog= exog_forecast) # Get the confidence intervals for all predictions pred_conf = pred.conf_int() # Store prediction and confirence interval in one data frame to store. df_pred = pd.DataFrame(pred.predicted_mean) #, index=test[i].index df_conf = pd.DataFrame(pred_conf) #, index=test[i].index df_forecast = pd.concat([df_conf,df_pred], axis=1) df_forecast.reset_index(inplace=True) df_forecast.drop(['index'], axis=1, inplace=True) df_forecast.index=day prediction.append(df_forecast ) return prediction # Input train data, test data, zipcode, and bet parameters for each county into fitprediction function defined above. predict1 = fitpredict_1(train_diff1, test_diff1, diff1, best_para_diff1) ###Output _____no_output_____ ###Markdown Rolling back differenced time series. Before evaluating the predicted values, I need to bring the differenced data back to its original scale. In the following cell, I create a function to roll back the differenced data. For counties in diff1, I differenced the data one time, so the prediction is one-time difference. To roll back to the original scale, I sum up all differences and add them back to the last observed data of the training set. The below is the function to roll back for the differenced data. ###Code # This function roll back the first order differencing to get the original scale def invert_diff(df_train, df_forecast): """Revert back the first differencing to get the forecast to original scale.""" df_fc = df_forecast.copy() # cumulative sum of all forcast (= total sumn of changes since the last obeserved data) df_fc['pred_cumsum'] = df_fc['predicted_mean'].cumsum() # add a column of the last observed data from the training. Here, this training data should be stored in the original scale. df_fc['ob_last'] = df_train['commercial'].iloc[-1] # add the acumulative change to the last observed data in original scale. 
df_fc['pred_commercial'] = df_fc['ob_last'] + df_fc['pred_cumsum'] return df_fc ###Output _____no_output_____ ###Markdown I store the rolled back prediction in a list. ###Code predict1_rolled = [] for df_train, df_pre, name in zip(train_orig1, predict1, diff1): # apply invert difference function. df_fc = invert_diff(df_train, df_pre) predict1_rolled.append(df_fc) ###Output _____no_output_____ ###Markdown Using the rolled back prediction, I calculate RMSE. I define the RMSE calculation function separately for the first differenced group and no differenced group. RMSE Function ###Code def rmse1(test, predict, zip_df): summary_rmse = pd.DataFrame() RMSE1=[] Zipcode=[] for predict_df, name in zip(predict, zip_df): predict1 = predict_df['pred_commercial'] i = zip_df.index(name) test1 = test[i][:-1]['commercial'] rmse1 = np.sqrt(mean_squared_error(test1, predict1)) Zipcode.append(name) RMSE1.append(rmse1) summary_rmse['Zipcode'] = Zipcode summary_rmse['RMSE'] = RMSE1 return summary_rmse def rmse0(test, predict, zip_df): summary_rmse = pd.DataFrame() RMSE1=[] Zipcode=[] for predict_df, name in zip(predict, zip_df): predict1 = predict_df['predicted_mean'] i = zip_df.index(name) test1 = test[i]['commercial'] rmse1 = np.sqrt(mean_squared_error(test1, predict1)) Zipcode.append(name) RMSE1.append(rmse1) summary_rmse['Zipcode'] = Zipcode summary_rmse['RMSE'] = RMSE1 return summary_rmse ###Output _____no_output_____ ###Markdown Calculate RMSE for each group of counties, diff1 and diff0. ###Code # Calcuate RMSE by using rmse function rmse_df0 = rmse0(test_diff0, predict0, diff0) rmse_df1 = rmse1(test_orig1, predict1_rolled, diff1) # Create one large dataframe wchich store the all RMSE rmse_df = rmse_df0.append(rmse_df1) rmse_df print(f'RMSE varies from {rmse_df.RMSE.min()} to {rmse_df.RMSE.max()}, with mean {rmse_df.RMSE.mean()}.' ) print(f'Since the commercial mobility index ranges from {df.commercial.min()} to {df.commercial.max()}, the prediction is off about 20-25% the size of the index.') ###Output RMSE varies from 162.10033108809597 to 394.7054401875997, with mean 281.80830421577434. Since the commercial mobility index ranges from -701.0 to 608.0, the prediction is off about 20-25% the size of the index. ###Markdown Comparison with Naive model's RMSETo evaluate SARIMA model, I compare with RMSE of naive model. Naive model for the time series is shifting the time series by one period. In below, I define a function to calcuate RMSE of naive model for all zip codes. ###Code # Calculate a prediction by naive model def rmse_naive(data, zip_df): # to store RMSE for all zipcodes rmse_df = pd.DataFrame() rmse_naive = [] Zipcode = [] for df, name in zip(data, zip_df): # Get revenue time series series = df['commercial'] # Naive model prediction naive = series.shift(1) # Calculate RMSE rmse = np.sqrt(mean_squared_error(series[1:], naive.dropna())) # store rmse_naive.append(rmse) Zipcode.append(name) # store results in data frame rmse_df['RMSE_naive'] = rmse_naive rmse_df['Zipcode'] = Zipcode return rmse_df ###Output _____no_output_____ ###Markdown Using the anove function, I calculate the naive model RMSE for all counties, and store the results to the SARIMA's RMSE table. 
###Code # Calculate naive model RMSE rmse_naive0 = rmse_naive(all_diff0, diff0) rmse_naive1 = rmse_naive(all_diff1, diff1) # Create one large dataframe which stores all the naive RMSEs rmse_naive_df = rmse_naive0.append(rmse_naive1) # Merge to SARIMA's rmse table rmse_df_1 = rmse_df.merge(rmse_naive_df, on='Zipcode', how='left') rmse_df_1.head() # Comparison rmse_df_1.describe() ###Output _____no_output_____ ###Markdown From the above table, I see that the mean, median, min, and max RMSE values for the SARIMA model are much higher than the naive model's RMSE. This means the prediction performance of my SARIMA model is worse than the naive prediction. Below, I plot the prediction and the original test data for Washington DC county. ###Code # Plot real vs predicted values along with confidence interval rcParams['figure.figsize'] = 12, 5 # Plot observed values ax = all_diff0[0]['commercial'].plot(label='Observed') # Plot predicted values predict0[0]['predicted_mean'].plot(ax=ax, label='Dynamic Forecast', alpha=0.9) # Plot the range for confidence intervals ax.fill_between(predict0[0].index, predict0[0]['lower commercial'], predict0[0]['upper commercial'], color='g', alpha=0.5) # Set axes labels ax.set_xlabel('Date') ax.set_ylabel('Mobility Index') ax.set_title('Mobility Index in commercial places in Washington DC') plt.legend() ax.set_xlim(['2021-05-01','2022-01-09']) plt.show() ###Output _____no_output_____ ###Markdown The model's forecast and the observed data are close to each other until the middle of December 2021, after which the model's prediction separates from the observed time series. This is when the omicron variant spread and the daily new cases hit a record high. By feeding these record-high daily case counts into the model, the model predicted that the mobility index would drop during January 2022. However, as most people were fully vaccinated, they kept going out to restaurants, bars, and other stores. The model captures too much of the negative relationship between COVID-19 new cases and people's mobility in stores, based on the early-2020 patterns. Yet, by 2022, many people were fully vaccinated and continued to go out. This suggests that, as future work, incorporating vaccination rates would adjust for these changes in people's perception of going out during pandemics. 4. Forecasting I forecast the mobility index for 2022 for all counties. First, I fit the model to the entire sample. Using that model, I make an out-of-sample prediction. Since my model needs an exogenous variable (COVID-19 new cases for the forecasting period), I input hypothetical numbers. I assume that by the end of 2022, the COVID-19 new cases would be 0. Fit the model on the complete dataset First, I create a hypothetical series for the COVID-19 new cases. I assume that the new daily cases will drop incrementally and reach zero at the end of 2022. Below, I create hypothetical numbers for this scenario. Since I have COVID-19 new case data through Jan 15, 2022, I use the observed data from Feb 2021 to Jan 15, 2022. Beyond Jan 15, I feed in the hypothetical data, which is based on the optimistic assumption that the COVID new cases drop incrementally and reach zero by the end of 2022. ###Code # Optimistic scenario = the new COVID cases drop incrementally over the next 360 days and reach zero in a year.
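# The hypothetical exogenous series below is a simple linear ramp: with maxval = the last observed
# daily case count (taken from the first county in each group), day i (i = 0, ..., 359) is assigned
# maxval * (1 - i/360), so the series declines steadily and reaches approximately zero after one year.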
# Calculate for county in diff 0 case2_diff0=[] for i in range(len(diff0)): maxval = all_diff0[0]['new_case_count'][-1] case =[] for i in range(360): if i == 0: case.append(maxval) else: new = maxval - i*(maxval/360) case.append(new) case2_diff0.append(case) # Calculate for county in diff1 case2_diff1=[] for i in range(len(diff1)): maxval = all_diff1[0]['new_case_count'][-1] case =[] for i in range(360): if i == 0: case.append(maxval) else: new = maxval - i*(maxval/360) case.append(new) case2_diff1.append(case) ###Output _____no_output_____ ###Markdown Next, I fit the model on the complete dataset, feed the COVID new case data into the model, and get a forecast for 2022 for each county. The following function is almost identical to the earlier fitpredict_1 function, with minor changes. ###Code def fitforecast(data, zip_df, best_para, newcases): """ Input: complete train data, list of county ids, and exogenous data. Output: predictions """ # To store the predictions prediction = [] for df, name, newcase in zip(data, zip_df, newcases): # 1. Fit the training data into the model # Get the optimal parameter from the grid search results order = list(best_para.loc[best_para.name==name, 'pdq'].values)[0] seasonal_order = list(best_para.loc[best_para.name==name, 'pdqs'].values)[0] # Plug the optimal parameter values into a SARIMAX model sarimax = SARIMAX(df['commercial'], exog=df['new_case_count'], order=order, seasonal_order=seasonal_order, enforce_stationarity=False, enforce_invertibility=False) # Fit the model and print results output = sarimax.fit() # 2. Predict using the train data # prediction period day = pd.date_range('2022-01-10','2023-01-04', freq='D') # Get forecast 360 steps ahead in future pred = output.get_forecast(steps=360, exog=newcase) # dynamic=True, # Get confidence intervals of forecasts pred_conf = pred.conf_int() # Store the prediction and confidence interval in one data frame df_pred = pd.DataFrame(pred.predicted_mean) #, index=test[i].index df_conf = pd.DataFrame(pred_conf) #, index=test[i].index df_forecast = pd.concat([df_conf,df_pred], axis=1) df_forecast.reset_index(inplace=True) df_forecast.drop(['index'], axis=1, inplace=True) df_forecast.index=day prediction.append(df_forecast) return prediction ###Output _____no_output_____ ###Markdown Apply the function and get the predictions. ###Code # Forecast for the optimistic COVID-19 scenario forecast0 = fitforecast(all_diff0, diff0, best_para_diff0, case2_diff0) forecast1 = fitforecast(all_diff1, diff1, best_para_diff1, case2_diff1) ###Output _____no_output_____ ###Markdown Rolling back differenced time series. Using the invert_diff function created earlier, I roll back the differenced time series to its original scale. ###Code # Roll back the forecast forecast1_rolled = [] for df, df_pre, name in zip(all_orig1, forecast1, diff1): # apply the invert_diff function df_fc = invert_diff(df, df_pre) forecast1_rolled.append(df_fc) ###Output _____no_output_____ ###Markdown Forecast for Washington DC county Below shows a rising trend of mobility in 2022. The confidence intervals are very wide.
###Code # Plot future predictions with confidence intervals ax = all_diff0[0]['commercial'].plot(label='observed', figsize=(15, 6)) forecast0[0]['predicted_mean'].plot(ax=ax, label='Forecast') ax.fill_between(forecast0[0].index, forecast0[0]['lower commercial'], forecast0[0]['upper commercial'], color='k', alpha=0.25) ax.set_xlabel('Date') ax.set_ylabel('Mobility Index') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Lastly, I create a data frame that stores the observed and forecast data for all counties in one table. First, I create a data frame from the list that stores the forecasted values for each county: each row is a county fips code, each column is a date, and each value is a predicted mobility index. Second, I create a data frame of the historical data in the same layout (rows are county IDs, countyfips, and columns are dates) and merge the two data frames using countyfips as the key. ###Code # 1. Convert the list of forecasts into a table. # for county in diff1 fc_df1=pd.DataFrame() for df_pre, name in zip(forecast1, diff1): fc_df1[name]=forecast1[diff1.index(name)]['predicted_mean'] fd_merge1 = fc_df1.T # for county in diff0 fc_df=pd.DataFrame() for df_pre, name in zip(forecast0, diff0): fc_df[name]=forecast0[diff0.index(name)]['predicted_mean'] fd_merge = fc_df.T # merge both datasets into one fd_merge2 = fd_merge.append(fd_merge1) # 2. reshape the original dataset DF1 = DF DF2 = DF1.unstack()['commercial'] DF2.reset_index(inplace=True) # 3. merge the forecast df to the original df # reset index before merge fd_merge2.reset_index(inplace=True) DF3 = DF2.merge(fd_merge2, how='left', left_on='countyfips', right_on='index') DF3.head() ###Output _____no_output_____ ###Markdown Finally, I convert the data frame from wide format (one row per county, one column per date) to long format (one row per county-date pair), add the year-over-year change and some date columns, and save it to CSV. A small toy sketch of this reshaping step is included after the save cell below. ###Code # Change countyfips from integer to string DF3.countyfips = DF3.countyfips.astype(str) # set county fips as index DF3.set_index(DF3.countyfips, inplace=True) DF3.drop(['countyfips'], axis=1, inplace=True) # Transform wide format to long format df3 = DF3.reset_index() df3 = pd.melt(df3, id_vars=['countyfips'], var_name = 'date', value_name='mobility') # calculate the year-over-year changes df3['yoy_change'] = df3.groupby(['countyfips'])['mobility'].pct_change(360) # create new columns for post-model analysis df3.date = df3.date.astype('str') df3['year'] = df3.date.str[:4] df3['month'] =df3.date.str[5:7] df3['yearmonth']=df3.date.str[:7] df3['day'] =df3.date.str[8:10] df3['Date'] =df3.date.str[0:10] df3.groupby(['yearmonth', 'countyfips']).mean() ###Output _____no_output_____ ###Markdown Save data ###Code # Save df3.to_csv('Data/mobility_forecast_normal.csv', index=False) ###Output _____no_output_____
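###Markdown As a quick illustration of the reshaping step above (purely hypothetical toy data, not the project data), the sketch below builds a tiny wide table, melts it to long format, and computes a within-county percentage change the same way yoy_change is computed, but with a lag of 1 instead of 360 so the toy frame stays small: ###Code
import pandas as pd

# Hypothetical toy wide table: rows = countyfips, columns = dates
toy_wide = pd.DataFrame({'2022-01-10': [100.0, 50.0], '2022-01-11': [110.0, 45.0]},
                        index=pd.Index(['11001', '36061'], name='countyfips'))

# Wide -> long: one row per (county, date) pair, mirroring the pd.melt call above
toy_long = pd.melt(toy_wide.reset_index(), id_vars=['countyfips'],
                   var_name='date', value_name='mobility')

# Within-county percent change (lag 1 here; the notebook uses 360 for year-over-year)
toy_long['change'] = toy_long.groupby(['countyfips'])['mobility'].pct_change(1)
###Output _____no_output_____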
GRAVEYARD.ipynb
###Markdown GRAVEYARDfor the next part, I want to begin including the weather figures to go along with each week. For this I'm using World Weather Online's Historical Weather API which can be found [here](https://www.worldweatheronline.com/developer/api/docs/historical-weather-api.aspx)The returned data has a lot of interesting information, for the sake of time and clarity, I will look at average temperature, temperature variance, uvindex, humidity, precipitationWEATHERBIT ###Code def grab_weather(weekdf): api_key = '01feb38f5b474c60924192053201703' url = 'https://api.worldweatheronline.com/premium/v1/past-weather.ashx' query = { 'q': str(weekdf['Lat']) + ',' + str(weekdf['Long']), 'date': weekdf.iat[0,5], 'enddate': weekdf.iat[0,6], 'tp': 24, 'format': 'json', 'key': api_key } time.sleep(5) response = rq.get(url, query) display(response) if response.ok: rawjson = response.json() display(rawjson) weatherdata = pd.io.json.json_normalize(rawjson['data']['weather']).reset_index(drop=True) weatherdata = weatherdata.drop(columns=['maxtempC', 'mintempC', 'avgtempC', 'astronomy']) return weatherdata def IROC_data(weekndf): for locations in weekndf['locationID']: w1d_idx = weekndf[weekndf['locationID'] == locations] lindata = tallnewdf[ (tallnewdf['locationID'] == locations) & (tallnewdf['count_type'] == 'cases') & (tallnewdf['Date'] <= pd.Timestamp(w1d_idx.iat[0,6])) & (tallnewdf['Date'] >= pd.Timestamp(w1d_idx.iat[0,5]))].reset_index(drop=True) slope, _, _, _, _ = st.linregress(lindata.index, lindata['count']) weatherdf = grab_weather(weekndf.loc[weekndf['locationID'] == locations]) # display(weatherdf) # break weekdnf.at[w1d_idx.index, 'meantemp'] = weatherdf['avgtempF'].astype(int).mean() weekdnf.at[w1d_idx.index, 'maxtemp'] = weatherdf['maxtempF'].astype(int).max() weekdnf.at[w1d_idx.index, 'mintemp'] = weatherdf['mintempF'].astype(int).min() weekdnf.at[w1d_idx.index, 'difftemp'] = weatherdf['maxtempF'].astype(int).max() - weatherdf['mintempF'].astype(int).min() weekndf.at[w1d_idx.index, 'weekIROC'] = slope return weekndf lowlim = 0 hiilim = 50 IROC1plotdat = week1IROC[ (week1IROC['weekIROC'] > lowlim) & (week1IROC['weekIROC'] < hiilim) ].reset_index(drop=True) IROC2plotdat = week2IROC[ (week2IROC['weekIROC'] > lowlim) & (week2IROC['weekIROC'] < hiilim) ].reset_index(drop=True) IROC3plotdat = week3IROC[ (week3IROC['weekIROC'] > lowlim) & (week3IROC['weekIROC'] < hiilim) ].reset_index(drop=True) IROC4plotdat = week4IROC[ (week4IROC['weekIROC'] > lowlim) & (week4IROC['weekIROC'] < hiilim) ].reset_index(drop=True) IROC5plotdat = week5IROC[ (week5IROC['weekIROC'] > lowlim) & (week5IROC['weekIROC'] < hiilim) ].reset_index(drop=True) IROC6plotdat = week6IROC[ (week6IROC['weekIROC'] > lowlim) & (week6IROC['weekIROC'] < hiilim) ].reset_index(drop=True) IROC7plotdat = week7IROC[ (week7IROC['weekIROC'] > lowlim) & (week7IROC['weekIROC'] < hiilim) ].reset_index(drop=True) IROC1plotdat.hist('weekIROC', bins=12) plt.xlim([lowlim, hiilim]) plt.ylim([0, 90]) plt.show() IROC2plotdat.hist('weekIROC', bins=12) plt.xlim([lowlim, hiilim]) plt.ylim([0, 90]) plt.show() IROC3plotdat.hist('weekIROC', bins=12) plt.xlim([lowlim, hiilim]) plt.ylim([0, 90]) plt.show() IROC4plotdat.hist('weekIROC', bins=12) plt.xlim([lowlim, hiilim]) plt.ylim([0, 90]) plt.show() IROC5plotdat.hist('weekIROC', bins=12) plt.xlim([lowlim, hiilim]) plt.ylim([0, 90]) plt.show() IROC6plotdat.hist('weekIROC', bins=12) plt.xlim([lowlim, hiilim]) plt.ylim([0, 90]) plt.show() IROC7plotdat.hist('weekIROC', bins=12) plt.xlim([lowlim, hiilim]) 
plt.ylim([0, 90]) plt.show() plt.figure(figsize=[15, 10]) sn.lineplot(x='Latbins', y='IROC_cases', data=locationref, label='MinTemp') sn.lineplot(x='maxtemp', y='weekIROC', data=testplotdat, label='MaxTemp') sn.lineplot(x='difftemp', y='weekIROC', data=testplotdat, label='TempDiff') # plt.xlabel('Temperature\nDegrees F') # plt.legend() plt.figure(figsize=[15, 10]) sn.lineplot(x='mintemp', y='weekIROC', data=week1dat, label='MinTemp') sn.lineplot(x='maxtemp', y='weekIROC', data=week1dat, label='MaxTemp') sn.lineplot(x='difftemp', y='weekIROC', data=week1dat, label='TempDiff') plt.xlabel('Temperature\nDegrees F') plt.legend() ###Output _____no_output_____
Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb
###Markdown Time Evolution of Maxwell's Equations in Flat Spacetime and Curvilinear Coordinates Authors: Terrence Pierre Jacques, Zachariah Etienne and Ian Ruchlin This module constructs the evolution equations for Maxwell's equations as symbolic (SymPy) expressions, for an electromagnetic field in vacuum, as defined in [Tutorial-VacuumMaxwell_formulation_Curvilinear](Tutorial-VacuumMaxwell_formulation_Curvilinear.ipynb).**Notebook Status:** Validated **Validation Notes:** All expressions generated here have been validated against the [VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module, as well as the [Maxwell/VacuumMaxwell_Flat_Evol_Cartesian](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Cartesian.py) module when setting the coordinate system to Cartesian. NRPy+ Source Code for this module: [VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) $$\label{top}$$ Table of Contents: 1. [Step 1](step1): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](step2): System II in curvilinear coordinates, using the rescaled quantities $a^i$ and $e^i$1. [Step 3](cart_transform): Convert $A^i$ and $E^i$ to the Cartesian basis1. [Step 4](step4): Code Validation1. [Step 5](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Preliminaries \[Back to [top](top)\]$$\label{step1}$$Set up the needed NRPy+ infrastructure, such as the number of dimensions and finite differencing order. ###Code # Import needed Python modules import NRPy_param_funcs as par # NRPy+: Parameter interface import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support import reference_metric as rfm # NRPy+: Reference metric support import grid as gri # NRPy+: Functions having to do with numerical grids import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends # Set the spatial dimension parameter to 3. par.set_parval_from_str("grid::DIM", 3) DIM = par.parval_from_str("grid::DIM") # Set coordinate system # Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical, # SymTP, SinhSymTP CoordSystem = "Spherical" par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem) # Set reference metric related quantities rfm.reference_metric() ###Output _____no_output_____ ###Markdown Step 2: System II in Curvilinear Coordinates, using the rescaled quantities $a^i$ and $e^i$ \[Back to [top](top)\]$$\label{step2}$$(Following discussion reproduced from [Tutorial-VacuumMaxwell_formulation_Curvilinear](Tutorial-VacuumMaxwell_formulation_Curvilinear.ipynb))Consider an arbitrary vector $\Lambda^i$, with smooth (continuous) Cartesian components $\Lambda^x$, $\Lambda^y$, and $\Lambda^z$. Transforming $\Lambda^i$ to, e.g. spherical coordinates, introduces terms that spoil the smoothness of $\Lambda^i$;$$\Lambda^\phi = \frac{1}{r \sin \theta} \times \left[ \text{smooth part} \right].$$Evolving $\Lambda^\phi$ will introduce instabilities along the $z$-axis.
To avoid this, we instead evolve the _rescaled_ quantity $\lambda^i$, defined by $$\bar{\Lambda}^i = \frac{\lambda^i}{\text{scalefactor}[i]}.$$where we use the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices, and no sums are implied by the repeated indices.Thus, we evolve the smoothed variable $\lambda^i$, via $$\lambda^i = \bar{\Lambda}^i \text{scalefactor}[i].$$Within Nrpy+, ReU[i] = 1/scalefactor[i], giving $$\lambda^i = \frac{\bar{\Lambda}^i}{\text{ReU}[i]}.$$We now define the rescaled quantities $a^i$ and $e^i$ and rewrite our formulation of Maxwell's equations in curvilinear coordinates;\begin{align}a^i &= \frac{A^i}{\text{ReU}[i]},\\ \\e^i &= \frac{E^i}{\text{ReU}[i]},\end{align}Taking a time derivative on both sides,\begin{align}\partial_t a^i &= \frac{\partial_t A^i}{\text{ReU}[i]} = \frac{ -E^i - \hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]} = -e^i - \frac{\hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]},\\ \\\partial_t e^i &= \frac{\partial_t E^i}{\text{ReU}[i]} = \frac{\hat{g}^{ij}\partial_j \Gamma - \hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} A^{i}\right)}{\text{ReU}[i]} = \frac{\hat{g}^{ij}\partial_j \Gamma}{\text{ReU}[i]} - \frac{\hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} a^{i} \text{ReU}[i] \right)}{\text{ReU}[i]}.\end{align}Given that$$\partial_t E^i = {\underbrace {\textstyle \hat{g}^{ij}\partial_j \Gamma}_{\text{Term 1}}} - \hat{\gamma}^{jk} \left({\underbrace {\textstyle A^i_{,kj}}_{\text{Term 2}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}}_{\text{Term 3}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m}_{\text{Term 4}}}\right),$$we can make the following replacements within the above, in terms of NRPy+ code;\begin{align}A^i = \text{AU[i]} &\to \text{aU[i] * rfm.ReU[i]} \\\partial_j A^i = \text{AUdD[i][j]} &\to \text{aU_dD[i][j] * rfm.ReU[i]} +\text{aU[i] * rfm.ReUdD[i][j]} \\\partial_k \partial_j A^i = \text{AUdDD[i][j][k]} &\to \text{aU_dDD[i][j][k] * rfm.ReU[i]} + \text{aU_dDD[i][j] * rfm.ReUdD[i][k]} \\&+ \text{aU_dD[i][k] * rfm.ReUdD[i][j]} +\text{aU[i] * rfm.ReUdDD[i][j][k]}\end{align}The remainder of Maxwell's equations are unchanged;$$\partial_t \Gamma = -\hat{g}^{ij} \left( \partial_i \partial_j \varphi - \hat{\Gamma}^k_{ji} \partial_k \varphi \right),$$$$\partial_t \varphi = -\Gamma,$$subject to constraints\begin{align}\mathcal{G} &\equiv \Gamma - \partial_i A^i + \hat{\Gamma}^i_{ji} A^j &= 0,\\\mathcal{C} &\equiv \partial_i E^i + \hat{\Gamma}^i_{ji} E^j &= 0.\end{align} ###Code # Register gridfunctions that are needed as input. # Declare the rank-1 indexed expressions e^{i}, e^{i}, # and \partial^{i} \psi, that are to be evolved in time. # Derivative variables like these must have an underscore # in them, so the finite difference module can parse # the variable name properly. 
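# Note: the symmetry strings used below follow NRPy+'s indexedexp conventions, where "nosym" assumes no index symmetry, while "sym01" and "sym12" declare symmetry in the first two and last two indices, respectively (e.g., aU_dDD is symmetric in its two derivative indices since partial derivatives commute).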
# e^i eU = ixp.register_gridfunctions_for_single_rank1("EVOL", "eU") # \partial_k ( E^i ) --> rank two tensor eU_dD = ixp.declarerank2("eU_dD", "nosym") # a^i aU = ixp.register_gridfunctions_for_single_rank1("EVOL", "aU") # \partial_k ( a^i ) --> rank two tensor aU_dD = ixp.declarerank2("aU_dD", "nosym") # \partial_k partial_m ( a^i ) --> rank three tensor aU_dDD = ixp.declarerank3("aU_dDD", "sym12") # \psi is a scalar function that is time evolved psi = gri.register_gridfunctions("EVOL", ["psi"]) # \Gamma is a scalar function that is time evolved Gamma = gri.register_gridfunctions("EVOL", ["Gamma"]) # \partial_i \psi psi_dD = ixp.declarerank1("psi_dD") # \partial_i \Gamma Gamma_dD = ixp.declarerank1("Gamma_dD") # partial_i \partial_j \psi psi_dDD = ixp.declarerank2("psi_dDD", "sym01") ghatUU = rfm.ghatUU GammahatUDD = rfm.GammahatUDD GammahatUDDdD = rfm.GammahatUDDdD ReU = rfm.ReU ReUdD = rfm.ReUdD ReUdDD = rfm.ReUdDD ###Output _____no_output_____ ###Markdown $$\partial_t a^i = -e^i - \frac{\hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]},$$ ###Code # \partial_t a^i = -e^i - \frac{\hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]} arhsU = ixp.zerorank1() for i in range(DIM): arhsU[i] -= eU[i] for j in range(DIM): arhsU[i] -= (ghatUU[i][j]*psi_dD[j])/ReU[i] ###Output _____no_output_____ ###Markdown $$\partial_t e^i = \frac{\hat{g}^{ij}\partial_j \Gamma - \hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} A^{i}\right)}{\text{ReU}[i]} = \frac{\hat{g}^{ij}\partial_j \Gamma}{\text{ReU}[i]} - \frac{\hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} a^{i} \text{ReU}[i] \right)}{\text{ReU}[i]}.$$Given that$$\partial_t E^i = {\underbrace {\textstyle \hat{g}^{ij}\partial_j \Gamma}_{\text{Term 1}}} - \hat{\gamma}^{jk} \left({\underbrace {\textstyle A^i_{,kj}}_{\text{Term 2}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}}_{\text{Term 3}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m}_{\text{Term 4}}}\right),$$we can make the following replacements within the above, in terms of NRPy+ code;\begin{align}A^i = \text{AU[i]} &\to \text{aU[i] * rfm.ReU[i]} \\\partial_j A^i = \text{AUdD[i][j]} &\to \text{aU_dD[i][j] * rfm.ReU[i]} +\text{aU[i] * rfm.ReUdD[i][j]} \\\partial_k \partial_j A^i = \text{AUdDD[i][j][k]} &\to \text{aU_dDD[i][j][k] * rfm.ReU[i]} + \text{aU_dD[i][j] * rfm.ReUdD[i][k]} \\&+ \text{aU_dD[i][k] * rfm.ReUdD[i][j]} +\text{aU[i] * rfm.ReUdDD[i][j][k]}\end{align} ###Code # A^i AU = ixp.zerorank1() # \partial_k ( A^i ) --> rank two tensor AU_dD = ixp.zerorank2() # \partial_k partial_m ( A^i ) --> rank three tensor AU_dDD = ixp.zerorank3() for i in range(DIM): AU[i] = aU[i]*ReU[i] for j in range(DIM): AU_dD[i][j] = aU_dD[i][j]*ReU[i] + aU[i]*ReUdD[i][j] for k in range(DIM): AU_dDD[i][j][k] = aU_dDD[i][j][k]*ReU[i] + aU_dD[i][j]*ReUdD[i][k] +\ aU_dD[i][k]*ReUdD[i][j] + aU[i]*ReUdDD[i][j][k] ###Output _____no_output_____ ###Markdown $$\text{Term 1} = \hat{g}^{ij}\partial_j \Gamma$$ ###Code # Term 1 = \hat{g}^{ij}\partial_j \Gamma Term1U = ixp.zerorank1() for i in range(DIM): for j in range(DIM): Term1U[i] += ghatUU[i][j]*Gamma_dD[j] ###Output _____no_output_____ ###Markdown $$\text{Term 2} = A^i_{,kj}$$ ###Code # Term 2: A^i_{,kj} Term2UDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): Term2UDD[i][j][k] += AU_dDD[i][k][j] ###Output _____no_output_____ ###Markdown $$\text{Term 3} = 
\hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj}A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}$$ ###Code # Term 3: \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} # + \hat{\Gamma}^i_{dj}A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d} Term3UDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): for m in range(DIM): Term3UDD[i][j][k] += GammahatUDDdD[i][m][k][j]*AU[m] \ + GammahatUDD[i][m][k]*AU_dD[m][j] \ + GammahatUDD[i][m][j]*AU_dD[m][k] \ - GammahatUDD[m][k][j]*AU_dD[i][m] ###Output _____no_output_____ ###Markdown $$\text{Term 4} = \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m$$ ###Code # Term 4: \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - # \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m Term4UDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): for m in range(DIM): for d in range(DIM): Term4UDD[i][j][k] += ( GammahatUDD[i][d][j]*GammahatUDD[d][m][k] \ -GammahatUDD[d][k][j]*GammahatUDD[i][m][d])*AU[m] ###Output _____no_output_____ ###Markdown Finally, we build up the RHS of $E^i$,$$\partial_t E^i = {\underbrace {\textstyle \hat{g}^{ij}\partial_j \Gamma}_{\text{Term 1}}} - \hat{\gamma}^{jk} \left({\underbrace {\textstyle A^i_{,kj}}_{\text{Term 2}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}}_{\text{Term 3}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m}_{\text{Term 4}}}\right),$$and divide through by ReU[i] to get $e^i$. ###Code # \partial_t E^i = \hat{g}^{ij}\partial_j \Gamma - \hat{\gamma}^{jk}* # (A^i_{,kj} # + \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} # + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d} # + \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m # - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m) ErhsU = ixp.zerorank1() for i in range(DIM): ErhsU[i] += Term1U[i] for j in range(DIM): for k in range(DIM): ErhsU[i] -= ghatUU[j][k]*(Term2UDD[i][j][k] + Term3UDD[i][j][k] + Term4UDD[i][j][k]) erhsU = ixp.zerorank1() for i in range(DIM): erhsU[i] = ErhsU[i]/ReU[i] ###Output _____no_output_____ ###Markdown $$\partial_t \Gamma = -\hat{g}^{ij} \left( \partial_i \partial_j \varphi - \hat{\Gamma}^k_{ji} \partial_k \varphi \right)$$ ###Code # \partial_t \Gamma = -\hat{g}^{ij} (\partial_i \partial_j \varphi - # \hat{\Gamma}^k_{ji} \partial_k \varphi) Gamma_rhs = sp.sympify(0) for i in range(DIM): for j in range(DIM): Gamma_rhs -= ghatUU[i][j]*psi_dDD[i][j] for k in range(DIM): Gamma_rhs += ghatUU[i][j]*GammahatUDD[k][j][i]*psi_dD[k] ###Output _____no_output_____ ###Markdown $$\partial_t \varphi = -\Gamma$$ ###Code # \partial_t \varphi = -\Gamma psi_rhs = -Gamma ###Output _____no_output_____ ###Markdown Constraints:\begin{align}\mathcal{G} &\equiv \Gamma - \partial_i A^i + \hat{\Gamma}^i_{ji} A^j, \\\mathcal{C} &\equiv \partial_i E^i + \hat{\Gamma}^i_{ji} E^j.\end{align} ###Code # \mathcal{G} \equiv \Gamma - \partial_i A^i + \hat{\Gamma}^i_{ji} A^j G = Gamma for i in range(DIM): G -= AU_dD[i][i] for j in range(DIM): G += GammahatUDD[i][j][i]*AU[j] # E^i EU = ixp.zerorank1() # \partial_k ( A^i ) --> rank two tensor EU_dD = ixp.zerorank2() for i in range(DIM): EU[i] = eU[i]*ReU[i] for j in range(DIM): EU_dD[i][j] = eU_dD[i][j]*ReU[i] + eU[i]*ReUdD[i][j] C = sp.sympify(0) for i in range(DIM): C += EU_dD[i][i] for j in range(DIM): C += GammahatUDD[i][j][i]*EU[j] ###Output 
_____no_output_____ ###Markdown Step 3: Convert $A^i$ and $E^i$ to the Cartesian basis \[Back to [top](top)\]$$\label{cart_transform}$$Here we convert $A^i$ and $E^i$ to the Cartesian basis, to make convergence tests within [Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear](Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear.ipynb) easier. Specifically, we will use the coordinate transformation definitions provided by [reference_metric.py](../edit/reference_metric.py) to build the Jacobian:\begin{align} \frac{\partial x_{\rm Cart}^i}{\partial x_{\rm Orig}^j},\end{align}where $x_{\rm Cart}^i \in \{x,y,z\}$. We then apply it to $A^i$ and $E^i$ to transform into Cartesian coordinates, via\begin{align}A^i_{\rm Cart} = \frac{\partial x_{\rm Cart}^i}{\partial x_{\rm Orig}^j} A^j_{\rm Orig}.\end{align} ###Code def Convert_to_Cartesian_basis(VU): # Coordinate transformation from original basis to Cartesian rfm.reference_metric() VU_Cart = ixp.zerorank1() Jac_dxCartU_dxOrigD = ixp.zerorank2() for i in range(DIM): for j in range(DIM): Jac_dxCartU_dxOrigD[i][j] = sp.diff(rfm.xx_to_Cart[i], rfm.xx[j]) for i in range(DIM): for j in range(DIM): VU_Cart[i] += Jac_dxCartU_dxOrigD[i][j]*VU[j] return VU_Cart AU_Cart = Convert_to_Cartesian_basis(AU) EU_Cart = Convert_to_Cartesian_basis(EU) ###Output _____no_output_____ ###Markdown Step 4: NRPy+ Module Code Validation \[Back to [top](top)\]$$\label{step4}$$Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of Maxwell's equations between1. this tutorial and 2. the NRPy+ [VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module. ###Code # Reset the list of gridfunctions, as registering a gridfunction # twice will spawn an error. gri.glb_gridfcs_list = [] # Call the VacuumMaxwellRHSs_rescaled() function from within the # Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py module, # which should do exactly the same as the above. # Set which system to use, which are defined in Maxwell/InitialData.py par.initialize_param(par.glb_param("char","Maxwell.InitialData","System_to_use","System_II")) import Maxwell.VacuumMaxwell_Flat_Evol_Curvilinear_rescaled as mwevol mwevol.VacuumMaxwellRHSs_rescaled() print("Consistency check between Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling tutorial and NRPy+ module: ALL SHOULD BE ZERO.") print("C - mwevol.C = " + str(C - mwevol.C)) print("G - mwevol.G = " + str(G - mwevol.G)) print("psi_rhs - mwevol.psi_rhs = " + str(psi_rhs - mwevol.psi_rhs)) print("Gamma_rhs - mwevol.Gamma_rhs = " + str(Gamma_rhs - mwevol.Gamma_rhs)) for i in range(DIM): print("arhsU["+str(i)+"] - mwevol.arhsU["+str(i)+"] = " + str(arhsU[i] - mwevol.arhsU[i])) print("erhsU["+str(i)+"] - mwevol.erhsU["+str(i)+"] = " + str(erhsU[i] - mwevol.erhsU[i])) print("AU_Cart["+str(i)+"] - mwevol.AU_Cart["+str(i)+"] = " + str(AU_Cart[i] - mwevol.AU_Cart[i])) print("EU_Cart["+str(i)+"] - mwevol.EU_Cart["+str(i)+"] = " + str(EU_Cart[i] - mwevol.EU_Cart[i])) ###Output Consistency check between Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling tutorial and NRPy+ module: ALL SHOULD BE ZERO. 
C - mwevol.C = 0 G - mwevol.G = 0 psi_rhs - mwevol.psi_rhs = 0 Gamma_rhs - mwevol.Gamma_rhs = 0 arhsU[0] - mwevol.arhsU[0] = 0 erhsU[0] - mwevol.erhsU[0] = 0 AU_Cart[0] - mwevol.AU_Cart[0] = 0 EU_Cart[0] - mwevol.EU_Cart[0] = 0 arhsU[1] - mwevol.arhsU[1] = 0 erhsU[1] - mwevol.erhsU[1] = 0 AU_Cart[1] - mwevol.AU_Cart[1] = 0 EU_Cart[1] - mwevol.EU_Cart[1] = 0 arhsU[2] - mwevol.arhsU[2] = 0 erhsU[2] - mwevol.erhsU[2] = 0 AU_Cart[2] - mwevol.AU_Cart[2] = 0 EU_Cart[2] - mwevol.EU_Cart[2] = 0 ###Markdown Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](top)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.pdf](Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) ###Code import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling") ###Output [NbConvertApp] WARNING | pattern 'Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.ipynb' matched no files Created Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.tex, and compiled LaTeX file to PDF file Tutorial-VacuumMaxwell_Curvilinear_RHS- Rescaling.pdf ###Markdown Time Evolution of Maxwell's Equations in Flat Spacetime and Curvilinear Coordinates Authors: Terrence Pierre Jacques, Zachariah Etienne and Ian Ruchlin This module constructs the evolution equations for Maxwell's equations as symbolic (SymPy) expressions, for an electromagnetic field in vacuum, as defined in [Tutorial-VacuumMaxwell_formulation_Curvilinear](Tutorial-VacuumMaxwell_formulation_Curvilinear.ipynb).**Notebook Status:** Validated **Validation Notes:** All expressions generated in this here have been validated against the [VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module, as well as the [Maxwell/VacuumMaxwell_Flat_Evol_Cartesian](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Cartesian.py) module when setting the coordinate system to Cartesian. NRPy+ Source Code for this module: [VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) $$\label{top}$$ Table of Contents: 1. [Step 1](step1): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](step2): System II in curvilinear coordinates, using the rescaled quantities $a^i$ and $e^i$1. [Step 3](cart_transform): Convert $A^i$ and $E^i$ to the Cartesian basis1. [Step 4](step4): Code Validation1. [Step 5](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Preliminaries \[Back to [top](top)\]$$\label{step1}$$Set up the needed NRPy+ infrastructure, such the number of dimensions and finite differencing order. ###Code # Import needed Python modules import NRPy_param_funcs as par # NRPy+: Parameter interface import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support import reference_metric as rfm # NRPy+: Reference metric support import grid as gri # NRPy+: Functions having to do with numerical grids import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends # Set the spatial dimension parameter to 3. 
par.set_parval_from_str("grid::DIM", 3) DIM = par.parval_from_str("grid::DIM") # Set coordinate system # Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical, # SymTP, SinhSymTP CoordSystem = "Spherical" par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem) # Set reference metric related quantities rfm.reference_metric() ###Output _____no_output_____ ###Markdown Step 2: System II in Curvilinear Coordinates, using the rescaled quantities $a^i$ and $e^i$ \[Back to [top](top)\]$$\label{step2}$$(Following discussion reproduced from [Tutorial-VacuumMaxwell_formulation_Curvilinear](Tutorial-VacuumMaxwell_formulation_Curvilinear.ipynb))Consider an arbitrary vector $\Lambda^i$, with smooth (continous) Cartesian components $\Lambda^x$, $\Lambda^y$, and $\Lambda^z$. Transforming $\Lambda^i$ to, e.g. spherical coordinates, introduces terms that spoil the smoothness of $\Lambda^i$;$$\Lambda^\phi = \frac{1}{r \sin \theta} \times \left[ \text{smooth part} \right].$$Evolving $\Lambda^\phi$ will introduce instabilities along the $z$-axis. To avoid this, we instead evolve the _rescaled_ quantity $\lambda^i$, defined by $$\bar{\Lambda}^i = \frac{\lambda^i}{\text{scalefactor}[i]}.$$where we use the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices, and no sums are implied by the repeated indices.Thus, we evolve the smoothed variable $\lambda^i$, via $$\lambda^i = \bar{\Lambda}^i \text{scalefactor}[i].$$Within Nrpy+, ReU[i] = 1/scalefactor[i], giving $$\lambda^i = \frac{\bar{\Lambda}^i}{\text{ReU}[i]}.$$We now define the rescaled quantities $a^i$ and $e^i$ and rewrite our formulation of Maxwell's equations in curvilinear coordinates;\begin{align}a^i &= \frac{A^i}{\text{ReU}[i]},\\ \\e^i &= \frac{E^i}{\text{ReU}[i]},\end{align}Taking a time derivative on both sides,\begin{align}\partial_t a^i &= \frac{\partial_t A^i}{\text{ReU}[i]} = \frac{ -E^i - \hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]} = -e^i - \frac{\hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]},\\ \\\partial_t e^i &= \frac{\partial_t E^i}{\text{ReU}[i]} = \frac{\hat{g}^{ij}\partial_j \Gamma - \hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} A^{i}\right)}{\text{ReU}[i]} = \frac{\hat{g}^{ij}\partial_j \Gamma}{\text{ReU}[i]} - \frac{\hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} a^{i} \text{ReU}[i] \right)}{\text{ReU}[i]}.\end{align}Given that$$\partial_t E^i = {\underbrace {\textstyle \hat{g}^{ij}\partial_j \Gamma}_{\text{Term 1}}} - \hat{\gamma}^{jk} \left({\underbrace {\textstyle A^i_{,kj}}_{\text{Term 2}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}}_{\text{Term 3}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m}_{\text{Term 4}}}\right),$$we can make the following replacements within the above, in terms of NRPy+ code;\begin{align}A^i = \text{AU[i]} &\to \text{aU[i] * rfm.ReU[i]} \\\partial_j A^i = \text{AUdD[i][j]} &\to \text{aU_dD[i][j] * rfm.ReU[i]} +\text{aU[i] * rfm.ReUdD[i][j]} \\\partial_k \partial_j A^i = \text{AUdDD[i][j][k]} &\to \text{aU_dDD[i][j][k] * rfm.ReU[i]} + \text{aU_dDD[i][j] * rfm.ReUdD[i][k]} \\&+ \text{aU_dD[i][k] * rfm.ReUdD[i][j]} +\text{aU[i] * rfm.ReUdDD[i][j][k]}\end{align}The remainder of Maxwell's equations are unchanged;$$\partial_t \Gamma = -\hat{g}^{ij} \left( \partial_i \partial_j \varphi - \hat{\Gamma}^k_{ji} \partial_k 
\varphi \right),$$$$\partial_t \varphi = -\Gamma,$$subject to constraints\begin{align}\mathcal{G} &\equiv \Gamma - \partial_i A^i + \hat{\Gamma}^i_{ji} A^j &= 0,\\\mathcal{C} &\equiv \partial_i E^i + \hat{\Gamma}^i_{ji} E^j &= 0.\end{align} ###Code # Register gridfunctions that are needed as input. # Declare the rank-1 indexed expressions e^{i}, e^{i}, # and \partial^{i} \psi, that are to be evolved in time. # Derivative variables like these must have an underscore # in them, so the finite difference module can parse # the variable name properly. # e^i eU = ixp.register_gridfunctions_for_single_rank1("EVOL", "eU") # \partial_k ( E^i ) --> rank two tensor eU_dD = ixp.declarerank2("eU_dD", "nosym") # a^i aU = ixp.register_gridfunctions_for_single_rank1("EVOL", "aU") # \partial_k ( a^i ) --> rank two tensor aU_dD = ixp.declarerank2("aU_dD", "nosym") # \partial_k partial_m ( a^i ) --> rank three tensor aU_dDD = ixp.declarerank3("aU_dDD", "sym12") # \psi is a scalar function that is time evolved psi = gri.register_gridfunctions("EVOL", ["psi"]) # \Gamma is a scalar function that is time evolved Gamma = gri.register_gridfunctions("EVOL", ["Gamma"]) # \partial_i \psi psi_dD = ixp.declarerank1("psi_dD") # \partial_i \Gamma Gamma_dD = ixp.declarerank1("Gamma_dD") # partial_i \partial_j \psi psi_dDD = ixp.declarerank2("psi_dDD", "sym01") ghatUU = rfm.ghatUU GammahatUDD = rfm.GammahatUDD GammahatUDDdD = rfm.GammahatUDDdD ReU = rfm.ReU ReUdD = rfm.ReUdD ReUdDD = rfm.ReUdDD ###Output _____no_output_____ ###Markdown $$\partial_t a^i = -e^i - \frac{\hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]},$$ ###Code # \partial_t a^i = -e^i - \frac{\hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]} arhsU = ixp.zerorank1() for i in range(DIM): arhsU[i] -= eU[i] for j in range(DIM): arhsU[i] -= (ghatUU[i][j]*psi_dD[j])/ReU[i] ###Output _____no_output_____ ###Markdown $$\partial_t e^i = \frac{\hat{g}^{ij}\partial_j \Gamma - \hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} A^{i}\right)}{\text{ReU}[i]} = \frac{\hat{g}^{ij}\partial_j \Gamma}{\text{ReU}[i]} - \frac{\hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} a^{i} \text{ReU}[i] \right)}{\text{ReU}[i]}.$$Given that$$\partial_t E^i = {\underbrace {\textstyle \hat{g}^{ij}\partial_j \Gamma}_{\text{Term 1}}} - \hat{\gamma}^{jk} \left({\underbrace {\textstyle A^i_{,kj}}_{\text{Term 2}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}}_{\text{Term 3}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m}_{\text{Term 4}}}\right),$$we can make the following replacements within the above, in terms of NRPy+ code;\begin{align}A^i = \text{AU[i]} &\to \text{aU[i] * rfm.ReU[i]} \\\partial_j A^i = \text{AUdD[i][j]} &\to \text{aU_dD[i][j] * rfm.ReU[i]} +\text{aU[i] * rfm.ReUdD[i][j]} \\\partial_k \partial_j A^i = \text{AUdDD[i][j][k]} &\to \text{aU_dDD[i][j][k] * rfm.ReU[i]} + \text{aU_dD[i][j] * rfm.ReUdD[i][k]} \\&+ \text{aU_dD[i][k] * rfm.ReUdD[i][j]} +\text{aU[i] * rfm.ReUdDD[i][j][k]}\end{align} ###Code # A^i AU = ixp.zerorank1() # \partial_k ( A^i ) --> rank two tensor AU_dD = ixp.zerorank2() # \partial_k partial_m ( A^i ) --> rank three tensor AU_dDD = ixp.zerorank3() for i in range(DIM): AU[i] = aU[i]*ReU[i] for j in range(DIM): AU_dD[i][j] = aU_dD[i][j]*ReU[i] + aU[i]*ReUdD[i][j] for k in range(DIM): AU_dDD[i][j][k] = aU_dDD[i][j][k]*ReU[i] + aU_dD[i][j]*ReUdD[i][k] +\ aU_dD[i][k]*ReUdD[i][j] + 
aU[i]*ReUdDD[i][j][k] ###Output _____no_output_____ ###Markdown $$\text{Term 1} = \hat{g}^{ij}\partial_j \Gamma$$ ###Code # Term 1 = \hat{g}^{ij}\partial_j \Gamma Term1U = ixp.zerorank1() for i in range(DIM): for j in range(DIM): Term1U[i] += ghatUU[i][j]*Gamma_dD[j] ###Output _____no_output_____ ###Markdown $$\text{Term 2} = A^i_{,kj}$$ ###Code # Term 2: A^i_{,kj} Term2UDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): Term2UDD[i][j][k] += AU_dDD[i][k][j] ###Output _____no_output_____ ###Markdown $$\text{Term 3} = \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj}A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}$$ ###Code # Term 3: \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} # + \hat{\Gamma}^i_{dj}A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d} Term3UDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): for m in range(DIM): Term3UDD[i][j][k] += GammahatUDDdD[i][m][k][j]*AU[m] \ + GammahatUDD[i][m][k]*AU_dD[m][j] \ + GammahatUDD[i][m][j]*AU_dD[m][k] \ - GammahatUDD[m][k][j]*AU_dD[i][m] ###Output _____no_output_____ ###Markdown $$\text{Term 4} = \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m$$ ###Code # Term 4: \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - # \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m Term4UDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): for m in range(DIM): for d in range(DIM): Term4UDD[i][j][k] += ( GammahatUDD[i][d][j]*GammahatUDD[d][m][k] \ -GammahatUDD[d][k][j]*GammahatUDD[i][m][d])*AU[m] ###Output _____no_output_____ ###Markdown Finally, we build up the RHS of $E^i$,$$\partial_t E^i = {\underbrace {\textstyle \hat{g}^{ij}\partial_j \Gamma}_{\text{Term 1}}} - \hat{\gamma}^{jk} \left({\underbrace {\textstyle A^i_{,kj}}_{\text{Term 2}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}}_{\text{Term 3}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m}_{\text{Term 4}}}\right),$$and divide through by ReU[i] to get $e^i$. 
###Code # \partial_t E^i = \hat{g}^{ij}\partial_j \Gamma - \hat{\gamma}^{jk}* # (A^i_{,kj} # + \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} # + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d} # + \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m # - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m) ErhsU = ixp.zerorank1() for i in range(DIM): ErhsU[i] += Term1U[i] for j in range(DIM): for k in range(DIM): ErhsU[i] -= ghatUU[j][k]*(Term2UDD[i][j][k] + Term3UDD[i][j][k] + Term4UDD[i][j][k]) erhsU = ixp.zerorank1() for i in range(DIM): erhsU[i] = ErhsU[i]/ReU[i] ###Output _____no_output_____ ###Markdown $$\partial_t \Gamma = -\hat{g}^{ij} \left( \partial_i \partial_j \varphi - \hat{\Gamma}^k_{ji} \partial_k \varphi \right)$$ ###Code # \partial_t \Gamma = -\hat{g}^{ij} (\partial_i \partial_j \varphi - # \hat{\Gamma}^k_{ji} \partial_k \varphi) Gamma_rhs = sp.sympify(0) for i in range(DIM): for j in range(DIM): Gamma_rhs -= ghatUU[i][j]*psi_dDD[i][j] for k in range(DIM): Gamma_rhs += ghatUU[i][j]*GammahatUDD[k][j][i]*psi_dD[k] ###Output _____no_output_____ ###Markdown $$\partial_t \varphi = -\Gamma$$ ###Code # \partial_t \varphi = -\Gamma psi_rhs = -Gamma ###Output _____no_output_____ ###Markdown Constraints:\begin{align}\mathcal{G} &\equiv \Gamma - \partial_i A^i + \hat{\Gamma}^i_{ji} A^j, \\\mathcal{C} &\equiv \partial_i E^i + \hat{\Gamma}^i_{ji} E^j.\end{align} ###Code # \mathcal{G} \equiv \Gamma - \partial_i A^i + \hat{\Gamma}^i_{ji} A^j G = Gamma for i in range(DIM): G -= AU_dD[i][i] for j in range(DIM): G += GammahatUDD[i][j][i]*AU[j] # E^i EU = ixp.zerorank1() # \partial_k ( A^i ) --> rank two tensor EU_dD = ixp.zerorank2() for i in range(DIM): EU[i] = eU[i]*ReU[i] for j in range(DIM): EU_dD[i][j] = eU_dD[i][j]*ReU[i] + eU[i]*ReUdD[i][j] C = sp.sympify(0) for i in range(DIM): C += EU_dD[i][i] for j in range(DIM): C += GammahatUDD[i][j][i]*EU[j] ###Output _____no_output_____ ###Markdown Step 3: Convert $A^i$ and $E^i$ to the Cartesian basis \[Back to [top](top)\]$$\label{cart_transform}$$Here we convert $A^i$ and $E^i$ to the Cartesian basis, to make convergence tests within [Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear](Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear.ipynb) easier. Specifically, we will use the coordinate transformation definitions provided by [reference_metric.py](../edit/reference_metric.py) to build the Jacobian:\begin{align} \frac{\partial x_{\rm Cart}^i}{\partial x_{\rm Orig}^j},\end{align}where $x_{\rm Cart}^i \in \{x,y,z\}$. We then apply it to $A^i$ and $E^i$ to transform into Cartesian coordinates, via\begin{align}A^i_{\rm Cart} = \frac{\partial x_{\rm Cart}^i}{\partial x_{\rm Orig}^j} A^j_{\rm Orig}.\end{align} ###Code def Convert_to_Cartesian_basis(VU): # Coordinate transformation from original basis to Cartesian rfm.reference_metric() VU_Cart = ixp.zerorank1() Jac_dxCartU_dxOrigD = ixp.zerorank2() for i in range(DIM): for j in range(DIM): Jac_dxCartU_dxOrigD[i][j] = sp.diff(rfm.xxCart[i], rfm.xx[j]) for i in range(DIM): for j in range(DIM): VU_Cart[i] += Jac_dxCartU_dxOrigD[i][j]*VU[j] return VU_Cart AU_Cart = Convert_to_Cartesian_basis(AU) EU_Cart = Convert_to_Cartesian_basis(EU) ###Output _____no_output_____ ###Markdown Step 4: NRPy+ Module Code Validation \[Back to [top](top)\]$$\label{step4}$$Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of Maxwell's equations between1. this tutorial and 2. 
the NRPy+ [VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module. ###Code # Reset the list of gridfunctions, as registering a gridfunction # twice will spawn an error. gri.glb_gridfcs_list = [] # Call the VacuumMaxwellRHSs_rescaled() function from within the # Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py module, # which should do exactly the same as the above. # Set which system to use, which are defined in Maxwell/InitialData.py par.initialize_param(par.glb_param("char","Maxwell.InitialData","System_to_use","System_II")) import Maxwell.VacuumMaxwell_Flat_Evol_Curvilinear_rescaled as mwevol mwevol.VacuumMaxwellRHSs_rescaled() print("Consistency check between Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling tutorial and NRPy+ module: ALL SHOULD BE ZERO.") print("C - mwevol.C = " + str(C - mwevol.C)) print("G - mwevol.G = " + str(G - mwevol.G)) print("psi_rhs - mwevol.psi_rhs = " + str(psi_rhs - mwevol.psi_rhs)) print("Gamma_rhs - mwevol.Gamma_rhs = " + str(Gamma_rhs - mwevol.Gamma_rhs)) for i in range(DIM): print("arhsU["+str(i)+"] - mwevol.arhsU["+str(i)+"] = " + str(arhsU[i] - mwevol.arhsU[i])) print("erhsU["+str(i)+"] - mwevol.erhsU["+str(i)+"] = " + str(erhsU[i] - mwevol.erhsU[i])) print("AU_Cart["+str(i)+"] - mwevol.AU_Cart["+str(i)+"] = " + str(AU_Cart[i] - mwevol.AU_Cart[i])) print("EU_Cart["+str(i)+"] - mwevol.EU_Cart["+str(i)+"] = " + str(EU_Cart[i] - mwevol.EU_Cart[i])) ###Output Consistency check between Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling tutorial and NRPy+ module: ALL SHOULD BE ZERO. C - mwevol.C = 0 G - mwevol.G = 0 psi_rhs - mwevol.psi_rhs = 0 Gamma_rhs - mwevol.Gamma_rhs = 0 arhsU[0] - mwevol.arhsU[0] = 0 erhsU[0] - mwevol.erhsU[0] = 0 AU_Cart[0] - mwevol.AU_Cart[0] = 0 EU_Cart[0] - mwevol.EU_Cart[0] = 0 arhsU[1] - mwevol.arhsU[1] = 0 erhsU[1] - mwevol.erhsU[1] = 0 AU_Cart[1] - mwevol.AU_Cart[1] = 0 EU_Cart[1] - mwevol.EU_Cart[1] = 0 arhsU[2] - mwevol.arhsU[2] = 0 erhsU[2] - mwevol.erhsU[2] = 0 AU_Cart[2] - mwevol.AU_Cart[2] = 0 EU_Cart[2] - mwevol.EU_Cart[2] = 0 ###Markdown Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](top)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.pdf](Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) 
###Code import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling") ###Output [NbConvertApp] WARNING | pattern 'Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.ipynb' matched no files Created Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.tex, and compiled LaTeX file to PDF file Tutorial-VacuumMaxwell_Curvilinear_RHS- Rescaling.pdf ###Markdown Time Evolution of Maxwell's Equations in Flat Spacetime and Curvilinear Coordinates Authors: Terrence Pierre Jacques, Zachariah Etienne and Ian Ruchlin This module constructs the evolution equations for Maxwell's equations as symbolic (SymPy) expressions, for an electromagnetic field in vacuum, as defined in [Tutorial-VacuumMaxwell_formulation_Curvilinear](Tutorial-VacuumMaxwell_formulation_Curvilinear.ipynb).**Notebook Status:** Validated **Validation Notes:** All expressions generated in this here have been validated against the [VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module, as well as the [Maxwell/VacuumMaxwell_Flat_Evol_Cartesian](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Cartesian.py) module when setting the coordinate system to Cartesian. NRPy+ Source Code for this module: [VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) $$\label{top}$$ Table of Contents: 1. [Step 1](step1): Set core NRPy+ parameters for numerical grids and reference metric1. [Step 2](step2): System II in curvilinear coordinates, using the rescaled quantities $a^i$ and $e^i$1. [Step 3](cart_transform): Convert $A^i$ and $E^i$ to the Cartesian basis1. [Step 4](step4): Code Validation1. [Step 5](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Preliminaries \[Back to [top](top)\]$$\label{step1}$$Set up the needed NRPy+ infrastructure, such the number of dimensions and finite differencing order. ###Code # Import needed Python modules import NRPy_param_funcs as par # NRPy+: Parameter interface import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support import reference_metric as rfm # NRPy+: Reference metric support import grid as gri # NRPy+: Functions having to do with numerical grids import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends # Set the spatial dimension parameter to 3. par.set_parval_from_str("grid::DIM", 3) DIM = par.parval_from_str("grid::DIM") # Set coordinate system # Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical, # SymTP, SinhSymTP CoordSystem = "Spherical" par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem) # Set reference metric related quantities rfm.reference_metric() ###Output _____no_output_____ ###Markdown Step 2: System II in Curvilinear Coordinates, using the rescaled quantities $a^i$ and $e^i$ \[Back to [top](top)\]$$\label{step2}$$(Following discussion reproduced from [Tutorial-VacuumMaxwell_formulation_Curvilinear](Tutorial-VacuumMaxwell_formulation_Curvilinear.ipynb))Consider an arbitrary vector $\Lambda^i$, with smooth (continous) Cartesian components $\Lambda^x$, $\Lambda^y$, and $\Lambda^z$. Transforming $\Lambda^i$ to, e.g. 
spherical coordinates, introduces terms that spoil the smoothness of $\Lambda^i$;$$\Lambda^\phi = \frac{1}{r \sin \theta} \times \left[ \text{smooth part} \right].$$Evolving $\Lambda^\phi$ will introduce instabilities along the $z$-axis. To avoid this, we instead evolve the _rescaled_ quantity $\lambda^i$, defined by $$\bar{\Lambda}^i = \frac{\lambda^i}{\text{scalefactor}[i]}.$$where we use the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices, and no sums are implied by the repeated indices.Thus, we evolve the smoothed variable $\lambda^i$, via $$\lambda^i = \bar{\Lambda}^i \text{scalefactor}[i].$$Within Nrpy+, ReU[i] = 1/scalefactor[i], giving $$\lambda^i = \frac{\bar{\Lambda}^i}{\text{ReU}[i]}.$$We now define the rescaled quantities $a^i$ and $e^i$ and rewrite our formulation of Maxwell's equations in curvilinear coordinates;\begin{align}a^i &= \frac{A^i}{\text{ReU}[i]},\\ \\e^i &= \frac{E^i}{\text{ReU}[i]},\end{align}Taking a time derivative on both sides,\begin{align}\partial_t a^i &= \frac{\partial_t A^i}{\text{ReU}[i]} = \frac{ -E^i - \hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]} = -e^i - \frac{\hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]},\\ \\\partial_t e^i &= \frac{\partial_t E^i}{\text{ReU}[i]} = \frac{\hat{g}^{ij}\partial_j \Gamma - \hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} A^{i}\right)}{\text{ReU}[i]} = \frac{\hat{g}^{ij}\partial_j \Gamma}{\text{ReU}[i]} - \frac{\hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} a^{i} \text{ReU}[i] \right)}{\text{ReU}[i]}.\end{align}Given that$$\partial_t E^i = {\underbrace {\textstyle \hat{g}^{ij}\partial_j \Gamma}_{\text{Term 1}}} - \hat{\gamma}^{jk} \left({\underbrace {\textstyle A^i_{,kj}}_{\text{Term 2}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}}_{\text{Term 3}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m}_{\text{Term 4}}}\right),$$we can make the following replacements within the above, in terms of NRPy+ code;\begin{align}A^i = \text{AU[i]} &\to \text{aU[i] * rfm.ReU[i]} \\\partial_j A^i = \text{AUdD[i][j]} &\to \text{aU_dD[i][j] * rfm.ReU[i]} +\text{aU[i] * rfm.ReUdD[i][j]} \\\partial_k \partial_j A^i = \text{AUdDD[i][j][k]} &\to \text{aU_dDD[i][j][k] * rfm.ReU[i]} + \text{aU_dDD[i][j] * rfm.ReUdD[i][k]} \\&+ \text{aU_dD[i][k] * rfm.ReUdD[i][j]} +\text{aU[i] * rfm.ReUdDD[i][j][k]}\end{align}The remainder of Maxwell's equations are unchanged;$$\partial_t \Gamma = -\hat{g}^{ij} \left( \partial_i \partial_j \varphi - \hat{\Gamma}^k_{ji} \partial_k \varphi \right),$$$$\partial_t \varphi = -\Gamma,$$subject to constraints\begin{align}\mathcal{G} &\equiv \Gamma - \partial_i A^i + \hat{\Gamma}^i_{ji} A^j &= 0,\\\mathcal{C} &\equiv \partial_i E^i + \hat{\Gamma}^i_{ji} E^j &= 0.\end{align} ###Code # Register gridfunctions that are needed as input. # Declare the rank-1 indexed expressions e^{i}, e^{i}, # and \partial^{i} \psi, that are to be evolved in time. # Derivative variables like these must have an underscore # in them, so the finite difference module can parse # the variable name properly. 
# e^i eU = ixp.register_gridfunctions_for_single_rank1("EVOL", "eU") # \partial_k ( E^i ) --> rank two tensor eU_dD = ixp.declarerank2("eU_dD", "nosym") # a^i aU = ixp.register_gridfunctions_for_single_rank1("EVOL", "aU") # \partial_k ( a^i ) --> rank two tensor aU_dD = ixp.declarerank2("aU_dD", "nosym") # \partial_k partial_m ( a^i ) --> rank three tensor aU_dDD = ixp.declarerank3("aU_dDD", "sym12") # \psi is a scalar function that is time evolved psi = gri.register_gridfunctions("EVOL", ["psi"]) # \Gamma is a scalar function that is time evolved Gamma = gri.register_gridfunctions("EVOL", ["Gamma"]) # \partial_i \psi psi_dD = ixp.declarerank1("psi_dD") # \partial_i \Gamma Gamma_dD = ixp.declarerank1("Gamma_dD") # partial_i \partial_j \psi psi_dDD = ixp.declarerank2("psi_dDD", "sym01") ghatUU = rfm.ghatUU GammahatUDD = rfm.GammahatUDD GammahatUDDdD = rfm.GammahatUDDdD ReU = rfm.ReU ReUdD = rfm.ReUdD ReUdDD = rfm.ReUdDD ###Output _____no_output_____ ###Markdown $$\partial_t a^i = -e^i - \frac{\hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]},$$ ###Code # \partial_t a^i = -e^i - \frac{\hat{g}^{ij}\partial_j \varphi}{\text{ReU}[i]} arhsU = ixp.zerorank1() for i in range(DIM): arhsU[i] -= eU[i] for j in range(DIM): arhsU[i] -= (ghatUU[i][j]*psi_dD[j])/ReU[i] ###Output _____no_output_____ ###Markdown $$\partial_t e^i = \frac{\hat{g}^{ij}\partial_j \Gamma - \hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} A^{i}\right)}{\text{ReU}[i]} = \frac{\hat{g}^{ij}\partial_j \Gamma}{\text{ReU}[i]} - \frac{\hat{g}^{jk} \hat{\nabla}_{j} \left(\hat{\nabla}_{k} a^{i} \text{ReU}[i] \right)}{\text{ReU}[i]}.$$Given that$$\partial_t E^i = {\underbrace {\textstyle \hat{g}^{ij}\partial_j \Gamma}_{\text{Term 1}}} - \hat{\gamma}^{jk} \left({\underbrace {\textstyle A^i_{,kj}}_{\text{Term 2}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}}_{\text{Term 3}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m}_{\text{Term 4}}}\right),$$we can make the following replacements within the above, in terms of NRPy+ code;\begin{align}A^i = \text{AU[i]} &\to \text{aU[i] * rfm.ReU[i]} \\\partial_j A^i = \text{AUdD[i][j]} &\to \text{aU_dD[i][j] * rfm.ReU[i]} +\text{aU[i] * rfm.ReUdD[i][j]} \\\partial_k \partial_j A^i = \text{AUdDD[i][j][k]} &\to \text{aU_dDD[i][j][k] * rfm.ReU[i]} + \text{aU_dD[i][j] * rfm.ReUdD[i][k]} \\&+ \text{aU_dD[i][k] * rfm.ReUdD[i][j]} +\text{aU[i] * rfm.ReUdDD[i][j][k]}\end{align} ###Code # A^i AU = ixp.zerorank1() # \partial_k ( A^i ) --> rank two tensor AU_dD = ixp.zerorank2() # \partial_k partial_m ( A^i ) --> rank three tensor AU_dDD = ixp.zerorank3() for i in range(DIM): AU[i] = aU[i]*ReU[i] for j in range(DIM): AU_dD[i][j] = aU_dD[i][j]*ReU[i] + aU[i]*ReUdD[i][j] for k in range(DIM): AU_dDD[i][j][k] = aU_dDD[i][j][k]*ReU[i] + aU_dD[i][j]*ReUdD[i][k] +\ aU_dD[i][k]*ReUdD[i][j] + aU[i]*ReUdDD[i][j][k] ###Output _____no_output_____ ###Markdown $$\text{Term 1} = \hat{g}^{ij}\partial_j \Gamma$$ ###Code # Term 1 = \hat{g}^{ij}\partial_j \Gamma Term1U = ixp.zerorank1() for i in range(DIM): for j in range(DIM): Term1U[i] += ghatUU[i][j]*Gamma_dD[j] ###Output _____no_output_____ ###Markdown $$\text{Term 2} = A^i_{,kj}$$ ###Code # Term 2: A^i_{,kj} Term2UDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): Term2UDD[i][j][k] += AU_dDD[i][k][j] ###Output _____no_output_____ ###Markdown $$\text{Term 3} = 
\hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj}A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}$$ ###Code # Term 3: \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} # + \hat{\Gamma}^i_{dj}A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d} Term3UDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): for m in range(DIM): Term3UDD[i][j][k] += GammahatUDDdD[i][m][k][j]*AU[m] \ + GammahatUDD[i][m][k]*AU_dD[m][j] \ + GammahatUDD[i][m][j]*AU_dD[m][k] \ - GammahatUDD[m][k][j]*AU_dD[i][m] ###Output _____no_output_____ ###Markdown $$\text{Term 4} = \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m$$ ###Code # Term 4: \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - # \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m Term4UDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): for m in range(DIM): for d in range(DIM): Term4UDD[i][j][k] += ( GammahatUDD[i][d][j]*GammahatUDD[d][m][k] \ -GammahatUDD[d][k][j]*GammahatUDD[i][m][d])*AU[m] ###Output _____no_output_____ ###Markdown Finally, we build up the RHS of $E^i$,$$\partial_t E^i = {\underbrace {\textstyle \hat{g}^{ij}\partial_j \Gamma}_{\text{Term 1}}} - \hat{\gamma}^{jk} \left({\underbrace {\textstyle A^i_{,kj}}_{\text{Term 2}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d}}_{\text{Term 3}}} + {\underbrace {\textstyle \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m}_{\text{Term 4}}}\right),$$and divide through by ReU[i] to get $e^i$. ###Code # \partial_t E^i = \hat{g}^{ij}\partial_j \Gamma - \hat{\gamma}^{jk}* # (A^i_{,kj} # + \hat{\Gamma}^i_{mk,j} A^m + \hat{\Gamma}^i_{mk} A^m_{,j} # + \hat{\Gamma}^i_{dj} A^d_{,k} - \hat{\Gamma}^d_{kj} A^i_{,d} # + \hat{\Gamma}^i_{dj}\hat{\Gamma}^d_{mk} A^m # - \hat{\Gamma}^d_{kj} \hat{\Gamma}^i_{md} A^m) ErhsU = ixp.zerorank1() for i in range(DIM): ErhsU[i] += Term1U[i] for j in range(DIM): for k in range(DIM): ErhsU[i] -= ghatUU[j][k]*(Term2UDD[i][j][k] + Term3UDD[i][j][k] + Term4UDD[i][j][k]) erhsU = ixp.zerorank1() for i in range(DIM): erhsU[i] = ErhsU[i]/ReU[i] ###Output _____no_output_____ ###Markdown $$\partial_t \Gamma = -\hat{g}^{ij} \left( \partial_i \partial_j \varphi - \hat{\Gamma}^k_{ji} \partial_k \varphi \right)$$ ###Code # \partial_t \Gamma = -\hat{g}^{ij} (\partial_i \partial_j \varphi - # \hat{\Gamma}^k_{ji} \partial_k \varphi) Gamma_rhs = sp.sympify(0) for i in range(DIM): for j in range(DIM): Gamma_rhs -= ghatUU[i][j]*psi_dDD[i][j] for k in range(DIM): Gamma_rhs += ghatUU[i][j]*GammahatUDD[k][j][i]*psi_dD[k] ###Output _____no_output_____ ###Markdown $$\partial_t \varphi = -\Gamma$$ ###Code # \partial_t \varphi = -\Gamma psi_rhs = -Gamma ###Output _____no_output_____ ###Markdown Constraints:\begin{align}\mathcal{G} &\equiv \Gamma - \partial_i A^i + \hat{\Gamma}^i_{ji} A^j, \\\mathcal{C} &\equiv \partial_i E^i + \hat{\Gamma}^i_{ji} E^j.\end{align} ###Code # \mathcal{G} \equiv \Gamma - \partial_i A^i + \hat{\Gamma}^i_{ji} A^j G = Gamma for i in range(DIM): G -= AU_dD[i][i] for j in range(DIM): G += GammahatUDD[i][j][i]*AU[j] # E^i EU = ixp.zerorank1() # \partial_k ( A^i ) --> rank two tensor EU_dD = ixp.zerorank2() for i in range(DIM): EU[i] = eU[i]*ReU[i] for j in range(DIM): EU_dD[i][j] = eU_dD[i][j]*ReU[i] + eU[i]*ReUdD[i][j] C = sp.sympify(0) for i in range(DIM): C += EU_dD[i][i] for j in range(DIM): C += GammahatUDD[i][j][i]*EU[j] ###Output 
_____no_output_____ ###Markdown Step 3: Convert $A^i$ and $E^i$ to the Cartesian basis \[Back to [top](top)\]$$\label{cart_transform}$$Here we convert $A^i$ and $E^i$ to the Cartesian basis, to make convergence tests within [Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear](Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear.ipynb) easier. Specifically, we will use the coordinate transformation definitions provided by [reference_metric.py](../edit/reference_metric.py) to build the Jacobian:\begin{align} \frac{\partial x_{\rm Cart}^i}{\partial x_{\rm Orig}^j},\end{align}where $x_{\rm Cart}^i \in \{x,y,z\}$. We then apply it to $A^i$ and $E^i$ to transform into Cartesian coordinates, via\begin{align}A^i_{\rm Cart} = \frac{\partial x_{\rm Cart}^i}{\partial x_{\rm Orig}^j} A^j_{\rm Orig}.\end{align} ###Code def Convert_to_Cartesian_basis(VU): # Coordinate transformation from original basis to Cartesian rfm.reference_metric() VU_Cart = ixp.zerorank1() Jac_dxCartU_dxOrigD = ixp.zerorank2() for i in range(DIM): for j in range(DIM): Jac_dxCartU_dxOrigD[i][j] = sp.diff(rfm.xx_to_Cart[i], rfm.xx[j]) for i in range(DIM): for j in range(DIM): VU_Cart[i] += Jac_dxCartU_dxOrigD[i][j]*VU[j] return VU_Cart AU_Cart = Convert_to_Cartesian_basis(AU) EU_Cart = Convert_to_Cartesian_basis(EU) ###Output _____no_output_____ ###Markdown Step 4: NRPy+ Module Code Validation \[Back to [top](top)\]$$\label{step4}$$Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of Maxwell's equations between1. this tutorial and 2. the NRPy+ [VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module. ###Code # Reset the list of gridfunctions, as registering a gridfunction # twice will spawn an error. gri.glb_gridfcs_list = [] # Call the VacuumMaxwellRHSs_rescaled() function from within the # Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py module, # which should do exactly the same as the above. # Set which system to use, which are defined in Maxwell/InitialData.py par.initialize_param(par.glb_param("char","Maxwell.InitialData","System_to_use","System_II")) import Maxwell.VacuumMaxwell_Flat_Evol_Curvilinear_rescaled as mwevol mwevol.VacuumMaxwellRHSs_rescaled() print("Consistency check between Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling tutorial and NRPy+ module: ALL SHOULD BE ZERO.") print("C - mwevol.C = " + str(C - mwevol.C)) print("G - mwevol.G = " + str(G - mwevol.G)) print("psi_rhs - mwevol.psi_rhs = " + str(psi_rhs - mwevol.psi_rhs)) print("Gamma_rhs - mwevol.Gamma_rhs = " + str(Gamma_rhs - mwevol.Gamma_rhs)) for i in range(DIM): print("arhsU["+str(i)+"] - mwevol.arhsU["+str(i)+"] = " + str(arhsU[i] - mwevol.arhsU[i])) print("erhsU["+str(i)+"] - mwevol.erhsU["+str(i)+"] = " + str(erhsU[i] - mwevol.erhsU[i])) print("AU_Cart["+str(i)+"] - mwevol.AU_Cart["+str(i)+"] = " + str(AU_Cart[i] - mwevol.AU_Cart[i])) print("EU_Cart["+str(i)+"] - mwevol.EU_Cart["+str(i)+"] = " + str(EU_Cart[i] - mwevol.EU_Cart[i])) ###Output Consistency check between Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling tutorial and NRPy+ module: ALL SHOULD BE ZERO. 
C - mwevol.C = 0 G - mwevol.G = 0 psi_rhs - mwevol.psi_rhs = 0 Gamma_rhs - mwevol.Gamma_rhs = 0 arhsU[0] - mwevol.arhsU[0] = 0 erhsU[0] - mwevol.erhsU[0] = 0 AU_Cart[0] - mwevol.AU_Cart[0] = 0 EU_Cart[0] - mwevol.EU_Cart[0] = 0 arhsU[1] - mwevol.arhsU[1] = 0 erhsU[1] - mwevol.erhsU[1] = 0 AU_Cart[1] - mwevol.AU_Cart[1] = 0 EU_Cart[1] - mwevol.EU_Cart[1] = 0 arhsU[2] - mwevol.arhsU[2] = 0 erhsU[2] - mwevol.erhsU[2] = 0 AU_Cart[2] - mwevol.AU_Cart[2] = 0 EU_Cart[2] - mwevol.EU_Cart[2] = 0 ###Markdown Step 5: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](top)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.pdf](Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) ###Code import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling") ###Output [NbConvertApp] WARNING | pattern 'Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.ipynb' matched no files Created Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.tex, and compiled LaTeX file to PDF file Tutorial-VacuumMaxwell_Curvilinear_RHS- Rescaling.pdf
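###Markdown As a closing aside on the rescaling introduced at the start of this tutorial, the sketch below is a stand-alone illustration in plain SymPy (no NRPy+ modules). It assumes the standard spherical scale factors $(1, r, r\sin\theta)$, and the sample $\bar{\Lambda}^i$ components are an illustrative choice, not quantities taken from the module above: dividing by $\text{ReU}[i] = 1/\text{scalefactor}[i]$ strips off the singular $1/r$ and $1/(r\sin\theta)$ factors, leaving components that remain smooth on the $z$-axis. ###Code
# Minimal sketch of the rescaling lambda^i = Lambdabar^i / ReU[i], assuming the
# standard spherical scale factors (1, r, r*sin(th)); plain SymPy, no NRPy+.
import sympy as sp

r, th, ph = sp.symbols('r th ph', positive=True)
scalefactor = [sp.sympify(1), r, r*sp.sin(th)]   # spherical scale factors
ReU = [1/sf for sf in scalefactor]               # ReU[i] = 1/scalefactor[i]

# An illustrative unrescaled vector whose phi component blows up on the z-axis:
LambdabarU = [sp.cos(th), sp.sin(th)/r, sp.cos(ph)/(r*sp.sin(th))]

# Rescaled components lambda^i = Lambdabar^i / ReU[i] are smooth everywhere:
lambdaU = [sp.simplify(LambdabarU[i]/ReU[i]) for i in range(3)]
print(lambdaU)   # -> [cos(th), sin(th), cos(ph)]
###Output _____no_output_____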
lecture08.ML2/regression.ipynb
###Markdown With our X and Y, we now have the solution to the linear regression.$$y=mx+b$$where b = Intercept, and m is the Coefficient of Estimate for the feature "Rooms" ###Code rooms = pd.Series([4], name="rooms") X_test = pd.DataFrame(rooms) X_test lreg.predict(X_test) ###Output _____no_output_____ ###Markdown Multivariate Regression Let's add more features to our prediction model. ###Code # Data Columns X_multi = boston_df.drop('Price',1) # Targets Y_target = boston_df.Price ###Output _____no_output_____ ###Markdown Finally, we're ready to pass the X and Y using the linear regression object. ###Code # Implement Linear Regression lreg.fit(X_multi,Y_target) ###Output _____no_output_____ ###Markdown Let's go ahead check the intercept and number of coefficients. ###Code print(' The estimated intercept coefficient is %.2f ' %lreg.intercept_) print(' The number of coefficients used was %d ' % len(lreg.coef_)) ###Output The number of coefficients used was 13 ###Markdown Great! So we have basically made an equation for a line, but instead of just oneo coefficient m and an intercept b, we now have 13 coefficients. To get an idea of what this looks like check out the [documentation](http://scikit-learn.org/stable/modules/linear_model.html) for this equation:$$ y(w,x) = w_0 + w_1 x_1 + ... + w_p x_p $$Where $$w = (w_1, ...w_p)$$ as the coefficients and $$ w_0 $$ as the intercept What we'll do next is set up a DataFrame showing all the Features and their estimated coefficients obtained form the linear regression. ###Code # Set a DataFrame from the Features coeff_df = DataFrame(boston_df.columns) coeff_df.columns = ['Features'] # Set a new column lining up the coefficients from the linear regression coeff_df["Coefficient Estimate"] = pd.Series(lreg.coef_) # Show coeff_df ###Output _____no_output_____ ###Markdown Just like we initially plotted out, it seems the highest correlation between a feature and a house price was the number of rooms.Now let's move on to Predicting prices! Training and Validation In a dataset a training set is implemented to build up a model, while a validation set is used to validate the model built. Data points in the training set are excluded from the validation set. The correct way to pick out samples from your dataset to be part either the training or validation (also called test) set is *randomly*.Fortunately, scikit learn has a built in function specifically for this called train_test_split.The parameters passed are your X and Y, then optionally test_size parameter, representing the proportion of the dataset to include in the test split. As well a train_size parameter. The default split is: 75% for training set and 25% for testing set. You can learn more about these parameters [here](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html) ###Code # Grab the output and set as X and Y test and train data sets! X_train, X_test, Y_train, Y_test = \ sklearn.cross_validation.train_test_split(X_multi,Y_target) ###Output _____no_output_____ ###Markdown Let's go ahead and see what the output of the train_test_split was: ###Code # Print shapes of the training and testing data sets print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape) X_train.head(5) ###Output _____no_output_____ ###Markdown Great! Now that we have our training and testing sets we can continue on to predicint gprices based on the multiple variables. Prediction! 
Now that we have our training and testing sets, let's go ahead and try to use them to predict house prices. We'll use our training set for the prediction and then use our testing set for validation. ###Code # Create our regression object lreg = LinearRegression() # Once again do a linear regression, except only on the training sets this time lreg.fit(X_train,Y_train) ###Output _____no_output_____ ###Markdown Now run a prediction on both the X training set and the testing set. ###Code # Predictions on training and testing sets pred_train = lreg.predict(X_train) pred_test = lreg.predict(X_test) ###Output _____no_output_____ ###Markdown Let's see if we can find the error in our fitted line. A common error measure is called "root mean squared error" (RMSE). RMSE is similar to the standard deviation. It is calculated by taking the square root of the sum of the square error and divide by the elements. Square error is the square of the sum of all differences between the prediction and the true value.The root mean square error (RMSE) corresponds approximately to the standard deviation. i.e., a prediction won't vary more than 2 times the RMSE 95% of the time. Note: Review the Normal Distribution Appendix lecture if this doesn't make sense to you or check out this [link](http://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule).Now we will get the mean square error ###Code print("Fit a model X_train, and calculate MSE with Y_train: %.2f" \ % np.mean((Y_train - pred_train) ** 2)) print("Fit a model X_train, and calculate MSE with X_test and Y_test: %.2f" \ % np.mean((Y_test - pred_test) ** 2)) ###Output Fit a model X_train, and calculate MSE with Y_train: 20.74 Fit a model X_train, and calculate MSE with X_test and Y_test: 25.91 ###Markdown It looks like our mean square error between our training and testing was pretty close. But how do we actually visualize this? Visualizing Risiduals In regression analysis, the difference between the observed value of the dependent variable (y) and the predicted value (ŷ) is called the residual (e). Each data point has one residual, so that:$$Residual = Observed\:value - Predicted\:value $$ You can think of these residuals in the same way as the D value we discussed earlier, in this case however, there were multiple data points considered.A residual plot is a graph that shows the residuals on the vertical axis and the independent variable on the horizontal axis. If the points in a residual plot are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data; otherwise, a non-linear model is more appropriate.Residual plots are a good way to visualize the errors in your data. If you have done a good job then your data should be randomly scattered around line zero. If there is some strucutre or pattern, that means your model is not capturing some thing. There could be an interaction between 2 variables that you're not considering, or may be you are measuring time dependent data. If this is the case go back to your model and check your data set closely.So now let's go ahead and create the residual plot. For more info on the residual plots check out this great [link](http://blog.minitab.com/blog/adventures-in-statistics/why-you-need-to-check-your-residual-plots-for-regression-analysis). 
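###Markdown Before drawing the plot, here is a quick numeric companion (a sketch only, assuming the Y_train, Y_test, pred_train and pred_test variables defined in the cells above): taking the square root of the MSE values printed above gives the RMSE, and the residuals themselves should average out near zero if the fit is reasonable. ###Code
# RMSE is just the square root of the MSE computed above, and the residuals
# (observed - predicted) should be centred near zero for a reasonable fit.
# Assumes Y_train, Y_test, pred_train, pred_test from the previous cells.
import numpy as np

rmse_train = np.sqrt(np.mean((Y_train - pred_train) ** 2))
rmse_test = np.sqrt(np.mean((Y_test - pred_test) ** 2))
print("Train RMSE: %.2f   Test RMSE: %.2f" % (rmse_train, rmse_test))

print("Mean train residual: %.3f" % np.mean(Y_train - pred_train))
print("Mean test residual:  %.3f" % np.mean(Y_test - pred_test))
###Output _____no_output_____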
###Code # Scatter plot the training data train = plt.scatter(pred_train,(Y_train-pred_train),c='b',alpha=0.5) # Scatter plot the testing data test = plt.scatter(pred_test,(Y_test-pred_test),c='r',alpha=0.5) # Plot a horizontal axis line at 0 plt.hlines(y=0,xmin=-10,xmax=50) #Labels plt.legend((train,test),('Training','Test'),loc='lower left') plt.title('Residual Plots') ###Output _____no_output_____ ###Markdown Great! Looks like there aren't any major patterns to be concerned about, it may be interesting to check out the line pattern at the top of the graph, but overall the majority of the residuals seem to be randomly allocated above and below the horizontal. We could also use seaborn to create these plots: ###Code # Residual plot of all the dataset using seaborn sns.residplot('RM', 'Price', data = boston_df) ###Output _____no_output_____ ###Markdown RegressionThis notebook covers univariate & multi-variate "linear regression". We'll be going over how to use the scikit-learn regression model, as well as how to train the regressor using the fit() method, and how to predict new labels using the predict() method. We'll be analyzing a data set consisting of house prices in Boston. If you're interested in the deeper mathematics of linear regession methods, check out the [wikipedia page](http://en.wikipedia.org/wiki/Linear_regression) and also check out Andrew Ng's wonderful lectures for free on [youtube](https://www.youtube.com/watch?v=5u4G23_OohI). ###Code import numpy as np import pandas as pd from pandas import Series, DataFrame import matplotlib.pyplot as plt import seaborn as sns sns.set_style('whitegrid') %matplotlib inline ###Output _____no_output_____ ###Markdown We'll start by looking a an example of a dataset from scikit-learn. First we'll import our usual data analysis imports, then sklearn's built-in boston dataset.You should always try to do a quick visualization fo the data you have. Let's go ahead an make a histogram of the prices. ###Code from sklearn.datasets import load_boston boston = load_boston() print boston.DESCR plt.hist(boston.target, bins=50) plt.xlabel("Prices in $1000s") plt.ylabel("Number of Houses") # the 5th column in "boston" dataset is "RM" (# rooms) plt.scatter(boston.data[:,5], boston.target) plt.ylabel("Prices in $1000s") plt.xlabel("# rooms") boston_df = DataFrame(boston.data) boston_df.columns = boston.feature_names boston_df.head(5) boston_df = DataFrame(boston.data) boston_df.columns = boston.feature_names boston_df['Price'] = boston.target boston_df.head(5) sns.lmplot('RM', 'Price', data=boston_df) ###Output _____no_output_____ ###Markdown Univariate Regression We will start by setting up the X and Y arrays for numpy to take in. An **important note** for the X array: Numpy expects a two-dimensional array, the first dimension is the different example values, and the second dimension is the attribute number. In this case we have our value as the mean number of rooms per house, and this is a single attribute so the second dimension of the array is just 1. So we'll need to create a (506,1) shape array. There are a few ways to do this, but an easy way to do this is by using numpy's built-in vertical stack tool, vstack. ###Code # Set up X as median room values X = boston_df.RM # Use v to make X two-dimensional X = np.vstack(boston_df.RM) # Set up Y as the target price of the houses. 
Y = boston_df.Price type(X) type(Y) ###Output _____no_output_____ ###Markdown Let's import the [linear regression library](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) from the sklearn module.The sklearn.linear_model.LinearRegression class is an estimator. Estimators predict a value based on the observed data. In scikit-learn, all estimators implement the fit() and predict() methods. The former method is used to learn the parameters of a model, and the latter method is used to predict the value of a response variable for an explanatory variable using the learned parameters. It is easy to experiment with different models using scikit-learn because all estimators implement the fit and predict methods. ###Code import sklearn from sklearn.linear_model import LinearRegression ###Output _____no_output_____ ###Markdown Next, we create a LinearRegression object, afterwards, type lm. then press tab to see the list of methods availble on this object. ###Code # Create a LinearRegression Object lreg = LinearRegression() ###Output _____no_output_____ ###Markdown The functions we will be using are:lreg.fit() which fits a linear modellreg.predict() which is used to predict Y using the linear model with estimated coefficientslreg.score() which returns the coefficient of determination (R^2). A measure of how well observed outcomes are replicated by the model, learn more about it [here](http://en.wikipedia.org/wiki/Coefficient_of_determination) ###Code # Implement Linear Regression lreg.fit(X,Y) ###Output _____no_output_____ ###Markdown Let's go ahead check the intercept and number of coefficients. ###Code print(' The estimated intercept coefficient is %.2f ' %lreg.intercept_) print(' The number of coefficients used was %d ' % len(lreg.coef_)) type(lreg.coef_) # Set a DataFrame from the Features coeff_df = DataFrame(["Intercept", "Rooms"]) coeff_df.columns = ['Feature'] # Set a new column lining up the coefficients from the linear regression coeff_df["Coefficient Estimate"] = pd.Series(np.append(lreg.intercept_, lreg.coef_)) # Show coeff_df ###Output _____no_output_____
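###Markdown As a final sanity check on the univariate fit (a sketch assuming the lreg object fit on the single RM feature above), the prediction for a 4-room house should equal $b + m \cdot 4$ built from the intercept and coefficient we just tabulated. ###Code
# Cross-check the fitted line y = m*x + b against sklearn's own predict().
# Assumes lreg was fit on the single RM feature as in the cell above.
import numpy as np

b = lreg.intercept_
m = lreg.coef_[0]
print("Manual  b + m*4    : %.2f" % (b + m * 4))
print("sklearn predict(4) : %.2f" % lreg.predict(np.array([[4]]))[0])
###Output _____no_output_____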
BSNets.ipynb
###Markdown ###Code import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import sys import os import torch.optim as optim import torchvision from torchvision import datasets, transforms from scipy import io import torch.utils.data import scipy import matplotlib.pyplot as plt from torch.utils.data import Dataset, DataLoader device = torch.device("cuda" if torch.cuda.is_available() else "cpu") !pip install -U spectral if not (os.path.isfile('/content/Indian_pines_corrected.mat')): !wget http://www.ehu.eus/ccwintco/uploads/6/67/Indian_pines_corrected.mat if not (os.path.isfile('/content/Indian_pines_gt.mat')): !wget http://www.ehu.eus/ccwintco/uploads/c/c4/Indian_pines_gt.mat def padWithZeros(X, margin=2): ## From: https://github.com/gokriznastic/HybridSN/blob/master/Hybrid-Spectral-Net.ipynb newX = np.zeros((X.shape[0] + 2 * margin, X.shape[1] + 2* margin, X.shape[2])) x_offset = margin y_offset = margin newX[x_offset:X.shape[0] + x_offset, y_offset:X.shape[1] + y_offset, :] = X return newX def createImageCubes(X, y, windowSize=5, removeZeroLabels = True): ## From: https://github.com/gokriznastic/HybridSN/blob/master/Hybrid-Spectral-Net.ipynb margin = int((windowSize - 1) / 2) zeroPaddedX = padWithZeros(X, margin=margin) # split patches patchesData = np.zeros((X.shape[0] * X.shape[1], windowSize, windowSize, X.shape[2]), dtype=np.uint8) patchesLabels = np.zeros((X.shape[0] * X.shape[1]), dtype=np.uint8) patchIndex = 0 for r in range(margin, zeroPaddedX.shape[0] - margin): for c in range(margin, zeroPaddedX.shape[1] - margin): patch = zeroPaddedX[r - margin:r + margin + 1, c - margin:c + margin + 1] patchesData[patchIndex, :, :, :] = patch patchesLabels[patchIndex] = y[r-margin, c-margin] patchIndex = patchIndex + 1 if removeZeroLabels: patchesData = patchesData[patchesLabels>0,:,:,:] patchesLabels = patchesLabels[patchesLabels>0] patchesLabels -= 1 return patchesData, patchesLabels class HyperSpectralDataset(Dataset): """HyperSpectral dataset.""" def __init__(self,data_url,label_url): self.data = np.array(scipy.io.loadmat('/content/'+data_url.split('/')[-1])[data_url.split('/')[-1].split('.')[0].lower()]) self.targets = np.array(scipy.io.loadmat('/content/'+label_url.split('/')[-1])[label_url.split('/')[-1].split('.')[0].lower()]) self.data, self.targets = createImageCubes(self.data,self.targets, windowSize=5) self.data = self.data[:10240,:,:,:] self.targets = self.targets[:10240] self.data = torch.Tensor(self.data) self.data = self.data.permute(0,3,1,2) def __len__(self): return self.data.shape[0] def __getitem__(self, idx): return self.data[idx,:,:,:] , self.targets[idx] data_train = HyperSpectralDataset('Indian_pines_corrected.mat','Indian_pines_gt.mat') train_loader = DataLoader(data_train, batch_size=64, shuffle=True) class BSNET_Conv(nn.Module): def __init__(self,): super(BSNET_Conv, self).__init__() self.conv1 = nn.Sequential( nn.Conv2d(200,64,(3,3),1,0), nn.ReLU(True)) self.conv1_1 = nn.Sequential( nn.Conv2d(200,128,(3,3),1,0), nn.ReLU(True)) self.conv1_2 = nn.Sequential( nn.Conv2d(128,64,(3,3),1,0), nn.ReLU(True)) self.deconv1_2 = nn.Sequential( nn.ConvTranspose2d(64,64,(3,3),1,0), nn.ReLU(True)) self.deconv1_1 = nn.Sequential( nn.ConvTranspose2d(64,128,(3,3),1,0), nn.ReLU(True)) self.conv2_1 = nn.Sequential( nn.Conv2d(128,200,(1,1),1,0), nn.Sigmoid()) self.fc1 = nn.Sequential( nn.Linear(64,128), nn.ReLU(True)) self.fc2 = nn.Sequential( nn.Linear(128,200), nn.Sigmoid()) def GlobalPool(self,feature_size): return nn.AvgPool2d(kernel_size=feature_size) 
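    # BAM (band attention module): conv -> global average pooling -> two fully
    # connected layers ending in a sigmoid, yielding one weight per spectral band;
    # forward() multiplies the input cube by these weights before reconstruction.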
def BAM(self,x): x = self.conv1(x) #print(x.shape) #x = torch.topk(x, k=1, dim=2)[0] #x = torch.topk(x, k=1, dim=3)[0] gp = self.GlobalPool(x.shape[2]) x = gp(x) x = x.T x = self.fc1(x) x = self.fc2(x) x = x.permute(2,3,0,1) return x def RecNet(self,x): x = self.conv1_1(x) #print('after conv1-1',x.shape) x = self.conv1_2(x) #print('after conv1-2',x.shape) x = self.deconv1_2(x) #print('after deconv1-2',x.shape) x = self.deconv1_1(x) #print('after deconv1-1',x.shape) x = self.conv2_1(x) #print('after conv2-1',x.shape) return x def forward(self,x): #print('before bam ',x.shape) BRW = self.BAM(x) x = x*BRW #print('after bam ',x.shape) ret = self.RecNet(x) return ret model = BSNET_Conv().to(device) optimizer = optim.SGD(model.parameters(), lr=0.002, momentum=0.9) def train(epoch): model.train() for batch_idx, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) # print(output.shape,target.shape) loss = F.l1_loss(output,data) loss.backward() optimizer.step() if batch_idx % 50 == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) def test(): with torch.no_grad(): model.eval() test_loss = 0 correct = 0 for data, target in test_loader: data, target = data.to(device), target.to(device) output = model(data) # sum up batch loss test_loss += F.mse_loss(output, target).item() # get the index of the max log-probability pred = output.max(1, keepdim=True)[1] correct += pred.eq(target.view_as(pred)).sum().item() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n' .format(test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) for epoch in range(1, 99 + 1): train(epoch) #test() """import matplotlib.pyplot as plt %matplotlib inline X, y = createImageCubes(X, y, windowSize=15) def plot(r): assert r<=10000 fig, axes = plt.subplots(32, 32, figsize=(20, 20)) itera = [*range(r)] for t,ax in zip(itera,axes.flatten()): ax.imshow(X[t,:,:,0]) plt.subplots_adjust(wspace=.5, hspace=.5) plot(1000)""" def convert_image_np(inp): """Convert a Tensor to numpy image.""" return inp.numpy() def visualize_tile(): with torch.no_grad(): # Get a batch of training data data = next(iter(train_loader))[0].to(device) input_tensor = data.cpu() transformed_input_tensor = model.RecNet(data).cpu() in_grid = convert_image_np( input_tensor) print(in_grid.shape) out_grid = convert_image_np( transformed_input_tensor) print(out_grid.shape) # Plot the results side-by-side f, axarr = plt.subplots(1, 2,figsize=(10,10)) axarr[0].imshow(in_grid[0,0,:,:],cmap='gnuplot') axarr[0].set_title('Dataset Images') axarr[1].imshow(out_grid[0,0,:,:],cmap='gnuplot') axarr[1].set_title('Transformed Images') visualize_tile() plt.ioff() plt.show() import spectral data_url , label_url = 'Indian_pines_corrected.mat' ,'Indian_pines_gt.mat' X = np.array(scipy.io.loadmat('/content/'+data_url.split('/')[-1])[data_url.split('/')[-1].split('.')[0].lower()]) y = np.array(scipy.io.loadmat('/content/'+label_url.split('/')[-1])[label_url.split('/')[-1].split('.')[0].lower()]) view = spectral.imshow(X,(30,20,100), classes=y) view.set_display_mode('overlay') view.class_alpha = 0.5 X.shape, y.shape ###Output _____no_output_____
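###Markdown BS-Nets is ultimately about band selection, but the notebook above trains the reconstruction network and visualizes the data without extracting a band ranking. The sketch below is one illustrative (not canonical) way to read a ranking off the trained BAM; it assumes the model, train_loader and device objects defined above and a full batch of 64 samples, the size the BAM layers were trained with. ###Code
# Rank spectral bands by their average BAM weight over one batch.
# Illustrative only: using the mean attention weight as the band score is an
# assumption, not something defined in the notebook above.
import torch

model.eval()
with torch.no_grad():
    data, _ = next(iter(train_loader))        # full batch of 64 samples
    data = data.to(device)
    brw = model.BAM(data)                     # band weights, shape (64, 200, 1, 1)
    scores = brw.mean(dim=0).squeeze()        # average weight per band, shape (200,)
    top_bands = torch.argsort(scores, descending=True)[:25]
print("Top-25 band indices:", top_bands.cpu().numpy())
###Output _____no_output_____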
generate_text_demo.ipynb
###Markdown IntroductionOur goal is to determine whether text generated by the inclusion of grammatical structure is better than text generated without that grammatical structure. Specifically, we want to know if adding part of speech vectors improves text generation. To accomplish this, we will train two models with the same architecture over the same dataset, one with only words and one with parts of speech included. The cells below will illustrate the results.Before running these cells, please read the instructions in README.md. In particular:1. acquire and preprocess data with `preprocess.py`1. adjust and train models with and without parts of speech with `train.py --include_pos` y and `train.py --include_pos n`Once training is complete and the model weights have been saved, we can generate text as seen below.The following code will generate 20 "sentences" for each of the two models, where a sentence is considered complete simply as soon as the RNN decides to output the "EN" token. ###Code import numpy as np import sys import os import json from tqdm import tqdm from keras.models import load_model from keras.models import model_from_json from keras import backend as K import pickle ########## UTILITY FUNCTIONS ########## def dist(x,y): s = 0 z = x-y for element in z: s += element**2 return np.sqrt(s) def closest(dictionary, vec): min_dist = 1000000000000 for key,val in dictionary.items(): v = np.array(val)[0] d = dist(v, vec) if d < min_dist: min_dist = d closest = key closest_vec = val return closest, np.array(closest_vec) ########## SET DIRECTORIES ########## DATA_DIR = os.path.join("data", "train", "cleaned") MAPPING_FILE = os.path.join("utils", "mapping.pkl") RNN_MODEL_POS = os.path.join("models", "rnn_model_pos.hdf5") RNN_MODEL_NO_POS = os.path.join("models", "rnn_model_no_pos.hdf5") NUM_POS_TAGS = 47 ########## IMPORT DATA ########## with open(MAPPING_FILE, 'rb') as f: mapping = pickle.load(f) ########## LOAD MODEL ########## ########## NO GRAMMAR VERSION ########## model_no_pos = load_model(RNN_MODEL_NO_POS) ########## GRAMMAR VERSION ########## model_pos = load_model(RNN_MODEL_POS) ###Output _____no_output_____ ###Markdown Generate Sentences without Part of Speech Vector ###Code ########## NO GRAMMAR ########## INCLUDE_POS = False # set up start token token = mapping['ST'] token = np.array(token) token = np.reshape(token, (1,) + token.shape) if INCLUDE_POS: final_shape = token.shape[-1] + NUM_POS_TAGS else: final_shape = token.shape[-1] tmp = np.zeros(shape=(1,1,final_shape)) tmp[0,0,:len(token[0,0])] = token[0,0,:] token = tmp noise = np.random.rand(token.shape[0], token.shape[1], token.shape[2]) noise /= 10 #small amount of noise en_count = 0 words = [] words.append('ST') ########## GENERATE WORDS ########## print('ST', end=' ') while en_count <= 20: out = model_no_pos.predict([token, noise]) # snap the network's prediction to the closest real word, and also # snap the network's prediction to the closest vector in our space # so that it predicts with real words as previous values closest_word, closest_vec = closest(mapping, out[0,0,:]) token = np.zeros(shape=out.shape) token[0,0,:] = closest_vec # fix shapes tmp = np.zeros(shape=(1,1,final_shape)) tmp[0,0,:len(out[0,0])] = out[0,0,:] out = tmp tmp = np.zeros(shape=(1,1,final_shape)) tmp[0,0,:len(token[0,0])] = token[0,0,:] token = tmp noise = np.random.rand(token.shape[0], token.shape[1], token.shape[2]) noise /= 10 words.append(closest_word) if closest_word == "EN": en_count += 1 print(closest_word) else: print(closest_word, 
end=' ') ###Output ST EN pickle. ST EN pickle. a into myself ST EN and a i'm ST EN pickle. a into myself ST EN do i ST EN pickle. a into myself ST EN pickle. i'm ST EN pickle! a into myself ST EN pickle. a into myself ST EN pickle. ST EN house. ST EN pickle. ST EN pickle. a into myself ST EN pickle. ST EN pickle. a into myself ST EN pickle. a into myself ST EN pickle! a into myself ST EN pickle. i'm ST EN pickle. a into myself ST EN pickle. ST EN ###Markdown ---We can see here that the network fails to associate ST with the start of the sentence, and the sentences are quite short and repetitive. Let's see how it does once the part of speech vectors are added! Generate Sentences with Part of Speech Vector ###Code ########## GRAMMAR ########## INCLUDE_POS = True # set up start token token = mapping['ST'] token = np.array(token) token = np.reshape(token, (1,) + token.shape) if INCLUDE_POS: final_shape = token.shape[-1] + NUM_POS_TAGS else: final_shape = token.shape[-1] tmp = np.zeros(shape=(1,1,final_shape)) tmp[0,0,:len(token[0,0])] = token[0,0,:] token = tmp noise = np.random.rand(token.shape[0], token.shape[1], token.shape[2]) noise /= 10 #small amount of noise en_count = 0 words = [] words.append('ST') ########## GENERATE WORDS ########## print('ST', end=' ') while en_count <= 20: out = model_pos.predict([token, noise]) # snap the network's prediction to the closest real word, and also # snap the network's prediction to the closest vector in our space # so that it predicts with real words as previous values closest_word, closest_vec = closest(mapping, out[0,0,:]) token = np.zeros(shape=out.shape) token[0,0,:] = closest_vec # fix shapes tmp = np.zeros(shape=(1,1,final_shape)) tmp[0,0,:len(out[0,0])] = out[0,0,:] out = tmp tmp = np.zeros(shape=(1,1,final_shape)) tmp[0,0,:len(token[0,0])] = token[0,0,:] token = tmp noise = np.random.rand(token.shape[0], token.shape[1], token.shape[2]) noise /= 10 words.append(closest_word) if closest_word == "EN": en_count += 1 print(closest_word) else: print(closest_word, end=' ') ###Output ST from it turns week who from it turns myself rick, don't science. heels] science. rick, don't i layers i-i'm which layers i-i'm which mind. [beth which from it did [beth from it did from it ooh, all EN from it ooh, all -- rick: EN from it ooh, all -- rick: mean, who from it boom! that's stop stop terrorism which stop about dark which mind. dark i-i'm which layers science. science. layers which stop about all -- -- family sweetie. which mind. [beth from it oh, all EN from it did [beth think think sweetie. layers science. rick, don't science. rick, don't science. science. rick, stop enter dark and [beth from it this layers that's stop about dark sweetie. which mind. [beth from it oh, all -- rick: mean, who from it turns magic i-i'm who beth: which mind. [beth from it did [beth from it pickle, all EN from it turns week who from it ooh, and [beth from it oh, can, from it for, layers myself rick, stop my EN from it turns myself layers which layers myself rick, don't i rick, don't science. science. science. science. rick, who from it did from it for who from it did [beth from it oh, all -- rick: who and [beth from it ooh, from it did [beth who from it did [beth from it oh, all EN from it for, can, from it pickle, all -- rick: mean, EN from it ooh, all -- rick: EN from it did [beth think who from it oh, all -- rick: mean, who from it did [beth who from it oh, dark sweetie. 
layers that's stop stop about all EN from it oh, which stop from it did from it did [beth from it ooh, all -- rick: EN from it pickles sweetie. layers science. science. rick, garage a layers myself science. layers i layers myself rick, don't science. science. rick, layers which upon from it ooh, all -- rick: who and [beth think and [beth who from it pickle, all -- rick: who from it did from it ooh, all EN from it the which mind. dark all -- rick: mean, who from it this all -- rick: mean, who from it did [beth who from it oh, sweetie. is which brains, never layers i-i'm EN from it the which mind. [beth from it turns week never all -- rick: who from it oh, and [beth from it this layers science. science. science. don't science. rick, which upon from it did [beth think don't science. science. rick, stop from it oh, dark sweetie. layers layers which mind. [beth which upon from it did [beth think sweetie. staring stop from it for, which upon from it did [beth which mind. dark by dark sweetie. -- rick: mean, who from it ooh, all -- rick: mean, i-i'm sweetie. is which layers science. rick, which stop about all EN from it did [beth think sweetie. is which counseling? stop about all -- rick: EN from it this can, from it ooh, all -- rick: who beth: who and [beth who from it boom! can, all EN from it did from it did [beth from it did [beth think sweetie. move? from it ooh, all -- rick: mean, who and [beth think from it ooh, which slipped which upon who from it oh, all EN from it oh, can, from it ooh, all -- rick: mean, who and [beth from it pickle, which layers which mind. dark sweetie. which slipped stop other stop from it ooh, and [beth from it ooh, all -- rick: who from it oh, all EN from it did [beth from it for, and [beth from it at has about all EN from it pickle, which mind. all -- rick: EN from it ooh, from it pickle, stop from it at has part which layers which mind. [beth from it did [beth from it ooh, all -- rick: mean, who and [beth from it did [beth from it ooh, dark all EN from it ooh, all -- rick: who and [beth from it did [beth think who and [beth from it ooh, all EN
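###Markdown A side note on speed: both generation loops above call closest(), which scans the whole mapping dictionary in a Python loop for every generated word. A drop-in alternative (a sketch only, assuming mapping maps each word to a (1, d) vector as used above, and that the query vector has the same length d) stacks the vectors once so a single broadcasted NumPy distance computation replaces the per-word loop. ###Code
# Vectorized nearest-word lookup: build the table once, then one broadcasted
# norm per query replaces the per-word Python loop in closest().
# Assumes `mapping` maps each word to a (1, d) vector, as in the cells above.
import numpy as np

vocab = list(mapping.keys())
table = np.vstack([np.array(mapping[w])[0] for w in vocab])   # shape (V, d)

def closest_fast(vec):
    # vec must have the same length d as the stored word vectors
    dists = np.linalg.norm(table - vec, axis=1)
    idx = int(np.argmin(dists))
    return vocab[idx], table[idx]
###Output _____no_output_____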
FD_Training.ipynb
###Markdown Installing Dependencies ###Code !pip install plotly ###Output _____no_output_____ ###Markdown Importing Dependencies ###Code import tensorflow as tf import numpy as np import pandas as pd import os import csv import datetime import matplotlib.pyplot as plt import shutil import plotly.express as px import seaborn as sn ###Output _____no_output_____ ###Markdown Downloading Processed Dataset from Google DriveWe are downloading the dataset we prepared the Data Organization Notebook of this project. We have already prepared the dataset and made it available on Google Drive. You can simply download it over the Google Server to Colab ###Code !gdown --id 1YV106cRSbZu6txcPdi2iMA8xDOfe8-6b #Contains all Sensor Data contrary to what we did in Previous Notebook #Unzipping the Dataset !unzip data_proc.zip !mv /content/content/data_proc /content/data_proc shutil.rmtree('/content/content') ###Output _____no_output_____ ###Markdown Data Reading FunctionsWe initially define some functions to be used later for reading the processed data. ###Code def list_full_path(dir): return [os.path.join(dir,os.path.splitext(fi)[0]) for fi in os.listdir(dir)] def read_into_array(filedir): #reads data into array. If file extension isn't displayed in filedir then its default to .dat ext=os.path.splitext(filedir)[1] if ext=='': ext='.dat' with open(os.path.splitext(filedir)[0]+ext,'r') as datfile : file_data=csv.reader(datfile,delimiter=',') data=list() for row in file_data: data.append([float(val) for val in row]) return data ###Output _____no_output_____ ###Markdown Data VisualizationWe are using Plotly, a library which allows us to give amazing pictorial representation of different types of data ###Code def ply_plot(datpath): dat=np.asarray(read_into_array(f'{datpath}.csv')) import plotly.graph_objects as go if not np.shape(dat)[1]<10: fig = go.Figure(data=go.Heatmap( z=dat, x=[r'$a_{x}$',r'$a_{y}$',r'$a_{z}$',r'$b$',r'$g_{x}$',r'$g_{y}$',r'$g_{z}$',r'$m_{x}$',r'$m_{y}$',r'$m_{z}$'], hoverongaps = False,colorbar={"title":'Sensor Reading'})) else: fig = go.Figure(data=go.Heatmap( z=dat, x=[r'$a_{x}$',r'$a_{y}$',r'$a_{z}$',r'$g_{x}$',r'$g_{y}$',r'$g_{z}$'], hoverongaps = False,colorbar={"title":'Sensor Reading'})) fig.update_layout( xaxis_title=r'$Sensor\ Labels$', yaxis_title=r'$Time\ Axis$',) fig.show() #Plotting Fall Data datpath=sorted(list_full_path('/content/data_proc/fall_files'))[10] ply_plot(datpath) #Plotting Activities of Daily Life (ADLs) Data datpath=sorted(list_full_path('/content/data_proc/not_fall_files'))[0] ply_plot(datpath) ###Output _____no_output_____ ###Markdown Data Loading ###Code #DONOT USE PANDAS. Issue with Reading Data. 
Reads one point less def csv_dataloader(Path,auto_balance=False,skip_wrist=False): ###### Parameters ####### # Path = Path to Data(Classes need to be split in different folders beforehand) # auto_balance = Balances Data by Trimming Extra Data # skip_wrist = Skips Samples from Wrist Measurements (as they are prone to errors) ######################### classes=len(os.listdir(Path)) #number of classes dirs=[os.path.join(Path,folds) for folds in os.listdir(Path)] data=[] lbls=[] file_lbl=0 min_len=0 len_hist=[] if auto_balance: for dir in dirs: data_files=sorted(list_full_path(dir)) if skip_wrist: data_files=[data_file for data_file in data_files if data_file[35]!='2'] len_hist.append(len(data_files)) min_len=min(len_hist) for dir in dirs: data_files=sorted(list_full_path(dir)) if skip_wrist: data_files=[data_file for data_file in data_files if data_file[35]!='2'] if auto_balance: data_files=data_files[:min_len-1] dir_data=[] dir_lbls=[] for data_file in data_files: file_data=np.expand_dims(np.asarray(read_into_array(f'{data_file}.csv')),2) dir_lbls.append(np.array(file_lbl)) dir_data.append(file_data) file_lbl+=1 data.append(np.array(dir_data)) lbls.append(np.array(dir_lbls)) data=np.concatenate(tuple(data),0) lbls=np.concatenate(tuple(lbls),0) return tf.data.Dataset.from_tensor_slices((data.astype(np.float32),lbls.astype(np.uint8))) # 0=Fall 1=No Fall loader=csv_dataloader('/content/data_proc/',False,False) #Dataset Characteristics loader.element_spec ###Output _____no_output_____ ###Markdown Splitting Data in Train,Validation and Test ###Code def splitds(dataset,train_ratio,val_ratio,test_ratio=0,shuffle=False,shuffle_buffer=5000): assert train_ratio+val_ratio+test_ratio==1 SEED=32768; if shuffle==True: dataset=dataset.shuffle(shuffle_buffer,seed=SEED) ds_size=len(dataset) train_ds=dataset.take(int(ds_size*train_ratio)) val_ds=dataset.skip(int(train_ratio*ds_size)).take(int(ds_size*val_ratio)) if not test_ratio==0: test_ds=dataset.skip(int((train_ratio+val_ratio)*ds_size)).take(int(ds_size*test_ratio)) return train_ds,val_ds,test_ds return train_ds,val_ds train_ds,val_ds,test_ds=splitds(loader,0.8,0.1,0.1,shuffle=True,shuffle_buffer=10000) print(len(train_ds)) print(len(val_ds)) print(len(test_ds)) ###Output _____no_output_____ ###Markdown Model Training ###Code model=tf.keras.Sequential( [tf.keras.layers.Conv2D(16,(3,2),activation='relu',input_shape=(952,10,1)), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Conv2D(32,(3,3),activation='relu'), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Conv2D(64,(3,1),activation='relu'), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Conv2D(128,(3,1),activation='relu'), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Conv2D(256,(3,1),activation='relu'), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(1,activation='sigmoid') ]) model.summary() model.compile(optimizer=tf.optimizers.Adam(0.0005),loss='binary_crossentropy',metrics=['accuracy','FalsePositives','FalseNegatives']) %load_ext tensorboard !rm -rf ./logs/ log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) lr_callback=tf.keras.callbacks.ReduceLROnPlateau('val_loss',factor=0.1,patience=2) with tf.device('/device:GPU:0'): history=model.fit(train_ds.batch(1),validation_data=val_ds.batch(1),epochs=10,callbacks=[tensorboard_callback,lr_callback]) %tensorboard --logdir logs/fit ###Output _____no_output_____ ###Markdown Model PredictionsWe are 
running predictions on our trained model so we can quantify it accuracy on unseen data ###Code #0 = fall , 1 = No Fall preds=model.predict(test_ds.batch(1)) pred_lbls=np.squeeze(np.where(preds>0.6,1,0)) pred_lbls truth_lbls = np.concatenate([y for x, y in test_ds.batch(1)], axis=0) np.asarray(tf.math.confusion_matrix(truth_lbls,pred_lbls,2)).astype(np.int16) #bininopaul https://stackoverflow.com/a/35572247/13223282 cm=np.asarray(tf.math.confusion_matrix(truth_lbls,pred_lbls,2)) df_cm = pd.DataFrame(cm, index = [i for i in ['Fall','Not Fall']], columns = [i for i in ['Fall','Not Fall']]) plt.figure(figsize = (10,7)) sn.heatmap(df_cm, annot=True,fmt='g') ###Output _____no_output_____ ###Markdown Loading and Training using only Two Sensors DataLet's try using only accelerometer and gyroscope data ###Code #Accelrometer and Gyro Only !gdown --id 1dICsvfTAV0ZDDNHVR6sYwix-NgyQDPwB !unzip data_proc_AG.zip !mv /content/content/data_proc_AG /content/data_proc_AG shutil.rmtree('/content/content') #Plot Fall Data datpath=sorted(list_full_path('/content/data_proc_AG/fall_files'))[10] ply_plot(datpath) #Plot ADLs datpath=sorted(list_full_path('/content/data_proc/not_fall_files'))[0] ply_plot(datpath) # 0=Fall 1=No Fall loader=csv_dataloader('/content/data_proc_AG/',False,False) loader.element_spec train_ds,val_ds,test_ds=splitds(loader,0.8,0.1,0.1,shuffle=True,shuffle_buffer=10000) print(len(train_ds)) print(len(val_ds)) print(len(test_ds)) model=tf.keras.Sequential( [tf.keras.layers.Conv2D(8,(3,3),strides=(1,3),activation='relu',input_shape=(952,6,1)), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Conv2D(16,(3,2),activation='relu'), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Conv2D(32,(3,1),activation='relu'), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Conv2D(64,(3,1),activation='relu'), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Conv2D(128,(3,1),activation='relu'), tf.keras.layers.MaxPool2D((2,1)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(32,activation='relu'), tf.keras.layers.Dense(16,activation='relu'), tf.keras.layers.Dense(8,activation='relu'), tf.keras.layers.Dense(1,activation='sigmoid') ]) model.summary() model.compile(optimizer=tf.optimizers.Adam(0.001),loss='binary_crossentropy',metrics=['accuracy','FalsePositives','FalseNegatives']) %load_ext tensorboard !rm -rf ./logs/ log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) lr_callback=tf.keras.callbacks.ReduceLROnPlateau('val_loss',factor=0.1,patience=1) with tf.device('/device:GPU:0'): history=model.fit(train_ds.batch(1),validation_data=val_ds.batch(1),epochs=10,callbacks=[tensorboard_callback,lr_callback]) %tensorboard --logdir logs/fit #0 = fall , 1 = No Fall truth_lbls=[] tpred_lbls=[] for input,lbl in test_ds.batch(1): #input=tf.expand_dims(x[0],0) tpred_lbls.append(model.predict(input)) truth_lbls.append(lbl) truth_lbls=np.asarray(truth_lbls) tpred_lbls=np.asarray(tpred_lbls) tpred_lbls=np.squeeze(np.squeeze(tpred_lbls,1)) pred_lbls=np.where(tpred_lbls>=0.5,1,0) # Plotting the Confusion Matrix cm=np.asarray(tf.math.confusion_matrix(truth_lbls,pred_lbls,2)) df_cm = pd.DataFrame(cm, index = [i for i in ['Fall','Not Fall']], columns = [i for i in ['Fall','Not Fall']]) plt.figure(figsize = (10,7)) sn.heatmap(df_cm, annot=True,fmt='g') ###Output _____no_output_____ ###Markdown Now we have trained our model on both datasets. Observe which one performs better by viewing the confusion matrix. 
Our model is now ready to be optimized and converted to FlatBuffer format to be used in Arduino. Converting to TensorFlow Lite Generating a Representative Dataset for Quantization ###Code def representative_ds_gen(): fall_files=list_full_path('/content/data_proc_AG/fall_files') not_fall_files=list_full_path('/content/data_proc_AG/not_fall_files') shuffled=fall_files+not_fall_files for fno in range(len(shuffled)): data=np.expand_dims(np.expand_dims(np.asarray(read_into_array(f'{shuffled[fno]}.csv')),2),0) yield([data.astype(np.float32)]) ###Output _____no_output_____ ###Markdown Applying Quantization and converting the model to TensorFlow Lite Format ###Code converter=tf.lite.TFLiteConverter.from_keras_model(model) tflite_no_quant=converter.convert() converter.representative_dataset=representative_ds_gen converter.optimizations=[tf.lite.Optimize.DEFAULT] tflite_model=converter.convert() curr_date=datetime.datetime.now().strftime("%Y%m%d") MODELS_DIR = '/content/models/' if not os.path.exists(MODELS_DIR): os.mkdir(MODELS_DIR) with open(f'/content/models/model_no_quant.tflite','wb') as f: f.write(tflite_no_quant) with open(f'/content/models/model_quant.tflite','wb') as f: f.write(tflite_model) MODELS_DIR ='/content/models/' if not os.path.exists(MODELS_DIR): os.mkdir(MODELS_DIR) MODEL_TFLITE = MODELS_DIR + 'model_quant.tflite' MODEL_TFLITE_MICRO = MODELS_DIR + 'model.cc' # Install xxd if it is not available !apt-get update && apt-get -qq install xxd # Convert to a C source file, i.e, a TensorFlow Lite for Microcontrollers model !xxd -i {MODEL_TFLITE} > {MODEL_TFLITE_MICRO} # Update variable names REPLACE_TEXT = MODEL_TFLITE.replace('/', '_').replace('.', '_') !sed -i 's/'{REPLACE_TEXT}'/g_model/g' {MODEL_TFLITE_MICRO} #Displays model data in Flatbuffer Format !cat {MODEL_TFLITE_MICRO} ###Output _____no_output_____ ###Markdown Test Quantized Model ###Code def run_tflite_model(tflite_file, test_image_indices): global test_images # Initialize the interpreter interpreter = tf.lite.Interpreter(model_path=str(tflite_file)) interpreter.allocate_tensors() input_details = interpreter.get_input_details()[0] output_details = interpreter.get_output_details()[0] print('Input Details',input_details['dtype']) print('Output Details',output_details['dtype']) predictions = np.zeros((len(test_image_indices),), dtype=int) for i, test_image_index in enumerate(test_image_indices): test_image = test_images[test_image_index] test_label = test_labels[test_image_index] # Check if the input type is quantized, then rescale input data to uint8 if input_details['dtype'] == np.uint8: input_scale, input_zero_point = input_details["quantization"] test_image = test_image / input_scale + input_zero_point test_image = np.expand_dims(test_image, axis=0)#.astype(input_details["dtype"]) interpreter.set_tensor(input_details["index"], test_image) interpreter.invoke() output = interpreter.get_tensor(output_details["index"])[0] predictions[i] = (output > 0.6) return predictions # Helper function to evaluate a TFLite model on all images def evaluate_model(tflite_file, model_type): global test_images global test_labels test_image_indices = range(test_images.shape[0]) predictions = run_tflite_model(tflite_file, test_image_indices) print(predictions) accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images) cm=np.asarray(tf.math.confusion_matrix(test_labels,predictions,2)) df_cm = pd.DataFrame(cm, index = [i for i in ['Fall','Not Fall']], columns = [i for i in ['Fall','Not Fall']]) plt.figure(figsize = (10,7)) 
sn.heatmap(df_cm, annot=True,fmt='g') print('%s model accuracy is %.4f%% (Number of test samples=%d)' % ( model_type, accuracy, len(test_images))) test_images=[] test_labels=[] for test_image,test_label in test_ds: test_images.append(test_image) test_labels.append(test_label) test_images=np.asarray(test_images) test_labels=np.asarray(test_labels) evaluate_model('/content/models/model_no_quant.tflite','Unquantized') evaluate_model('/content/models/model_quant.tflite','Quantized') ###Output _____no_output_____
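###Markdown A quick way to see what post-training quantization bought us (a sketch using only the two files written above) is to compare the on-disk sizes of the float and quantized FlatBuffers. ###Code
# Compare the on-disk size of the float and quantized TFLite models saved above.
import os

size_float = os.path.getsize('/content/models/model_no_quant.tflite')
size_quant = os.path.getsize('/content/models/model_quant.tflite')
print("Float model:     %.1f KB" % (size_float / 1024))
print("Quantized model: %.1f KB" % (size_quant / 1024))
print("Size reduction:  %.1fx" % (size_float / size_quant))
###Output _____no_output_____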
Notebooks/old/memory_analysis.ipynb
###Markdown Main code ###Code # Things for Google Drive from google.colab import drive drive.mount('/content/drive', force_remount=True) cd drive/MyDrive/MacAI # Install a dependency !pip install torchsummaryX from lib import medzoo import numpy as np import torch import matplotlib.pyplot as plt from lib import losses3D from torch.utils.checkpoint import checkpoint_sequential from skimage import transform import nibabel as nb using_TPU = False if using_TPU: import torch_xla import torch_xla.core.xla_model as xm # Get bounding box of a 3D image, shamelessly stolen from the following link. # There's probably actually a way to rotate an image to fit into the smallest # bounding box possible based on scipy.optimize, but whatever # https://stackoverflow.com/questions/31400769/bounding-box-of-numpy-array def bbox2_3D(img): r = np.any(img, axis=(1, 2)) c = np.any(img, axis=(0, 2)) z = np.any(img, axis=(0, 1)) rmin, rmax = np.where(r)[0][[0, -1]] cmin, cmax = np.where(c)[0][[0, -1]] zmin, zmax = np.where(z)[0][[0, -1]] return rmin, rmax, cmin, cmax, zmin, zmax # Load a file and apply brain mask (obtained using FSMRIB library) # then crop image seg = nb.load('files/segmentation.nii.gz').get_fdata() t1 = nb.load('files/T1.nii').get_fdata()/1000 brain_mask = nb.load('files/T1_mask.nii.gz').get_fdata() t1[brain_mask==0] = 0 xmin, xmax, ymin, ymax, zmin, zmax = bbox2_3D(t1) t1 = t1[xmin:xmax, ymin:ymax]#, zmin:zmax] seg = seg[xmin:xmax, ymin:ymax]#, zmin:zmax] plt.imshow(t1[220]) print('Space conserving factor of ' + str(round((512*512*320)/np.prod(t1.shape), 4)) + ' by brain masking and cropping') # This shouldn't be a thing in the final model, but U-net complains # if each dimension isn't divisible by something like 16 or 32 # This should be easily fixable by changing the padding in the UNet3D t1 = transform.resize(t1, [320, 400, 320]) seg = transform.resize(seg, [320, 400, 320]) # More code shamelessly stolen from stack overflow, makes pytorch use GPU if using_TPU: device = xm.xla_device() else: device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') if device.type != 'cpu': print(torch.cuda.get_device_name(0)) print('Allocated:', round(torch.cuda.memory_allocated()/1024**3,1), 'GB') print('Reserved:', round(torch.cuda.memory_reserved()/1024**3,1), 'GB') print('Using device:', device) print() # So (a single) raw image series are about 8-16x too big to fit using batch size 1 # The optimizations that I've identified here that we need are: # 16-bit precision (or less, but Pytorch doesn't support it): Reduction by factor 2 # Brain masking and cropping the brain - in this case, reduction by a factor of 2 # Making the network have 2x less filters works, I guess the true amount of reduction was less than 16x # Gradient checkpointing gives marginal improvements, but not the massive ones we were looking for # This is probably because most of the memory is in the first (largest) layer # Next steps # checkpoint_sequential is weird # Also, you'd still have to reduce the high number of image channels to one or two channels t1_tensor = torch.Tensor(np.expand_dims(np.expand_dims(t1, axis=0), axis=0)).half().cuda()#to(device) seg_tensor = torch.Tensor(np.expand_dims(np.expand_dims(seg, axis=0), axis=0)).half().cuda()#to(device) t1_tensor.requires_grad = True seg_tensor.requires_grad = True import torch.nn as nn import torch from torchsummary import summary import torchsummaryX from lib.medzoo.BaseModelClass import BaseModel from torch.utils.checkpoint import checkpoint, checkpoint_sequential class 
UNet3D(BaseModel): """ Implementations based on the Unet3D paper: https://arxiv.org/abs/1606.06650 """ def __init__(self, in_channels, n_classes, base_n_filter=8): super(UNet3D, self).__init__() self.in_channels = in_channels self.n_classes = n_classes self.base_n_filter = base_n_filter self.lrelu = nn.LeakyReLU() self.dropout3d = nn.Dropout3d(p=0.6) self.upsacle = nn.Upsample(scale_factor=2, mode='nearest') self.softmax = nn.Softmax(dim=1) self.conv3d_c1_1 = nn.Conv3d(self.in_channels, self.base_n_filter, kernel_size=3, stride=1, padding=1, bias=False) self.conv3d_c1_2 = nn.Conv3d(self.base_n_filter, self.base_n_filter, kernel_size=3, stride=1, padding=1, bias=False) self.lrelu_conv_c1 = self.lrelu_conv(self.base_n_filter, self.base_n_filter) self.inorm3d_c1 = nn.InstanceNorm3d(self.base_n_filter) self.conv3d_c2 = nn.Conv3d(self.base_n_filter, self.base_n_filter * 2, kernel_size=3, stride=2, padding=1, bias=False) self.norm_lrelu_conv_c2 = self.norm_lrelu_conv(self.base_n_filter * 2, self.base_n_filter * 2) self.inorm3d_c2 = nn.InstanceNorm3d(self.base_n_filter * 2) self.conv3d_c3 = nn.Conv3d(self.base_n_filter * 2, self.base_n_filter * 4, kernel_size=3, stride=2, padding=1, bias=False) self.norm_lrelu_conv_c3 = self.norm_lrelu_conv(self.base_n_filter * 4, self.base_n_filter * 4) self.inorm3d_c3 = nn.InstanceNorm3d(self.base_n_filter * 4) self.conv3d_c4 = nn.Conv3d(self.base_n_filter * 4, self.base_n_filter * 8, kernel_size=3, stride=2, padding=1, bias=False) self.norm_lrelu_conv_c4 = self.norm_lrelu_conv(self.base_n_filter * 8, self.base_n_filter * 8) self.inorm3d_c4 = nn.InstanceNorm3d(self.base_n_filter * 8) self.conv3d_c5 = nn.Conv3d(self.base_n_filter * 8, self.base_n_filter * 16, kernel_size=3, stride=2, padding=1, bias=False) self.norm_lrelu_conv_c5 = self.norm_lrelu_conv(self.base_n_filter * 16, self.base_n_filter * 16) self.norm_lrelu_upscale_conv_norm_lrelu_l0 = self.norm_lrelu_upscale_conv_norm_lrelu(self.base_n_filter * 16, self.base_n_filter * 8) self.conv3d_l0 = nn.Conv3d(self.base_n_filter * 8, self.base_n_filter * 8, kernel_size=1, stride=1, padding=0, bias=False) self.inorm3d_l0 = nn.InstanceNorm3d(self.base_n_filter * 8) self.conv_norm_lrelu_l1 = self.conv_norm_lrelu(self.base_n_filter * 16, self.base_n_filter * 16) self.conv3d_l1 = nn.Conv3d(self.base_n_filter * 16, self.base_n_filter * 8, kernel_size=1, stride=1, padding=0, bias=False) self.norm_lrelu_upscale_conv_norm_lrelu_l1 = self.norm_lrelu_upscale_conv_norm_lrelu(self.base_n_filter * 8, self.base_n_filter * 4) self.conv_norm_lrelu_l2 = self.conv_norm_lrelu(self.base_n_filter * 8, self.base_n_filter * 8) self.conv3d_l2 = nn.Conv3d(self.base_n_filter * 8, self.base_n_filter * 4, kernel_size=1, stride=1, padding=0, bias=False) self.norm_lrelu_upscale_conv_norm_lrelu_l2 = self.norm_lrelu_upscale_conv_norm_lrelu(self.base_n_filter * 4, self.base_n_filter * 2) self.conv_norm_lrelu_l3 = self.conv_norm_lrelu(self.base_n_filter * 4, self.base_n_filter * 4) self.conv3d_l3 = nn.Conv3d(self.base_n_filter * 4, self.base_n_filter * 2, kernel_size=1, stride=1, padding=0, bias=False) self.norm_lrelu_upscale_conv_norm_lrelu_l3 = self.norm_lrelu_upscale_conv_norm_lrelu(self.base_n_filter * 2, self.base_n_filter) self.conv_norm_lrelu_l4 = self.conv_norm_lrelu(self.base_n_filter * 2, self.base_n_filter * 2) self.conv3d_l4 = nn.Conv3d(self.base_n_filter * 2, self.n_classes, kernel_size=1, stride=1, padding=0, bias=False) self.ds2_1x1_conv3d = nn.Conv3d(self.base_n_filter * 8, self.n_classes, kernel_size=1, stride=1, padding=0, 
bias=False) self.ds3_1x1_conv3d = nn.Conv3d(self.base_n_filter * 4, self.n_classes, kernel_size=1, stride=1, padding=0, bias=False) self.sigmoid = nn.Sigmoid() def conv_norm_lrelu(self, feat_in, feat_out): return nn.Sequential( nn.Conv3d(feat_in, feat_out, kernel_size=3, stride=1, padding=1, bias=False), nn.InstanceNorm3d(feat_out), nn.LeakyReLU()) def norm_lrelu_conv(self, feat_in, feat_out): return nn.Sequential( nn.InstanceNorm3d(feat_in), nn.LeakyReLU(), nn.Conv3d(feat_in, feat_out, kernel_size=3, stride=1, padding=1, bias=False)) def lrelu_conv(self, feat_in, feat_out): return nn.Sequential( nn.LeakyReLU(), nn.Conv3d(feat_in, feat_out, kernel_size=3, stride=1, padding=1, bias=False)) def norm_lrelu_upscale_conv_norm_lrelu(self, feat_in, feat_out): return nn.Sequential( nn.InstanceNorm3d(feat_in), nn.LeakyReLU(), nn.Upsample(scale_factor=2, mode='nearest'), # should be feat_in*2 or feat_in nn.Conv3d(feat_in, feat_out, kernel_size=3, stride=1, padding=1, bias=False), nn.InstanceNorm3d(feat_out), nn.LeakyReLU()) def forward(self, x): # Level 1 context pathway out = checkpoint(self.conv3d_c1_1, x) residual_1 = out out = self.lrelu(out) out = checkpoint(self.conv3d_c1_2, out) out = self.dropout3d(out) out = checkpoint(self.lrelu_conv_c1, out) # Element Wise Summation out += residual_1 context_1 = self.lrelu(out) out = self.inorm3d_c1(out) out = self.lrelu(out) # Level 2 context pathway out = checkpoint(self.conv3d_c2, out) residual_2 = out out = checkpoint(self.norm_lrelu_conv_c2, out) out = self.dropout3d(out) out = checkpoint(self.norm_lrelu_conv_c2, out) out += residual_2 out = self.inorm3d_c2(out) out = self.lrelu(out) context_2 = out # Level 3 context pathway out = checkpoint(self.conv3d_c3, out) residual_3 = out out = checkpoint(self.norm_lrelu_conv_c3,out) out = self.dropout3d(out) out = checkpoint(self.norm_lrelu_conv_c3,out) out += residual_3 out = self.inorm3d_c3(out) out = self.lrelu(out) context_3 = out # Level 4 context pathway out = checkpoint(self.conv3d_c4, out) residual_4 = out out = checkpoint(self.norm_lrelu_conv_c4,out) out = self.dropout3d(out) out = checkpoint(self.norm_lrelu_conv_c4,out) out += residual_4 out = self.inorm3d_c4(out) out = self.lrelu(out) context_4 = out # Level 5 out = checkpoint(self.conv3d_c5, out) residual_5 = out out = checkpoint(self.norm_lrelu_conv_c5,out) out = self.dropout3d(out) out = checkpoint(self.norm_lrelu_conv_c5,out) out += residual_5 out = checkpoint(self.norm_lrelu_upscale_conv_norm_lrelu_l0,out) out = checkpoint(self.conv3d_l0, out) out = self.inorm3d_l0(out) out = self.lrelu(out) # Level 1 localization pathway out = torch.cat([out, context_4], dim=1) out = checkpoint(self.conv_norm_lrelu_l1, out) out = checkpoint(self.conv3d_l1,out) out = checkpoint(self.norm_lrelu_upscale_conv_norm_lrelu_l1,out) # Level 2 localization pathway # print(out.shape) # print(context_3.shape) out = torch.cat([out, context_3], dim=1) out = checkpoint(self.conv_norm_lrelu_l2, out) ds2 = out out = checkpoint(self.conv3d_l2,out) out = checkpoint(self.norm_lrelu_upscale_conv_norm_lrelu_l2,out) # Level 3 localization pathway out = torch.cat([out, context_2], dim=1) out = checkpoint(self.conv_norm_lrelu_l3,out) ds3 = out out = checkpoint(self.conv3d_l3,out) out = checkpoint(self.norm_lrelu_upscale_conv_norm_lrelu_l3,out) # Level 4 localization pathway out = torch.cat([out, context_1], dim=1) out = checkpoint(self.conv_norm_lrelu_l4,out) out_pred = checkpoint(self.conv3d_l4,out) ds2_1x1_conv = checkpoint(self.ds2_1x1_conv3d, ds2) ds1_ds2_sum_upscale = 
self.upsacle(ds2_1x1_conv) ds3_1x1_conv = checkpoint(self.ds3_1x1_conv3d,ds3) ds1_ds2_sum_upscale_ds3_sum = ds1_ds2_sum_upscale + ds3_1x1_conv ds1_ds2_sum_upscale_ds3_sum_upscale = self.upsacle(ds1_ds2_sum_upscale_ds3_sum) out = out_pred + ds1_ds2_sum_upscale_ds3_sum_upscale seg_layer = out return seg_layer def test(self,device='cpu'): input_tensor = torch.rand(1, 2, 32, 32, 32) ideal_out = torch.rand(1, self.n_classes, 32, 32, 32) out = self.forward(input_tensor) assert ideal_out.shape == out.shape summary(self.to(torch.device(device)), (2, 32, 32, 32),device='cpu') # import torchsummaryX # torchsummaryX.summary(self, input_tensor.to(device)) print("Unet3D test is complete") print(t1_tensor.shape) # If base_n_filter=6 using default medzoo.UNet3D, goes OOM # but if base_n_filter=6 using checkpointed medzoo.UNet3D, works # Definitely room for improvement, it is quite slow as implemented here unet = UNet3D(1,1, base_n_filter=6).cuda() unet.half() print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB') optimizer = torch.optim.Adam(unet.parameters()) # Pixel-wise CE gives some weird shape error with "weights" loss_function = losses3D.WeightedSmoothL1Loss() for i in range(1): outputs = unet(t1_tensor) loss = loss_function(outputs, seg_tensor) optimizer.zero_grad() loss.backward() optimizer.step() print(loss) #from lib.medzoo.HighResNet3D import HighResNet3D #unet = HighResNet3D(1,1).cuda()#to(device) #unet.half() #print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB') #optimizer = torch.optim.Adam(unet.parameters()) # Pixel-wise CE gives some weird shape error with "weights" #modules = [module for k, module in unet._modules.items()] #loss_function = losses3D.WeightedSmoothL1Loss() #for i in range(1): # outputs = checkpoint_sequential(modules, 4, t1_tensor) #outputs = unet(t1_tensor) # loss = loss_function(outputs, seg_tensor) # print(loss) # optimizer.zero_grad() # loss.backward() # optimizer.step() # Wait this experiment makes no sense #import sys #import os #models = [ #medzoo.Unet3D.UNet3D(1, 1), #medzoo.HighResNet3D(1, 1), #medzoo.DenseVoxelNet(1,1 ), #medzoo.ResNet3D_VAE.ResNet3dVAE(1,1 ), #medzoo.Vnet.VNetLight(1,1), #medzoo.SkipDenseNet3D.SkipDenseNet3D(1,1) #] #if not os.path.exists('model_sizes'): # os.mkdir('model_sizes') #types = ['Unet', 'ResNet', 'VoxelNet', 'ResNetVAE','VNet'] #for i, model in enumerate(models): # model.eval() # torch.save(model.state_dict(), 'model_sizes/'+types[i]+'.pt') # Every single network crashes even using 1 series # Also some might have problems with image sizes that aren't multiples of 2, idk # Crashes list # Unet3D.UNet3D(1, 1) # medzoo.HighResNet3D(1, 1) # medzoo.DenseVoxelNet(1,1 ) # medzoo.ResNet3D_VAE.ResNet3dVAE(1,1 ) # VNetLight # SkipDenseNet3D # doesn't work for 1 channel # medzoo.HyperDenseNet_2Mod(1, 1) # medzoo.ResNet3DMedNet.ResNetMed3D(1,1) # still crashes with TPU which has 64 GB HBM # Just stuff for TPUs, but didn't seem to work #!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py #!python pytorch-xla-env-setup.py ###Output _____no_output_____
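###Markdown Possible next step (an addition, not part of the experiments above): `torch.cuda.amp` automatic mixed precision is an alternative to manually calling `.half()` on the model and tensors; it keeps master weights in float32 and runs eligible ops in float16. The sketch below assumes `unet`, `t1_tensor`, `seg_tensor` and `loss_function` are created as above but *without* the `.half()` calls. ###Code
# Hedged sketch of automatic mixed precision instead of manual .half() casting.
# Assumes unet, t1_tensor, seg_tensor, loss_function exist in float32 on the GPU.
scaler = torch.cuda.amp.GradScaler()
optimizer = torch.optim.Adam(unet.parameters())

for i in range(1):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():              # eligible ops run in float16
        outputs = unet(t1_tensor)
        loss = loss_function(outputs, seg_tensor)
    scaler.scale(loss).backward()                # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)                       # unscales gradients, then steps
    scaler.update()
print(loss)
###Output
_____no_output_____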
files/DensityEstimation/LaplaceApproximation.ipynb
###Markdown The Laplace ApproximationThe Laplace approximation is a widely used framework that finds a Gaussian approximation to a probability density defined over a set of continuous variables. It is especially useful when applying Bayesian principles to logistic regression, where computing the integral of the posterior distribution becomes intractable.![Laplace Approximation](../../img/LaplaceApproximation.png) Basic IdeaConsider a continuous random variable $z \in \mathcal{R}^D$ with probability distribution given by $p(z) = \frac{1}{Z}f(z)$ where $Z = \int{f(z) dz}$ is the normalizing constant and need not be known.In the Laplace approximation, the goal is to `find a Gaussian distribution q(z) centered on a mode of p(z)`. The mode can be computed by determining the value $z=z_0$ where $\frac{dp(z)}{dz} = 0$.Note that if $p(z)$ is `multi-modal`, the Laplace approximation is only precise in the neighborhood of one of its many modes.Let $q(z) \sim \mathcal{N}(z_0,A^{-1})$ where $A$ is the precision matrix. Note: the precision matrix is the inverse of the covariance matrix and is often employed for computational reasons.$$ \begin{align} q_z &= \frac{\sqrt{|A|}}{(2\pi)^{D/2}} \exp \{-\frac{1}{2}(z-z_0)^T A (z-z_0)\} \\ \Rightarrow \ln{q_z} &= \frac{1}{2} \left(\ln{|A|} - D \ln{2\pi}\right) - \frac{1}{2}(z-z_0)^T A(z-z_0)\end{align}$$The quadratic term matches the second-order Taylor expansion of $\ln f(z)$ around the mode, $\ln f(z) \simeq \ln f(z_0) - \frac{1}{2}(z-z_0)^T A (z-z_0)$, which holds at a mode where $\frac{d \ln p(z)}{dz}\big\vert_{z=z_0} = 0$ and the Hessian satisfies $\frac{d^2 \ln p(z)}{dz^2}\big\vert_{z=z_0} = -A$ with $A$ positive definite.In summary, the Laplace approximation involves evaluating the mode $z_0$ and the Hessian $A$ at $z_0$. So if $f(z)$ has an intractable normalizing constant but an analytical form, the mode can be found by some form of numerical optimization algorithm. Note that the normalization constant $Z$ does not need to be known to apply this method. ExampleThis example demonstrates the Laplace approximation and is adapted from Figure 4.14 in [1].Suppose $p(z) \propto \sigma(20z+4) \exp{\left(\frac{-z^2}{2}\right)}$ where $\sigma(\cdot)$ is the sigmoid function. This form is very common in classification problems and serves as a good practical example.To compute the mode $z_0$ and the Hessian $-A$,$$ \begin{align} \frac{d}{dz}\ln p_z &\propto \frac{d}{dz}\ln \sigma(\cdot) + \frac{d}{dz}\ln \exp{\left(\frac{-z^2}{2}\right)} \\&= 20 (1-\sigma(\cdot)) - z \\&= 0 \text{ iff } z_0 = 20(1-\sigma(20 z_0 + 4))\end{align}$$The above expression for determining $z_0$ is nonlinear and can be solved by Newton's method.Let $y(z_0) = z_0 - 20(1-\sigma(20 z_0 + 4))$. To find $z_0$ such that $y=0$, we start with an initial guess $z_{0,0}$ and iterate the following equation until convergence.$z_{0,k+1} = z_{0,k} - \left(y'(z_{0,k})\right)^{-1} y(z_{0,k})$. 
The convergence criteria can be either set to a fixed maximum number of iterations or till $|z_{0,k+1} - z_{0,k}| \le \epsilon$ for some small $\epsilon$.The Hessian is expressed as:$$ \begin{align} \frac{d^2}{dz^2}\ln p_z &\propto \frac{d}{dz}\frac{d}{dz}\ln p_z \\&= -400\sigma(\cdot)(1-\sigma(\cdot)) - 1 \\\Rightarrow A &= -\Bigg(\frac{d^2}{dz^2}\ln p_z\Bigg)\Bigg\vert_{z=z_0} = 400\sigma(20 z_0 + 4)(1-\sigma(20 z_0 + 4)) + 1\end{align}$$ ###Code import numpy as np from scipy.integrate import trapz from scipy.stats import norm import matplotlib.pyplot as plt import matplotlib # matplotlib.rcParams['text.usetex'] = True # matplotlib.rcParams['text.latex.unicode'] = True %matplotlib inline def sigmoid(x): den = 1.0+np.exp(-x) return 1.0/den def p_z(z): p = np.exp(-np.power(z,2)/2)*sigmoid(20*z+4) sum_p = trapz(p,z) ## normalize for plotting return p,p/sum_p def findMode(z_init,max_iter = 25,tol = 1E-6): iter = 0 z_next = np.finfo('d').max z_cur = z_init while (iter < max_iter and np.abs(z_next-z_cur) > tol): if iter > 0: z_cur = z_next y = z_cur - 20*(1-sigmoid(20*z_cur+4)) der_y = 1 + 400*sigmoid(20*z_cur+4)*(1-sigmoid(20*z_cur+4)) z_next = z_cur - y/der_y iter = iter+1 # print("Iter-"+str(iter)+":"+str(z_next)) return z_next def getHessian(z): sig_x = sigmoid(20*z+4) return 400*sig_x*(1-sig_x) + 1 z = np.linspace(-10,10,10000) pz,pzn = p_z(z) ## Mode & Precision matrix z0 = findMode(0) A = getHessian(z0) z0_idx = np.where(np.abs(z-z0) == np.min(np.abs(z-z0)))[0] p_z0 = pzn[z0_idx] dp = np.gradient(pzn,z[1]-z[0]) d2p = np.gradient(dp,z[1]-z[0]) ## Get approx Gaussian distribution q_z = norm.pdf(z, z0, 1/np.sqrt(A)) fig,ax = plt.subplots(1,1,figsize=(4,3)) ax.cla() ax.plot(z,pzn,color="orange") ax.fill_between(z,pzn, 0, facecolor="orange", # The fill color color='orange', # The outline color alpha=0.2) # Transparency of the fill #ax.axvline(x=z0)#,ylim=0,ymax=0.7) ax.vlines(z0, ymin=0, ymax=p_z0,linestyles='dotted') ax.plot(z,q_z,'r') ax.set_xlim([-2,4]); ax.set_ylim([0,0.8]); ax.set_yticks([0,0.2,0.4,0.6,0.8]); ax.legend(['p_z','N('+str(np.round(z0,4))+','+str(np.round(1/np.sqrt(A),3))+')']) ax.set_title('p(z) with its Laplace Approximation'); ###Output _____no_output_____
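###Markdown As a quick sanity check (an addition, not part of the original notebook), the Laplace approximation also yields an estimate of the normalizing constant: in one dimension, $Z \simeq f(z_0)\sqrt{2\pi/A}$. The cell below compares this against trapezoidal integration of the unnormalized density, reusing `z`, `z0`, `A`, `sigmoid` and `trapz` from above. ###Code
# Hedged sketch: compare the Laplace estimate of Z with numerical integration.
f_z0 = np.exp(-z0**2 / 2) * sigmoid(20 * z0 + 4)        # unnormalized density at the mode
Z_laplace = f_z0 * np.sqrt(2 * np.pi / A)
Z_numeric = trapz(np.exp(-np.power(z, 2) / 2) * sigmoid(20 * z + 4), z)
print('Laplace estimate of Z  :', np.round(Z_laplace, 4))
print('Numerical estimate of Z:', np.round(Z_numeric, 4))
###Output
_____no_output_____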
evaluations/evaluate_split.ipynb
###Markdown Evaluation for Split Class- Evaluates the effect of preprocessy train test split function on model accuracy compared to sklearn train test split- Evaluates on 4 datasets ###Code # To access preprocessy module. Required in .ipynb files import os import sys module_path = os.path.abspath(os.path.join('..')) if module_path not in sys.path: sys.path.append(module_path) import numpy as np import pandas as pd import time from sklearn.datasets import load_iris, load_boston, load_breast_cancer, load_diabetes from sklearn.linear_model import LinearRegression from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import classification_report, r2_score from sklearn.model_selection import train_test_split from preprocessy.resampling import Split splits = [None, 0.2, 0.3] def preprocessy_eval(X, y, split, model): X_train, X_test, y_train, y_test = Split().train_test_split(X, y, test_size=split) preprocessy_test_size = None if split: preprocessy_test_size = split else: preprocessy_test_size = 1 / np.sqrt(len(X.columns)) model.fit(X_train, y_train) preds = model.predict(X_test) return preprocessy_test_size, preds, y_test def sklearn_eval(X, y, split, model): X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=split, random_state=69 ) sklearn_test_size = None if split: sklearn_test_size = split else: sklearn_test_size = 0.25 # from sklearn docs model.fit(X_train, y_train) preds = model.predict(X_test) return sklearn_test_size, preds, y_test def eval(X, y, dataset, model): print(f"Dataset - {dataset}\n") for split in splits: start = time.time() preprocessy_test_size, preprocessy_preds, preprocessy_y_test = preprocessy_eval( X, y, split, model ) end = time.time() preprocessy_time = end - start start = time.time() sklearn_test_size, sklearn_preds, sklearn_y_test = sklearn_eval( X, y, split, model ) end = time.time() sklearn_time = end - start preprocessy_accuracy = classification_report( preprocessy_y_test, preprocessy_preds, output_dict=True )["accuracy"] sklearn_accuracy = classification_report( sklearn_y_test, sklearn_preds, output_dict=True )["accuracy"] print( f"Preprocessy\n-----------\n\ntest_size - {preprocessy_test_size}\naccuracy - {preprocessy_accuracy:.4f}\ntime - {preprocessy_time:.4f}\n" ) print(f"Sklearn\n-------\n\ntest_size - {sklearn_test_size}\naccuracy - {sklearn_accuracy:.4f}\ntime - {sklearn_time:.4f}\n") def evaluate_on_iris(): model = RandomForestClassifier() X, y = load_iris(return_X_y=True, as_frame=True) eval(X, y, "iris", model) def evaluate_on_breast_cancer(): model = RandomForestClassifier() X, y = load_breast_cancer(return_X_y=True, as_frame=True) eval(X, y, "breast cancer", model) def evaluate_on_diabetes(): print(f"Dataset - diabetes") model = LinearRegression(fit_intercept=True, normalize=True, copy_X=True) X, y = load_diabetes(return_X_y=True, as_frame=True) for split in splits: start = time.time() preprocessy_test_size, preprocessy_preds, preprocessy_y_test = preprocessy_eval( X, y, split, model ) end = time.time() preprocessy_time = end - start start = time.time() sklearn_test_size, sklearn_preds, sklearn_y_test = sklearn_eval( X, y, split, model ) end = time.time() sklearn_time = end - start preprocessy_accuracy = r2_score(preprocessy_y_test, preprocessy_preds) sklearn_accuracy = r2_score(sklearn_y_test, sklearn_preds) print( f"Preprocessy\n-----------\n\ntest_size - {preprocessy_test_size}\naccuracy - {preprocessy_accuracy:.4f}\ntime - {preprocessy_time:.4f}\n" ) print(f"Sklearn\n-------\n\ntest_size - 
{sklearn_test_size}\naccuracy - {sklearn_accuracy:.4f}\ntime - {sklearn_time:.4f}\n") def evaluate_on_boston(): print(f"Dataset - boston") model = LinearRegression(fit_intercept=True, normalize=True, copy_X=True) dataset = load_boston() X = pd.DataFrame(dataset.data, columns=dataset.feature_names) y = pd.Series(dataset.target, name="Target") for split in splits: start = time.time() preprocessy_test_size, preprocessy_preds, preprocessy_y_test = preprocessy_eval( X, y, split, model ) end = time.time() preprocessy_time = end - start start = time.time() sklearn_test_size, sklearn_preds, sklearn_y_test = sklearn_eval( X, y, split, model ) end = time.time() sklearn_time = end - start preprocessy_accuracy = r2_score(preprocessy_y_test, preprocessy_preds) sklearn_accuracy = r2_score(sklearn_y_test, sklearn_preds) print( f"Preprocessy\n-----------\n\ntest_size - {preprocessy_test_size}\naccuracy - {preprocessy_accuracy:.4f}\ntime - {preprocessy_time:.4f}\n" ) print(f"Sklearn\n-------\n\ntest_size - {sklearn_test_size}\naccuracy - {sklearn_accuracy:.4f}\ntime - {sklearn_time:.4f}\n") evaluate_on_iris() evaluate_on_breast_cancer() evaluate_on_diabetes() evaluate_on_boston() ###Output Dataset - boston Preprocessy ----------- test_size - 0.2773500981126146 accuracy - 0.6891 time - 0.0153 Sklearn ------- test_size - 0.25 accuracy - 0.6722 time - 0.0066 Preprocessy ----------- test_size - 0.2 accuracy - 0.6752 time - 0.0074 Sklearn ------- test_size - 0.2 accuracy - 0.6747 time - 0.0035 Preprocessy ----------- test_size - 0.3 accuracy - 0.6755 time - 0.0054 Sklearn ------- test_size - 0.3 accuracy - 0.6927 time - 0.0053 ###Markdown Evaluation for Split Class- Evaluates the effect of preprocessy train test split function on model accuracy compared to sklearn train test split- Evaluates on 4 datasets * iris * boston * breast_cancer * diabetes- Using RandomForestClassifier() model on first 2 datasets- Using LinearRegression() model on other 2 datasets- Using r2_score of sklearn.metrics- Comparisons between sklearn and preprocessy based on accuracy and time for different test sizes have been indicated at the end ###Code # To access preprocessy module. 
Required in .ipynb files import os import sys module_path = os.path.abspath(os.path.join('..')) if module_path not in sys.path: sys.path.append(module_path) import numpy as np import pandas as pd import time from sklearn.datasets import load_iris, load_boston, load_breast_cancer, load_diabetes from sklearn.linear_model import LinearRegression from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import classification_report, r2_score from sklearn.model_selection import train_test_split from preprocessy.resampling import Split splits = [None, 0.2, 0.3] def preprocessy_eval(X, y, split, model): X_train, X_test, y_train, y_test = Split().train_test_split(X, y, test_size=split) preprocessy_test_size = None if split: preprocessy_test_size = split else: preprocessy_test_size = 1 / np.sqrt(len(X.columns)) model.fit(X_train, y_train) preds = model.predict(X_test) return preprocessy_test_size, preds, y_test def sklearn_eval(X, y, split, model): X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=split, random_state=69 ) sklearn_test_size = None if split: sklearn_test_size = split else: sklearn_test_size = 0.25 # from sklearn docs model.fit(X_train, y_train) preds = model.predict(X_test) return sklearn_test_size, preds, y_test def eval(X, y, dataset, model): print(f"Dataset - {dataset}\n") for split in splits: start = time.time() preprocessy_test_size, preprocessy_preds, preprocessy_y_test = preprocessy_eval( X, y, split, model ) end = time.time() preprocessy_time = np.round(end - start,4) start = time.time() sklearn_test_size, sklearn_preds, sklearn_y_test = sklearn_eval( X, y, split, model ) end = time.time() sklearn_time = np.round(end - start,4) preprocessy_accuracy = np.round( classification_report( preprocessy_y_test, preprocessy_preds, output_dict=True )["accuracy"], 4) sklearn_accuracy = np.round(classification_report( sklearn_y_test, sklearn_preds, output_dict=True )["accuracy"], 4) dt = {'Test_size': [preprocessy_test_size, sklearn_test_size], 'Accuracy': [preprocessy_accuracy, sklearn_accuracy], 'Time': [preprocessy_time, sklearn_time] } print(pd.DataFrame(dt,index=['Preprocessy','sklearn'])) print() def eval2(X, y, dataset, model): print(f"Dataset - {dataset}\n") for split in splits: start = time.time() preprocessy_test_size, preprocessy_preds, preprocessy_y_test = preprocessy_eval( X, y, split, model ) end = time.time() preprocessy_time = np.round(end - start,4) start = time.time() sklearn_test_size, sklearn_preds, sklearn_y_test = sklearn_eval( X, y, split, model ) end = time.time() sklearn_time = np.round(end - start,4) preprocessy_accuracy = np.round(r2_score(preprocessy_y_test, preprocessy_preds),4) sklearn_accuracy = np.round(r2_score(sklearn_y_test, sklearn_preds),4) dt = {'Test_size': [preprocessy_test_size, sklearn_test_size], 'Accuracy': [preprocessy_accuracy, sklearn_accuracy], 'Time': [preprocessy_time, sklearn_time] } print(pd.DataFrame(dt,index=['Preprocessy','sklearn'])) print() def evaluate_on_iris(): model = RandomForestClassifier() X, y = load_iris(return_X_y=True, as_frame=True) eval(X, y, "iris", model) def evaluate_on_breast_cancer(): model = RandomForestClassifier() X, y = load_breast_cancer(return_X_y=True, as_frame=True) eval(X, y, "breast cancer", model) def evaluate_on_diabetes(): model = LinearRegression(fit_intercept=True, normalize=True, copy_X=True) X, y = load_diabetes(return_X_y=True, as_frame=True) eval2(X, y, "diabetes", model) def evaluate_on_boston(): model = LinearRegression(fit_intercept=True, normalize=True, 
copy_X=True) dataset = load_boston() X = pd.DataFrame(dataset.data, columns=dataset.feature_names) y = pd.Series(dataset.target, name="Target") eval2(X, y, "boston", model) evaluate_on_iris() evaluate_on_breast_cancer() evaluate_on_diabetes() evaluate_on_boston() ###Output Dataset - boston Test_size Accuracy Time Preprocessy 0.27735 0.6891 0.009 sklearn 0.25000 0.6722 0.007 Test_size Accuracy Time Preprocessy 0.2 0.6752 0.0119 sklearn 0.2 0.6747 0.0060 Test_size Accuracy Time Preprocessy 0.3 0.6755 0.006 sklearn 0.3 0.6927 0.004
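###Markdown Optional refactor sketch (an addition, not part of the evaluation above): `eval()` and `eval2()` differ only in the metric they report, so a single helper parameterized by a scoring callable (e.g. `r2_score`, or an accuracy wrapper) removes the duplication. The helper below reuses `preprocessy_eval` and `sklearn_eval` defined above; the name `run_comparison` is ours. ###Code
# Hedged sketch: one comparison loop, with the metric passed in as a callable.
def run_comparison(X, y, model, scorer, split_list=(None, 0.2, 0.3)):
    for split in split_list:
        p_size, p_preds, p_true = preprocessy_eval(X, y, split, model)
        s_size, s_preds, s_true = sklearn_eval(X, y, split, model)
        print(pd.DataFrame(
            {"Test_size": [p_size, s_size],
             "Score": [np.round(scorer(p_true, p_preds), 4),
                       np.round(scorer(s_true, s_preds), 4)]},
            index=["Preprocessy", "sklearn"]))
        print()

# e.g. run_comparison(X, y, LinearRegression(fit_intercept=True), r2_score)
###Output
_____no_output_____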
code/12.topic-models-with-graphlab.ipynb
###Markdown ****** Topic Modeling Using Graphlab******王成军[email protected]计算传播网 http://computational-communication.com ###Code import graphlab graphlab.canvas.set_target("ipynb") ###Output _____no_output_____ ###Markdown Download Data: http://select.cs.cmu.edu/code/graphlab/datasets/wikipedia/wikipedia_raw/w15 ###Code sf = graphlab.SFrame.read_csv("/Users/chengjun/bigdata/w15", header=False) sf ###Output _____no_output_____ ###Markdown Transformations https://dato.com/learn/userguide/text/analysis.html ###Code dir(sf['X1']) bow = sf['X1']._count_words() type(sf['X1']) type(bow) bow.dict_has_any_keys(['limited']) bow.dict_values()[0][:20] sf['bow'] = bow type(sf['bow']) len(sf['bow']) sf['bow'][0].items()[:5] sf['tfidf'] = graphlab.text_analytics.tf_idf(sf['X1']) sf['tfidf'][0].items()[:5] sf.show() sf ###Output _____no_output_____ ###Markdown Text cleaning ###Code docs = sf['bow'].dict_trim_by_values(2) docs = docs.dict_trim_by_keys( graphlab.text_analytics.stopwords(), exclude=True) ###Output _____no_output_____ ###Markdown Topic modeling ###Code m = graphlab.topic_model.create(docs) m m.get_topics() topics = m.get_topics().unstack(['word','score'], \ new_column_name='topic_words')['topic_words'].apply(lambda x: x.keys()) for topic in topics: print topic pred = m.predict(docs) pred.show() pred = m.predict(docs, output_type='probabilities') m['vocabulary'] m['topics'] def print_topics(m): topics = m.get_topics(num_words=5) topics = topics.unstack(['word','score'], new_column_name='topic_words')['topic_words'] topics = topics.apply(lambda x: x.keys()) for topic in topics: print topic print_topics(m) ###Output ['party', 'university', 'won', 'year', 'time'] ['company', 'death', 'order', 'road', 'century'] ['high', 'city', 'school', 'station', 'area'] ['club', 'world', 'region', 'back', 'team'] ['large', 'services', 'line', 'art', 'species'] ['age', 'years', 'north', 'american', 'population'] ['season', '2008', 'game', '2007', 'series'] ['album', 'band', '2010', 'family', 'released'] ['states', 'state', 'time', 'united', 'government'] ['building', 'city', 'years', 'war', 'church'] ###Markdown Initializing from other models ###Code m2 = graphlab.topic_model.create(docs, num_topics=10, initial_topics=m['topics']) ###Output _____no_output_____ ###Markdown Seeding the model with prior knowledge ###Code associations = graphlab.SFrame() associations['word'] = ['recognition'] associations['topic'] = [0] m2 = graphlab.topic_model.create(docs, num_topics=20, num_iterations=50, associations=associations, verbose=False) m2.get_topics(num_words=10) print_topics(m2) ###Output ['king', 'court', 'war', 'police', 'people'] ['information', 'aircraft', 'network', 'service', 'system'] ['states', 'region', 'united', 'state', 'government'] ['la', 'de', 'india', 'indian', 'france'] ['military', 'army', 'force', 'war', 'air'] ['model', 'set', 'number', 'power', 'system'] ['large', 'found', 'small', 'species', 'family'] ['club', 'league', 'football', 'year', 'season'] ['party', 'company', 'election', 'council', 'served'] ['town', 'age', 'years', 'school', 'population'] ['work', 'band', 'book', 'art', 'published'] ['students', 'university', 'national', 'college', 'state'] ['death', 'father', 'time', 'family', 'son'] ['season', 'team', 'final', 'won', 'played'] ['city', 'line', 'river', 'road', 'area'] ['roman', 'church', 'people', 'language', 'century'] ['years', 'john', 'york', 'race', 'time'] ['album', 'released', 'music', 'film', 'song'] ['world', 'game', 'championship', 'games', 'team'] ['engine', 'production', 
'company', 'made', 'design'] ###Markdown ****** Topic Modeling Using Graphlab******王成军[email protected]计算传播网 http://computational-communication.com ###Code import graphlab graphlab.canvas.set_target("ipynb") ###Output _____no_output_____ ###Markdown Download Data: http://select.cs.cmu.edu/code/graphlab/datasets/wikipedia/wikipedia_raw/w15 ###Code sf = graphlab.SFrame.read_csv("/Users/chengjun/bigdata/w15", header=False) sf ###Output _____no_output_____ ###Markdown Transformations https://dato.com/learn/userguide/text/analysis.html ###Code dir(sf['X1']) bow = sf['X1']._count_words() type(sf['X1']) type(bow) bow.dict_has_any_keys(['limited']) bow.dict_values()[0][:20] sf['bow'] = bow type(sf['bow']) len(sf['bow']) sf['bow'][0].items()[:5] sf['tfidf'] = graphlab.text_analytics.tf_idf(sf['X1']) sf['tfidf'][0].items()[:5] sf.show() sf ###Output _____no_output_____ ###Markdown Text cleaning ###Code docs = sf['bow'].dict_trim_by_values(2) docs = docs.dict_trim_by_keys(graphlab.text_analytics.stopwords(), exclude=True) ###Output _____no_output_____ ###Markdown Topic modeling ###Code m = graphlab.topic_model.create(docs) m m.get_topics() topics = m.get_topics().unstack(['word','score'], new_column_name='topic_words')['topic_words'].apply(lambda x: x.keys()) for topic in topics: print topic pred = m.predict(docs) pred.show() pred = m.predict(docs, output_type='probabilities') m['vocabulary'] m['topics'] def print_topics(m): topics = m.get_topics(num_words=5) topics = topics.unstack(['word','score'], new_column_name='topic_words')['topic_words'] topics = topics.apply(lambda x: x.keys()) for topic in topics: print topic print_topics(m) ###Output _____no_output_____ ###Markdown Initializing from other models ###Code m2 = graphlab.topic_model.create(docs, num_topics=20, initial_topics=m['topics']) ###Output _____no_output_____ ###Markdown Seeding the model with prior knowledge ###Code associations = graphlab.SFrame() associations['word'] = ['recognition'] associations['topic'] = [0] m2 = graphlab.topic_model.create(docs, num_topics=20, num_iterations=50, associations=associations, verbose=False) m2.get_topics(num_words=10) print_topics(m2) ###Output _____no_output_____
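###Markdown A small optional follow-up (an addition, not part of the original notebook): count how many documents are assigned to each topic. This sketch assumes the SArray returned by `predict()` can be iterated directly, which lets us feed it to `collections.Counter`. ###Code
from collections import Counter

# Hedged sketch: most likely topic id per document, then a document count per topic.
topic_assignments = m2.predict(docs)
topic_counts = Counter(topic_assignments)
for topic_id, n_docs in topic_counts.most_common(5):
    print topic_id, n_docs
###Output
_____no_output_____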
content/labs/lab02/notebook/cs109b_lab2_smooths_and_GAMs_ccc.ipynb
###Markdown CS109B Data Science 2: Advanced Topics in Data Science Lab 2 - Smoothers and Generalized Additive Models - Model FittingSpring 2020**Harvard University****Spring 2020****Instructors:** Mark Glickman, Pavlos Protopapas, and Chris Tanner**Lab Instructors:** Chris Tanner and Eleni Kaxiras**Content:** Eleni Kaxiras and Will Claybaugh--- ###Code ## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES import requests from IPython.core.display import HTML styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text HTML(styles) import numpy as np from scipy.interpolate import interp1d import matplotlib.pyplot as plt import pandas as pd %matplotlib inline ###Output _____no_output_____ ###Markdown Learning GoalsBy the end of this lab, you should be able to:* Understand how to implement GAMs with the Python package `pyGAM`* Learn about the practical aspects of Splines and how to use them.**This lab corresponds to lectures 1, 2, and 3 and maps to homework 1.** Table of Contents* 1 - Overview - A Top View of LMs, GLMs, and GAMs to set the stage* 2 - A review of Linear Regression with `statsmodels`. What are those weird formulas?* 3 - Splines* 4 - Generalized Additive Models with pyGAM* 5 - Smoothing Splines using pyGAM OverviewLinear Models (LM), Generalized Linear Models (GLMs), Generalized Additive Models (GAMs), Splines, Natural Splines, Smoothing Splines! So many definitions. Let's try and work through an example for each of them so we can better understand them. ![](../images/GAM_venn.png)*image source: Dani Servén Marín (one of the developers of pyGAM)* A - Linear ModelsFirst we have the **Linear Models**, which you know from 109a. These models are linear in the coefficients. They are very *interpretable*, but they suffer from high bias because, let's face it, few relationships in life are linear. Simple Linear Regression (defined as a model with one predictor) as well as Multiple Linear Regression (more than one predictor) are examples of LMs. Polynomial Regression extends the linear model by adding terms that are still linear in the coefficients but non-linear when it comes to the predictors, which are now raised to a power or multiplied together.![](../images/linear.png)$$\begin{aligned}y = \beta{_0} + \beta{_1}{x_1} & \mbox{(simple linear regression)}\\y = \beta{_0} + \beta{_1}{x_1} + \beta{_2}{x_2} + \beta{_3}{x_3} & \mbox{(multiple linear regression)}\\y = \beta{_0} + \beta{_1}{x_1} + \beta{_2}{x_1^2} + \beta{_3}{x_3^3} & \mbox{(polynomial regression)}\\\end{aligned}$$ Discussion - What does it mean for a model to be **interpretable**? - Are linear regression models interpretable? Are random forests? What about Neural Networks such as FFNs and CNNs? - Do we always want interpretability? Describe cases where we do and cases where we do not care. B - Generalized Linear Models (GLMs)![](../images/GLM.png)$$\begin{aligned}y = \beta{_0} + \beta{_1}{x_1} + \beta{_2}{x_2} + \beta{_3}{x_3}\end{aligned}$$**Generalized Linear Models** is a term coined in the early 1970s by Nelder and Wedderburn for a class of models that includes both Linear Regression and Logistic Regression. A GLM fits one coefficient per feature (predictor). 
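To make this concrete, here is a small illustration (an addition to the original lab, using a made-up synthetic dataframe): a logistic-regression GLM fit with the `statsmodels` formula API, showing the one-coefficient-per-feature structure. ###Code
# Illustrative sketch only: the dataframe and coefficient values below are invented.
import numpy as np
import pandas as pd
import statsmodels.api as smapi
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df_glm = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
# binary outcome whose log-odds are linear in x1 and x2
prob = 1 / (1 + np.exp(-(0.5 + 1.0 * df_glm.x1 - 2.0 * df_glm.x2)))
df_glm["y"] = rng.binomial(1, prob)

# a GLM with a Binomial family and the default logit link: one coefficient per feature
glm_fit = smf.glm("y ~ x1 + x2", data=df_glm, family=smapi.families.Binomial()).fit()
print(glm_fit.params)
###Output
_____no_output_____
###Markdown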
C - Generalized Additive Models (GAMs)Hastie and Tidshirani coined the term **Generalized Additive Models** in 1986 for a class of non-linear extensions to Generalized Linear Models.![](../images/GAM.png)$$\begin{aligned}y = \beta{_0} + f_1\left(x_1\right) + f_2\left(x_2\right) + f_3\left(x_3\right) \\y = \beta{_0} + f_1\left(x_1\right) + f_2\left(x_2, x_3\right) + f_3\left(x_3\right) & \mbox{(with interaction terms)}\end{aligned}$$In practice we add splines and regularization via smoothing penalties to our GLMs. Decision Trees also fit in this category.*image source: Dani Servén Marín* D - Basis FunctionsIn our models we can use various types of functions as "basis". - Monomials such as $x^2$, $x^4$ (**Polynomial Regression**)- Sigmoid functions (neural networks)- Fourier functions - Wavelets - **Regression splines** which we will look at shortly. Discussion - Where does polynomial regression fit in all this? polynomial regression is a LM most specifically Answer: GLMs include Polynomial Regression so the graphic above should really include curved lines, not just straight... Implementation 1 - Linear/Polynomial RegressionWe will use the `diabetes` dataset.Variables are:- subject: subject ID number- age: age diagnosed with diabetes- acidity: a measure of acidity called base deficitResponse:- y: natural log of serum C-peptide concentration*Original source is Sockett et al. (1987) mentioned in Hastie and Tibshirani's book "Generalized Additive Models".* Reading data and (some) exploring in Pandas: ###Code diab = pd.read_csv("../data/diabetes.csv") diab.head() diab.dtypes diab.describe() ###Output _____no_output_____ ###Markdown Plotting with matplotlib: ###Code ax0 = diab.plot.scatter(x='age',y='y',c='Red',title="Diabetes data") #plotting direclty from pandas! ax0.set_xlabel("Age at Diagnosis") ax0.set_ylabel("Log C-Peptide Concentration"); ###Output _____no_output_____ ###Markdown Linear/Polynomial regression with statsmodels. As you remember from 109a, we have two tools for Linear Regression:- `statsmodels` [https://www.statsmodels.org/stable/regression.html](https://www.statsmodels.org/stable/regression.html), and - `sklearn`[https://scikit-learn.org/stable/index.html](https://scikit-learn.org/stable/index.html)Previously, we worked from a vector of target values and a design matrix we built ourself (e.g. using `sklearn`'s PolynomialFeatures). `statsmodels` allows users to fit statistical models using R-style **formulas**. They build the target value and design matrix for you. ``` our target variable is 'Lottery', while 'Region' is a categorical predictordf = dta.data[['Lottery', 'Literacy', 'Wealth', 'Region']]formula='Lottery ~ Literacy + Wealth + C(Region) + Literacy * Wealth'```For more on these formulas see:- https://www.statsmodels.org/stable/examples/notebooks/generated/formulas.html- https://patsy.readthedocs.io/en/latest/overview.html ###Code import statsmodels.formula.api as sm model1 = sm.ols('y ~ age',data=diab) fit1_lm = model1.fit() ###Output _____no_output_____ ###Markdown Let's build a dataframe to predict values on (sometimes this is just the test or validation set). Very useful for making pretty plots of the model predictions - predict for TONS of values, not just whatever's in the training set. ###Code x_pred = np.linspace(0,16,100) predict_df = pd.DataFrame(data={"age":x_pred}) predict_df.head() ###Output _____no_output_____ ###Markdown Use `get_prediction().summary_frame()` to get the model's prediction (and error bars!) 
###Code prediction_output = fit1_lm.get_prediction(predict_df).summary_frame() prediction_output.head() ###Output _____no_output_____ ###Markdown Plot the model and error bars ###Code ax1 = diab.plot.scatter(x='age',y='y',c='Red',title="Diabetes data with least-squares linear fit") ax1.set_xlabel("Age at Diagnosis") ax1.set_ylabel("Log C-Peptide Concentration") ax1.plot(predict_df.age, prediction_output['mean'],color="green") ax1.plot(predict_df.age, prediction_output['mean_ci_lower'], color="blue",linestyle="dashed") ax1.plot(predict_df.age, prediction_output['mean_ci_upper'], color="blue",linestyle="dashed"); ###Output _____no_output_____ ###Markdown Exercise 1- Fit a 3rd degree polynomial model and- plot the model+error bars.You can either take - **Route1**: Build a design df with a column for each of `age`, `age**2`, `age**3`, or - **Route2**: Just edit the formula ###Code # your answer here # %load ../solutions/exercise1-1.py fit2_lm = sm.ols(formula="y ~ age + np.power(age, 2) + np.power(age, 3)",data=diab).fit() poly_predictions = fit2_lm.get_prediction(predict_df).summary_frame() poly_predictions.head() # %load ../solutions/exercise1-2.py ax2 = diab.plot.scatter(x='age',y='y',c='Red',title="Diabetes data with least-squares cubic fit") ax2.set_xlabel("Age at Diagnosis") ax2.set_ylabel("Log C-Peptide Concentration") ax2.plot(predict_df.age, poly_predictions['mean'],color="green") ax2.plot(predict_df.age, poly_predictions['mean_ci_lower'], color="blue",linestyle="dashed") ax2.plot(predict_df.age, poly_predictions['mean_ci_upper'], color="blue",linestyle="dashed"); ###Output _____no_output_____ ###Markdown Ed exerciseThis example was similar with the Ed exercise. [Open it in Ed](https://us.edstem.org/courses/172/lessons/656/slides/2916) and let's go though it. 2 - Piecewise Polynomials a.k.a. SplinesSplines are a type of piecewise polynomial interpolant. A spline of degree k is a piecewise polynomial that is continuously differentiable k − 1 times. Splines are the basis of CAD software and vector graphics including a lot of the fonts used in your computer. The name “spline” comes from a tool used by ship designers to draw smooth curves. Here is the letter $epsilon$ written with splines:![](../images/epsilon.png)*font idea inspired by David Knezevic (AM205)*If the degree is 1 then we have a Linear Spline. If it is 3 then we have a Cubic spline. It turns out that cubic splines because they have a continous 2nd derivative at the knots are very smoothly looking to the eye. We do not need higher order than that. The Cubic Splines are usually Natural Cubic Splines which means they have the added constrain of the end points' second derivative = 0.We will use the CubicSpline and the B-Spline as well as the Linear Spline. scipy.interpolateSee all the different splines that scipy.interpolate has to offer: https://docs.scipy.org/doc/scipy/reference/interpolate.htmlLet's use the simplest form which is interpolate on a set of points and then find the points between them. 
###Code from scipy.interpolate import splrep, splev from scipy.interpolate import BSpline, CubicSpline from scipy.interpolate import interp1d # define the range of the function a = -1 b = 1 # define the number of knots num_knots = 10 x = np.linspace(a,b,num_knots) # define the function we want to approximate y = 1/(1+25*(x**2)) # make a linear spline ll = interp1d(x, y) # sample at these points to plot xx = np.linspace(a,b,1000) plt.plot(x,y,'*') plt.plot(xx, ll(xx), label='linear spline'); plt.legend(); ###Output _____no_output_____ ###Markdown Exercise 2The linear interpolation does not look very good. Fit a Cubic Spline and plot it along with the Linear Spline to compare. ###Code # your answer here # %load ../solutions/exercise2.py # define the range of the function a = -1 b = 1 # define the knots num_knots = 10 x = np.linspace(a,b,num_knots) # define the function we want to approximate y = 1/(1+25*(x**2)) # make the Cubic spline yy = CubicSpline(x, y) # OR make a linear spline ll = interp1d(x, y) # plot xx = np.linspace(a,b,1000) plt.plot(x,y,'*') plt.plot(xx, ll(xx), label='linear'); plt.plot(xx, yy(xx), label='cubic'); plt.legend(); ###Output _____no_output_____ ###Markdown Discussion- Change the number of knots to 100 and see what happens. What would happen if we run a polynomial model of degree equal to the number of knots (a global one as in polynomial regression, not a spline)?- What makes a spline 'Natural'? B-SplinesA B-spline (Basis Spline) is defined by a set of **control points** and a set of **basis functions** that interpolate (fit) the function between these points. By choosing to have no smoothing factor we force the final B-spline to pass through all the points. If, on the other hand, we set a smoothing factor, our function is more of an approximation with the control points as "guidance". The latter produces a smoother curve, which is preferable for drawing software. For more on Splines see: https://en.wikipedia.org/wiki/B-spline![](../images/B-spline.png)We will use [`scipy.splrep`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.splrep.html#scipy.interpolate.splrep) to calculate the coefficients for the B-Spline and draw it. B-Spline with no smoothing ###Code from scipy.interpolate import splev, splrep x = np.linspace(0, 10, 10) y = np.sin(x) t,c,k = splrep(x, y) # define the points to plot on (x2) x2 = np.linspace(0, 10, 200) y2 = BSpline(t, c, k) plt.plot(x, y, 'o', x2, y2(x2)) plt.show() ###Output _____no_output_____ ###Markdown B-Spline with smoothing factor s ###Code from scipy.interpolate import splev, splrep x = np.linspace(0, 10, 10) y = np.sin(x) s = 0.5 # add smoothing factor task = 0 # task needs to be set to 0 to use the smoothing factor t,c,k = splrep(x, y, task=task, s=s) # define the points to plot on (x2) x2 = np.linspace(0, 10, 200) y2 = BSpline(t, c, k) plt.plot(x, y, 'o', x2, y2(x2)) plt.show() ###Output _____no_output_____ ###Markdown B-Spline with given knots ###Code x = np.linspace(0, 10, 100) y = np.sin(x) knots = np.quantile(x, [0.25, 0.5, 0.75]) print(knots) # calculate the B-Spline t,c,k = splrep(x, y, t=knots) curve = BSpline(t,c,k) curve plt.scatter(x=x,y=y,c='grey') plt.plot(x,curve(x)) plt.show() ###Output _____no_output_____ ###Markdown Ed exerciseThis example was similar to the Ed exercise. [Open it in Ed](https://us.edstem.org/courses/172/lessons/656/slides/2917) and let's go through it. 3 - GAMshttps://readthedocs.org/projects/pygam/downloads/pdf/latest/ A - Classification in `pyGAM`Let's get our (multivariate!) 
data, the `kyphosis` dataset, and the `LogisticGAM` model from `pyGAM` to do binary classification.- kyphosis - wherther a particular deformation was present post-operation- age - patient's age in months- number - the number of vertebrae involved in the operation- start - the number of the topmost vertebrae operated on ###Code kyphosis = pd.read_csv("../data/kyphosis.csv") display(kyphosis.head()) display(kyphosis.describe(include='all')) display(kyphosis.dtypes) # convert the outcome in a binary form, 1 or 0 kyphosis = pd.read_csv("../data/kyphosis.csv") kyphosis["outcome"] = 1*(kyphosis["Kyphosis"] == "present") kyphosis.describe() from pygam import LogisticGAM, s, f, l X = kyphosis[["Age","Number","Start"]] y = kyphosis["outcome"] kyph_gam = LogisticGAM().fit(X,y) ###Output _____no_output_____ ###Markdown Outcome dependence on featuresTo help us see how the outcome depends on each feature, `pyGAM` has the `partial_dependence()` function.``` pdep, confi = kyph_gam.partial_dependence(term=i, X=XX, width=0.95)```For more on this see the : https://pygam.readthedocs.io/en/latest/api/logisticgam.html ###Code res = kyph_gam.deviance_residuals(X,y) for i, term in enumerate(kyph_gam.terms): if term.isintercept: continue XX = kyph_gam.generate_X_grid(term=i) pdep, confi = kyph_gam.partial_dependence(term=i, X=XX, width=0.95) pdep2, _ = kyph_gam.partial_dependence(term=i, X=X, width=0.95) plt.figure() plt.scatter(X.iloc[:,term.feature], pdep2 + res) plt.plot(XX[:, term.feature], pdep) plt.plot(XX[:, term.feature], confi, c='r', ls='--') plt.title(X.columns.values[term.feature]) plt.show() ###Output _____no_output_____ ###Markdown Notice that we did not specify the basis functions in the .fit(). Cool. `pyGAM` figures them out for us by using $s()$ (splines) for numerical variables and $f()$ for categorical features. If this is not what we want we can manually specify the basis functions, as follows: ###Code kyph_gam = LogisticGAM(s(0)+s(1)+s(2)).fit(X,y) res = kyph_gam.deviance_residuals(X,y) for i, term in enumerate(kyph_gam.terms): if term.isintercept: continue XX = kyph_gam.generate_X_grid(term=i) pdep, confi = kyph_gam.partial_dependence(term=i, X=XX, width=0.95) pdep2, _ = kyph_gam.partial_dependence(term=i, X=X, width=0.95) plt.figure() plt.scatter(X.iloc[:,term.feature], pdep2 + res) plt.plot(XX[:, term.feature], pdep) plt.plot(XX[:, term.feature], confi, c='r', ls='--') plt.title(X.columns.values[term.feature]) plt.show() ###Output _____no_output_____ ###Markdown DiscussionDescribe the relationship of each predictor with the outcome as seen in the plots above. B - Regression in `pyGAM`For regression problems, we can use a `linearGAM` model. For this part we will use the `wages` dataset.https://pygam.readthedocs.io/en/latest/api/lineargam.html The `wages` datasetLet's inspect another dataset that is included in `pyGAM` that notes the wages of people based on their age, year of employment and education. 
###Code # from the pyGAM documentation from pygam import LinearGAM, s, f from pygam.datasets import wage X, y = wage(return_X_y=True) ## model gam = LinearGAM(s(0) + s(1) + f(2)) gam.gridsearch(X, y) ## plotting plt.figure(); fig, axs = plt.subplots(1,3); titles = ['year', 'age', 'education'] for i, ax in enumerate(axs): XX = gam.generate_X_grid(term=i) ax.plot(XX[:, i], gam.partial_dependence(term=i, X=XX)) ax.plot(XX[:, i], gam.partial_dependence(term=i, X=XX, width=.95)[1], c='r', ls='--') if i == 0: ax.set_ylim(-30,30) ax.set_title(titles[i]); ###Output 100% (11 of 11) |########################| Elapsed Time: 0:00:01 Time: 0:00:01 ###Markdown DiscussionWhat are your observations from the plots above? 4 - Smoothing Splines using pyGAMFor clarity: this is the fancy spline model that minimizes $MSE + \lambda\cdot\text{wiggle penalty}$ $=$ $\sum_{i=1}^N \left(y_i - f(x_i)\right)^2 + \lambda \int \left(f''(x)\right)^2 dx$, across all possible functions $f$. The winner will always be a continuous, piecewise cubic polynomial with a knot at each data point (a natural cubic spline). Let's see how this smoothing works in `pyGAM`. We start by creating some arbitrary data and fitting them with a GAM. ###Code X = np.linspace(0,10,500) y = np.sin(X*2*np.pi)*X + np.random.randn(len(X)) plt.scatter(X,y); # let's try a large lambda first and lots of splines gam = LinearGAM(lam=1e6, n_splines=50).fit(X,y) XX = gam.generate_X_grid(term=0) plt.scatter(X,y,alpha=0.3); plt.plot(XX, gam.predict(XX)); ###Output _____no_output_____ ###Markdown We see that the large $\lambda$ forces a straight line, with no flexibility. Let's now see what happens if we make it smaller. ###Code # let's try a smaller lambda gam = LinearGAM(lam=1e2, n_splines=50).fit(X,y) XX = gam.generate_X_grid(term=0) plt.scatter(X,y,alpha=0.3); plt.plot(XX, gam.predict(XX)); ###Output _____no_output_____ ###Markdown There is some curvature there but still not a good fit. Let's try no penalty. That should let the curve follow the data much more closely. ###Code # no penalty, let's try a 0 lambda gam = LinearGAM(lam=0, n_splines=50).fit(X,y) XX = gam.generate_X_grid(term=0) plt.scatter(X,y,alpha=0.3) plt.plot(XX, gam.predict(XX)) ###Output _____no_output_____ ###Markdown Yes, that is good. Now let's see what happens if we reduce the number of splines. The fit should not be as good. ###Code # no penalty, let's try a 0 lambda gam = LinearGAM(lam=0, n_splines=10).fit(X,y) XX = gam.generate_X_grid(term=0) plt.scatter(X,y,alpha=0.3); plt.plot(XX, gam.predict(XX)); ###Output _____no_output_____
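###Markdown A possible extension (an addition, not part of the original lab): rather than hand-picking `lam`, we can let `gridsearch` choose it on the same synthetic data, just as we did for the `wage` example above. This sketch assumes `gridsearch`'s default behavior of trying a log-spaced grid of `lam` values and keeping the best model by its internal criterion. ###Code
# Hedged sketch: reuse the synthetic X, y defined above and let pyGAM pick lam.
gam = LinearGAM(n_splines=50).gridsearch(X, y)
print(gam.lam)   # the smoothing parameter(s) selected by the search

XX = gam.generate_X_grid(term=0)
plt.scatter(X, y, alpha=0.3)
plt.plot(XX, gam.predict(XX));
###Output
_____no_output_____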
dss-2016/recommendation_systems/book-recommender-solutions.ipynb
###Markdown The following code snippet will parse the books data provided at the training. ###Code import os if os.path.exists('books/ratings'): ratings = gl.SFrame('books/ratings') items = gl.SFrame('books/items') users = gl.SFrame('books/users') else: ratings = gl.SFrame.read_csv('books/book-ratings.csv') ratings.save('books/ratings') items = gl.SFrame.read_csv('books/book-data.csv') items.save('books/items') users = gl.SFrame.read_csv('books/user-data.csv') users.save('books/users') ###Output [INFO] graphlab.cython.cy_server: GraphLab Create v2.0 started. Logging: /tmp/graphlab_server_1468091566.log INFO:graphlab.cython.cy_server:GraphLab Create v2.0 started. Logging: /tmp/graphlab_server_1468091566.log ###Markdown Visually explore the above data using GraphLab Canvas. ###Code ratings.show() ###Output _____no_output_____ ###Markdown Recommendation systems In this section we will make a model that can be used to recommend new tags to users. Creating a Model Use `gl.recommender.create()` to create a model that can be used to recommend tags to each user. ###Code m = gl.recommender.create(ratings, user_id='name', item_id='book') ###Output _____no_output_____ ###Markdown Print a summary of the model by simply entering the name of the object. ###Code m ###Output _____no_output_____ ###Markdown Get all unique users from the first 10000 observations and save them as a variable called `users`. ###Code users = ratings.head(10000)['name'].unique() ###Output _____no_output_____ ###Markdown Get 20 recommendations for each user in your list of users. Save these as a new SFrame called `recs`. ###Code recs = m.recommend(users, k=20) ###Output _____no_output_____ ###Markdown Inspecting your model Get an SFrame of the 20 most similar items for each observed item. ###Code sims = m.get_similar_items() ###Output _____no_output_____ ###Markdown This dataset has multiple rows corresponding to the same book, e.g., in situations where reprintings were done by different publishers in different year.For each unique value of 'book' in the `items` SFrame, select one of the of the available values for `author`, `publisher`, and `year`. Hint: Try using [`SFrame.groupby`](https://turi.com/products/create/docs/graphlab.data_structures.htmlmodule-graphlab.aggregate) and [`gl.aggregate.SELECT_ONE`](https://turi.com/products/create/docs/graphlab.data_structures.htmlgraphlab.aggregate.SELECT_ONE). ###Code items = items.groupby('book', {k: gl.aggregate.SELECT_ONE(k) for k in ['author', 'publisher', 'year']}) ###Output _____no_output_____ ###Markdown Computing the number of times each book was rated, and add a column containing these counts to the `items` SFrame using `SFrame.join`. ###Code num_ratings_per_book = ratings.groupby('book', gl.aggregate.COUNT) items = items.join(num_ratings_per_book, on='book') ###Output _____no_output_____ ###Markdown Print the first few books, sorted by the number of times they have been rated. Do these values make sense? ###Code items.sort('Count', ascending=False) ###Output _____no_output_____ ###Markdown Now print the most similar items per item, sorted by the most common books. Hint: Join the two SFrames you created above. 
###Code sims = sims.join(items[['book', 'Count']], on='book') sims = sims.sort(['Count', 'book', 'rank'], ascending=False) sims.print_rows(1000, max_row_width=150) ###Output +-------------------------------+--------------------------------+------------------+------+-------+ | book | similar | score | rank | Count | +-------------------------------+--------------------------------+------------------+------+-------+ | Wild Animus | A Prayer for Owen Meany | 0.00925928354263 | 10 | 581 | | Wild Animus | Empire Falls | 0.0097222328186 | 9 | 581 | | Wild Animus | When the Wind Blows | 0.00980395078659 | 8 | 581 | | Wild Animus | The Bonesetter's Daughter | 0.0108991861343 | 7 | 581 | | Wild Animus | Life of Pi | 0.0110375285149 | 6 | 581 | | Wild Animus | The Alienist | 0.0113154053688 | 5 | 581 | | Wild Animus | A Painted House | 0.0116959214211 | 4 | 581 | | Wild Animus | The Bridges of Madison County | 0.0119840502739 | 3 | 581 | | Wild Animus | The Secret Life of Bees | 0.0123583674431 | 2 | 581 | | Wild Animus | The Da Vinci Code | 0.0171265602112 | 1 | 581 | | The Da Vinci Code | Me Talk Pretty One Day | 0.0232558250427 | 10 | 488 | | The Da Vinci Code | The Girls' Guide to Huntin... | 0.0233837962151 | 9 | 488 | | The Da Vinci Code | Good in Bed | 0.0237098932266 | 8 | 488 | | The Da Vinci Code | Bleachers | 0.0255474448204 | 7 | 488 | | The Da Vinci Code | Bridget Jones's Diary | 0.0264105796814 | 6 | 488 | | The Da Vinci Code | Dude, Where's My Country? | 0.0284191966057 | 5 | 488 | | The Da Vinci Code | Mystic River | 0.0299500823021 | 4 | 488 | | The Da Vinci Code | The Five People You Meet i... | 0.033898293972 | 3 | 488 | | The Da Vinci Code | Life of Pi | 0.036523938179 | 2 | 488 | | The Da Vinci Code | The Secret Life of Bees | 0.043376326561 | 1 | 488 | | The Secret Life of Bees | The Rapture of Canaan | 0.0299785733223 | 10 | 406 | | The Secret Life of Bees | The Bonesetter's Daughter | 0.031135559082 | 9 | 406 | | The Secret Life of Bees | Bridget Jones's Diary | 0.0322147607803 | 8 | 406 | | The Secret Life of Bees | The Girls' Guide to Huntin... | 0.0345911979675 | 7 | 406 | | The Secret Life of Bees | Girl in Hyacinth Blue | 0.0375000238419 | 6 | 406 | | The Secret Life of Bees | Wicked: The Life and Times... | 0.0410447716713 | 5 | 406 | | The Secret Life of Bees | The Da Vinci Code | 0.043376326561 | 4 | 406 | | The Secret Life of Bees | Life of Pi | 0.0439093708992 | 3 | 406 | | The Secret Life of Bees | The Five People You Meet i... | 0.0470016002655 | 2 | 406 | | The Secret Life of Bees | Good in Bed | 0.0485436916351 | 1 | 406 | | Bridget Jones's Diary | Girl in Hyacinth Blue | 0.0266075134277 | 10 | 377 | | Bridget Jones's Diary | Empire Falls | 0.0280560851097 | 9 | 377 | | Bridget Jones's Diary | Wicked: The Life and Times... | 0.0295275449753 | 8 | 377 | | Bridget Jones's Diary | Me Talk Pretty One Day | 0.0300353169441 | 7 | 377 | | Bridget Jones's Diary | Dude, Where's My Country? | 0.0315315127373 | 6 | 377 | | Bridget Jones's Diary | The Bridges of Madison County | 0.0321360826492 | 5 | 377 | | Bridget Jones's Diary | The Secret Life of Bees | 0.0322147607803 | 4 | 377 | | Bridget Jones's Diary | The Girls' Guide to Huntin... | 0.0348837375641 | 3 | 377 | | Bridget Jones's Diary | Good in Bed | 0.0354729890823 | 2 | 377 | | Bridget Jones's Diary | Bridget Jones: The Edge of... 
| 0.0734966397285  | 1    | 377   |
| Life of Pi                    | The Little Friend              | 0.024324297905   | 10   | 336   |
| Life of Pi                    | Bastard Out of Carolina        | 0.0246305465698  | 9    | 336   |
| Life of Pi                    | Good in Bed                    | 0.0246913433075  | 8    | 336   |
| Life of Pi                    | The Five People You Meet i...  | 0.0247787833214  | 7    | 336   |
| Life of Pi                    | Wicked: The Life and Times...  | 0.025052189827   | 6    | 336   |
| Life of Pi                    | I Know This Much Is True       | 0.0255814194679  | 5    | 336   |
| Life of Pi                    | Dude, Where's My Country?      | 0.0265060067177  | 4    | 336   |
| Life of Pi                    | Empire Falls                   | 0.0321888327599  | 3    | 336   |
| Life of Pi                    | The Da Vinci Code              | 0.036523938179   | 2    | 336   |
| Life of Pi                    | The Secret Life of Bees        | 0.0439093708992  | 1    | 336   |
| ...                           | ...                            | ...              | ...  | ...   |
| 0.0419580340385 | 3 | 81 | | Bastard Out of Carolina | The Liar's Club: A Memoir | 0.0420168042183 | 2 | 81 | | Bastard Out of Carolina | The River King | 0.0479999780655 | 1 | 81 | | Southern Cross | Acts of Malice | 0.0384615659714 | 10 | 80 | | Southern Cross | N Is for Noose | 0.039370059967 | 9 | 80 | | Southern Cross | The Last Precinct | 0.0398010015488 | 8 | 80 | | Southern Cross | Hornet's Nest | 0.0426829457283 | 7 | 80 | | Southern Cross | Black Notice | 0.0445544719696 | 6 | 80 | | Southern Cross | Eyes of Prey | 0.0454545617104 | 5 | 80 | | Southern Cross | Taltos: Lives of the Mayfa... | 0.0465116500854 | 4 | 80 | | Southern Cross | Unnatural Exposure | 0.0508474707603 | 3 | 80 | | Southern Cross | Shadow Prey | 0.051546394825 | 2 | 80 | | Southern Cross | Isle of Dogs | 0.0628571510315 | 1 | 80 | | River, Cross My Heart | Breath, Eyes, Memory | 0.0314960479736 | 10 | 80 | | River, Cross My Heart | The Song Reader | 0.0322580933571 | 9 | 80 | | River, Cross My Heart | The Rapture of Canaan | 0.0324675440788 | 8 | 80 | | River, Cross My Heart | My Soul to Keep | 0.0340909361839 | 7 | 80 | | River, Cross My Heart | The Book of Questions | 0.0350877046585 | 6 | 80 | | River, Cross My Heart | Icy Sparks | 0.0373831987381 | 5 | 80 | | River, Cross My Heart | Local Girls | 0.0377358198166 | 4 | 80 | | River, Cross My Heart | Drowning Ruth | 0.0403226017952 | 3 | 80 | | River, Cross My Heart | Here on Earth | 0.0486111044884 | 2 | 80 | | River, Cross My Heart | We Were the Mulvaneys | 0.0528846383095 | 1 | 80 | | False Memory | One Door Away from Heaven | 0.0396039485931 | 10 | 80 | | False Memory | The Bad Place | 0.0406504273415 | 9 | 80 | | False Memory | The Dark Half | 0.0411764979362 | 8 | 80 | | False Memory | From the Corner of His Eye | 0.0429447889328 | 7 | 80 | | False Memory | The Eyes of Darkness | 0.0431034564972 | 6 | 80 | | False Memory | The Servants of Twilight | 0.0454545617104 | 5 | 80 | | False Memory | Black House | 0.0465116500854 | 4 | 80 | | False Memory | The Girl Who Loved Tom Gordon | 0.0467836260796 | 3 | 80 | | False Memory | The Funhouse | 0.0485436916351 | 2 | 80 | | False Memory | Firestarter | 0.0512820482254 | 1 | 80 | +-------------------------------+--------------------------------+------------------+------+-------+ [106502 rows x 5 columns] ###Markdown Experimenting with other models Create a dataset called `implicit` that contains only ratings data where `rating` was 4 or greater. ###Code implicit = ratings[ratings['rating'] >= 4] ###Output _____no_output_____ ###Markdown Create a train/test split of the `implicit` data created above. Hint: Use [random_split_by_user](https://turi.com/products/create/docs/generated/graphlab.recommender.random_split_by_user.htmlgraphlab.recommender.random_split_by_user). ###Code train, test = gl.recommender.util.random_split_by_user(implicit, user_id='name', item_id='book') ###Output _____no_output_____ ###Markdown Print the first 5 rows of the training set. ###Code train.head(5) ###Output _____no_output_____ ###Markdown Create a `ranking_factorization_recommender` model using just the training set and 20 factors. ###Code m = gl.ranking_factorization_recommender.create(train, 'name', 'book', target='rating', num_factors=20) ###Output _____no_output_____ ###Markdown Evaluate how well this model recommends items that were seen in the test set you created above. Hint: Check out `m.evaluate_precision_recall()`. 
###Code m.evaluate_precision_recall(test, cutoffs=[50])['precision_recall_overall'] ###Output _____no_output_____ ###Markdown Create an SFrame containing only one observation, where 'Billy Bob' has rated 'Animal Farm' with score 5.0. ###Code new_observation_data = gl.SFrame({'name': ['Billy Bob'], 'book': ['Animal Farm'], 'rating': [5.0]}) ###Output _____no_output_____ ###Markdown Use this data when querying for recommendations. ###Code m.recommend(users=['Billy Bob'], new_observation_data=new_observation_data) ###Output _____no_output_____
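###Markdown To make the numbers above easier to interpret, here is a small illustrative sketch (plain Python, with made-up book lists) of what precision@k and recall@k measure for a single user; roughly speaking, `evaluate_precision_recall` reports these quantities averaged over the test users at each cutoff.

###Code
def precision_recall_at_k(recommended, held_out, k=50):
    # precision@k: share of the top-k recommendations the user actually read
    # recall@k: share of the user's held-out books that made it into the top-k
    top_k = recommended[:k]
    hits = len(set(top_k) & set(held_out))
    precision = hits / float(len(top_k)) if top_k else 0.0
    recall = hits / float(len(held_out)) if held_out else 0.0
    return precision, recall

# Hypothetical example: two of the three held-out books appear in the top 50
precision_recall_at_k(['Animal Farm', '1984'] + ['filler %d' % i for i in range(48)],
                      ['Animal Farm', '1984', 'Brave New World'], k=50)

###Output
_____no_output_____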
footprint.ipynb
###Markdown Footprint Determination--- ###Code import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from cpymad.madx import Madx from pyhdtoolkit.cpymadtools import matching, orbit, special, track, tune from pyhdtoolkit.utils import defaults defaults.config_logger(level="DEBUG") plt.style.use("phd") # pyhdtoolkit.utils.defaults.install_mpl_style() %matplotlib inline %config InlineBackend.figure_format = "retina" sns.set_palette("pastel") ###Output _____no_output_____ ###Markdown Getting the Dynap Table ###Code with Madx(stdout=False) as madx: # ----- Machine & Optics ----- # madx.option(echo=False, warn=False) madx.call("lhc/lhc_as-built.seq") madx.call("lhc/opticsfile.22") # ----- Setup ----- # # orbit_scheme = orbit.setup_lhc_orbit(madx, scheme="flat") # if you know what you are doing! special.make_lhc_beams(madx) madx.command.use(sequence="lhcb1") matching.match_tunes_and_chromaticities(madx, "lhc", "lhcb1", 62.31, 60.32, 2.0, 2.0, telescopic_squeeze=True) # ----- Slicing and Dynap Table ----- # special.make_lhc_thin(madx, sequence="lhcb1", slicefactor=4) # necessary for MAD-X tracking madx.use(sequence="lhcb1") dynap_footprint = tune.make_footprint_table(madx, sigma=5) ###Output 2021-09-22 14:00:26 | INFO | pyhdtoolkit.cpymadtools.special:34 - Making default beams for 'lhcb1' and 'lhbc2' sequences 2021-09-22 14:00:27 | INFO | pyhdtoolkit.cpymadtools.matching:118 - Doing combined matching to Qx=62.31, Qy=60.32, dqx=2.0, dqy=2.0 for sequence 'lhcb1' 2021-09-22 14:00:27 | DEBUG | pyhdtoolkit.cpymadtools.matching:105 - Executing matching commands, using sequence 'lhcb1' 2021-09-22 14:01:02 | INFO | pyhdtoolkit.cpymadtools.special:436 - Slicing sequence 'lhcb1' 2021-09-22 14:01:03 | INFO | pyhdtoolkit.cpymadtools.tune:51 - Initiating particules up to 5 bunch sigma to create a tune footprint table 2021-09-22 14:01:03 | DEBUG | pyhdtoolkit.cpymadtools.tune:55 - Initializing particles 2021-09-22 14:01:03 | DEBUG | pyhdtoolkit.cpymadtools.tune:71 - Starting DYNAP tracking with initialized particles 2021-09-22 14:01:54 | DEBUG | pyhdtoolkit.cpymadtools.tune:85 - Cleaning up DYNAP output files `fort.69` and `lyapunov.data` ###Markdown Visualizing the Footprint ###Code # Doesn't always work! Depends a lot on your machine conditions. # dynap_footprint.headers["AMPLITUDE"] = 13 # footprint_polygons = tune.get_footprint_patches(dynap_footprint) # qxs, qys = tune.get_footprint_lines(dynap_footprint) fig, axis = plt.subplots(figsize=(18, 11)) axis.scatter(dynap_footprint.tunx, dynap_footprint.tuny, marker=".", label="Data Points") # axis.add_collection(footprint_polygons) # axis.plot(qxs, qys, c="red", label="Computed Footprint") plt.legend() ###Output _____no_output_____
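###Markdown A common way to read a tune footprint like the one above is against the resonance lines $mQ_x + nQ_y = p$. The cell below is an illustrative sketch only (plain numpy/matplotlib, not a `pyhdtoolkit` routine): it overlays low-order resonance lines on the fractional tunes taken from the DYNAP table; the maximum order and the plot padding are arbitrary choices.

###Code
import numpy as np

# Fractional tunes (the modulo is a no-op if DYNAP already reports fractional values)
frac_qx = np.asarray(dynap_footprint.tunx) % 1
frac_qy = np.asarray(dynap_footprint.tuny) % 1

fig, axis = plt.subplots(figsize=(10, 8))
max_order = 6
qx_line = np.linspace(0, 1, 200)
for m in range(-max_order, max_order + 1):
    for n in range(-max_order, max_order + 1):
        if (m == 0 and n == 0) or abs(m) + abs(n) > max_order:
            continue
        for p in range(-max_order, max_order + 1):
            if n != 0:
                axis.plot(qx_line, (p - m * qx_line) / n, color="grey", lw=0.5, alpha=0.3)
            else:
                axis.axvline(p / m, color="grey", lw=0.5, alpha=0.3)

axis.scatter(frac_qx, frac_qy, marker=".", zorder=3, label="Tracked particles")
axis.set_xlim(frac_qx.min() - 0.02, frac_qx.max() + 0.02)
axis.set_ylim(frac_qy.min() - 0.02, frac_qy.max() + 0.02)
axis.set_xlabel("$Q_x$ (fractional)")
axis.set_ylabel("$Q_y$ (fractional)")
axis.legend()

###Output
_____no_output_____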
modeling/classification_models.ipynb
###Markdown Stock Price ClassificationCredits for inspiration for plot code: https://stackoverflow.com/questions/28200786/how-to-plot-scikit-learn-classification-report https://stackoverflow.com/questions/25009284/how-to-plot-roc-curve-in-python https://stackoverflow.com/questions/29656550/how-to-plot-pr-curve-over-10-folds-of-cross-validation-in-scikit-learnBy: Jared Berry ###Code # Quality of life import os import time import warnings from collections import defaultdict # I/O and data structures import pickle import pandas as pd import numpy as np # Classification models from sklearn.linear_model import LogisticRegression from sklearn.linear_model import RidgeClassifier from sklearn.svm import SVC from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from lightgbm import LGBMClassifier # Model selection from sklearn.model_selection import KFold from sklearn.model_selection import TimeSeriesSplit from sklearn.model_selection import GridSearchCV # Evaluation from sklearn import metrics import statsmodels.tsa.stattools as ts # Visualization import matplotlib.pyplot as plt import seaborn as sns # Magic %matplotlib inline %load_ext pycodestyle_magic sns.set_style('darkgrid') warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown Set-up Imports ###Code # Import modeling helper functions from modeling_funcs import * # Import inpath = "model_dictionary.pickle" with open(inpath, 'rb') as f: modeling = pickle.load(f) # Pull out the features dataframe train = modeling['features'] # Remove tickers with fewer than 5-years worth of data ticker_counts = (train['ticker'] .value_counts() .reset_index() .rename({'ticker':'count','index':'ticker'}, axis=1)) keep_tickers = (ticker_counts .loc[ticker_counts['count'] >= (252*5), 'ticker'] .tolist()) keep_idx = train['ticker'].isin(keep_tickers) train = train[keep_idx] ###Output _____no_output_____ ###Markdown Feature selection ###Code # Set a feature selection list (THINK ABOUT INFORMING THIS SELECTION WITH SHRINKAGE METHODS, I.E. LASSO REGRESSION) features = ['High', 'Low', 'Close', 'Volume', 'AdjClose', 'Year', 'Month', 'Week', 'Day', 'Dayofyear', 'Pct_Change_Monthly', 'Pct_Change_Yearly', 'RSI', 'Volatility', 'Yearly_Return_Rank', 'Monthly_Return_Rank', 'Rolling_Yearly_Mean_Positive_Days', 'Rolling_Monthly_Mean_Positive_Days', 'Rolling_Monthly_Mean_Price', 'Rolling_Yearly_Mean_Price', 'Momentum_Quality_Monthly', 'Momentum_Quality_Yearly', 'SPY_Trailing_Month_Return', 'open_l10', 'return_prev5_close_raw', 'return_prev10_close_raw', 'pe_ratio', 'debt_ratio', 'roa', 'beta'] # Select on features to pass to modeling machinery, along with necessary indexers X = train[features] tickers = train['ticker'].unique().tolist() # Choose a ticker - remove the tickers as above target = modeling['target_21_rel_return'] target = target[keep_idx] ###Output _____no_output_____ ###Markdown Modeling Panel-level Given that there are bound to be a number of systemic considerations that impact the price of a stock at any given point in time, it is prudent to perform and evaluate predictions across the panel of S&P 500 stocks in our sample, which will capture potential linkages between different stocks, and allow us to explore the possibility of using features generated from clustering to group like stocks in the panel. 
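###Markdown Before building the panel itself, here is a minimal sketch of the clustering idea mentioned above: summarise each ticker by a few average characteristics and attach the resulting cluster label as an extra feature. The choice of KMeans, the three summary columns, and `n_clusters=8` are illustrative assumptions, not part of the original pipeline.

###Code
from sklearn.cluster import KMeans

# Hypothetical cluster feature: average volatility / beta / yearly-return profile per ticker
cluster_cols = ['Volatility', 'beta', 'Pct_Change_Yearly']
ticker_profiles = train.groupby('ticker')[cluster_cols].mean().fillna(0)

kmeans = KMeans(n_clusters=8, random_state=0)
ticker_profiles['cluster'] = kmeans.fit_predict(ticker_profiles)

# The label could then be mapped back onto the panel, e.g.:
# X['ticker_cluster'] = train['ticker'].map(ticker_profiles['cluster'])
ticker_profiles['cluster'].value_counts()

###Output
_____no_output_____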
###Code # Create a panel-level copy y_p = target.copy() # Indexes of hold-out test data (the 21 days of data preceding the present day) test_idx = np.where(np.isnan(y_p))[0].tolist() # In order to ensure grouping is done properly, remove this data from a ticker-identification set as well ticker_locs = (train[['ticker','date_of_transaction']] .drop(train.index[test_idx]) .reset_index() .drop('index', axis=1)) # Create a panel-level copy; normalize by day X_p = X.copy(deep=True) X_p = (X_p.groupby(['Year', 'Month', 'Day']) .apply(lambda x: (x - np.mean(x))/np.std(x)) .fillna(0) .drop(['Year', 'Month', 'Day'], axis=1)) # Remove hold-out test data y_p = np.delete(y_p, test_idx) X_p_holdout = X_p.loc[X_p.index[test_idx]] X_p = X_p.drop(X_p.index[test_idx]) # Exponential Moving Average smoothing (skip if not) y_p_smoothed = np.zeros(y_p.shape[0]) for t in tickers: idx = ticker_locs.loc[ticker_locs['ticker'] == t].index.tolist() y_to_smooth = y_p[idx] # Compute EMA smoothing of target within ticker EMA = 0 gamma_ = 1 for ti in range(len(y_to_smooth)): EMA = gamma_*y_to_smooth[ti] + (1-gamma_)*EMA y_to_smooth[ti] = EMA y_p_smoothed[idx] = y_to_smooth # LGBM model_dict = fit_lgbm_classifier(X_p, y_p_smoothed, X_p_holdout, ticker="", ema_gamma=1, n_splits=12, cv_method='ts', groups=ticker_locs, labeled=False, label="lgbm_final", param_search=None, holdout_method='distributed', threshold_search=True, export=True) # kNN model_dict = fit_sklearn_classifier(X_p, y_p, X_p_holdout, ticker="", ema_gamma=1, n_splits=12, cv_method='panel', model=KNeighborsClassifier, groups=ticker_locs, label='kNN Classifier', param_search=None, holdout_method='distributed', threshold_search=False, n_jobs=-1, export=True) test = model_dict['preds_df'] test = test[test['split_number'] != 0] print(metrics.confusion_matrix(test['expected'], test['predicted'])) print(metrics.roc_auc_score(test['expected'], test['predicted'])) print(metrics.classification_report(test['expected'], test['predicted'])) ###Output _____no_output_____ ###Markdown Ticker-level At the heart of this analysis is a time-series prediction problem. As such, it is prudent to explore running models for each individual stock. We can envision averaging the results of both modeling approaches to incorporate the contribution of both into a final prediction. ###Code # Set parameters cv_method_ = 'tswindow' label_ = 'lgbm_final' results_dfs = [] for i, t in enumerate(tickers[:5]): # Pull only feature/target data for the relevant stocker X_t = X.loc[train['ticker'] == t,:].drop(['Year', 'Month', 'Day'], axis=1) y_t = np.array(target)[train['ticker'] == t] # Indexes of hold-out test data (the 21 days of data preceding the present day) test_idx = np.where(np.isnan(y_t))[0].tolist() # Simple feature-scaling - for now, replace missings with 0 (i.e. 
the mean of a normalized feature) X_t = X_t.apply(lambda x: (x - np.mean(x))/np.std(x)).fillna(0) # Remove hold-out test data y_t = np.delete(y_t, test_idx) y_t = np.array((pd.Series(y_t) - pd.Series(y_t).shift()).fillna(0).tolist()) X_t_holdout = X_t.loc[X_t.index[test_idx]] X_t = X_t.drop(X_t.index[test_idx]) # Fit and evaluate model_dict = fit_lgbm_classifier(X_t, y_t, X_t_holdout, ticker=t, ema_gamma=1, n_splits=12, cv_method='tswindow', labeled=False, param_search=None, holdout_method='distributed', threshold_search=True, export=False) results_dfs.append(model_dict) (pd.Series(y_t) - pd.Series(y_t).shift()) # Export ticker-level models model_outpath = "{}_{}_{}.pickle".format(slugify(label_), "all_tickers_", cv_method_) with open(model_outpath, 'wb') as f: pickle.dump(results_dfs, f) # Set parameters cv_method_ = 'ts' label_ = 'RF Window' model_ = RandomForestClassifier results_dfs = [] for i, t in enumerate(tickers): # Pull only feature/target data for the relevant stocker X_t = X.loc[train['ticker'] == t,:].drop(['Year', 'Month', 'Day'], axis=1) y_t = np.array(target)[train['ticker'] == t] # Indexes of hold-out test data (the 21 days of data preceding the present day) test_idx = np.where(np.isnan(y_t))[0].tolist() # Simple feature-scaling - for now, replace missings with 0 (i.e. the mean of a normalized feature) X_t = X_t.apply(lambda x: (x - np.mean(x))/np.std(x)).fillna(0) # Remove hold-out test data y_t = np.delete(y_t, test_idx) X_t_holdout = X_t.loc[X_t.index[test_idx]] X_t = X_t.drop(X_t.index[test_idx]) # Fit and evaluate model_dict = fit_sklearn_classifier(X_t, y_t, X_t_holdout, ticker=t, ema_gamma=1, n_splits=36, cv_method=cv_method_, model=model_, label=label_, param_search=None, holdout_method='distributed', threshold_search=True, n_estimators=1000, export=False) results_dfs.append(model_dict) # Export ticker-level models model_outpath = "{}_{}_{}.pickle".format(slugify(label_), "all_tickers", cv_method_) with open(model_outpath, 'wb') as f: pickle.dump(results_dfs, f) ###Output _____no_output_____ ###Markdown Evaluation Panel-level ###Code # Set path to pickle file containing panel-level model model_inpath = "lgbm_final_select_panel_ts.pickle" # Import with open(model_inpath, 'rb') as f: results_df = pickle.load(f) ticker_performance = results_df['preds_df'] try: feature_importances = pd.DataFrame(results_df['feature_importances'], columns=['feature', 'importance']) except KeyError: print("No variable importances for this model") ###Output _____no_output_____ ###Markdown Ticker-level ###Code # Set path to pickle file containing ticker-level model model_inpath = "lgbm_final_all_tickers_252_21_tswindow.pickle" # Import with open(model_inpath, 'rb') as f: results_dfs = pickle.load(f) # Stand up results dataframes performance_dfs = [] feature_importance_dfs = [] holdout_predictions = defaultdict(list) for r in results_dfs: performance_dfs.append(r['preds_df']) try: feature_importance_dfs.append(pd.DataFrame(r['feature_importances'], columns=['feature', 'importance'])) except KeyError: print("No variable importances for this model") holdout_predictions[r['preds_df'].ticker.unique().tolist()[0]] = r['holdout_probs'] ticker_performance = pd.concat(performance_dfs, axis=0) feature_importances = pd.concat(feature_importance_dfs, axis=0) ###Output _____no_output_____ ###Markdown Visualization ###Code # Remove unpopulated splits (training data never used for validation) ticker_performance = ticker_performance[ticker_performance['split_number'] != 0] # Average feature importances 
across all ticker-level models average_importances = feature_importances.groupby('feature').mean().sort_values('importance') average_importances.plot(kind='barh', title="Feature Importances - Ticker", legend=False, figsize=(16,12)) plt.savefig(fname='varimp_tickers_252_63_final.jpg', pad_inches=0, bbox_inches='tight') plt.show() # AUC Curve fpr, tpr, threshold = metrics.roc_curve(ticker_performance['expected'], ticker_performance['predicted_prob']) roc_auc = metrics.auc(fpr, tpr) plt.title('Receiver Operating Characteristic Curve') plt.plot(fpr, tpr, 'c', label = 'AUC = %0.2f' % roc_auc) plt.legend(loc = 'lower right') plt.plot([0, 1], [0, 1],'k--') plt.xlim([0, 1]) plt.ylim([0, 1]) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.savefig(fname='auc_ticker_lgbm_252_21_final.jpg', pad_inches=0, bbox_inches='tight') plt.show() # Precision-Recall Curves precision, recall, _ = metrics.precision_recall_curve(ticker_performance['expected'], ticker_performance['predicted_prob'], pos_label=1) average_precision = metrics.average_precision_score(ticker_performance['expected'], ticker_performance['predicted_prob']) plt.plot(recall, precision, label='area = %0.2f' % average_precision, color="green") plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('Recall') plt.ylabel('Precision') plt.title('Precision Recall Curve') plt.legend(loc="lower right") plt.savefig(fname='prc_ticker_lgbm_252_126_final.jpg', pad_inches=0, bbox_inches='tight') plt.show() # Classification Report fig, ax = plt.subplots(figsize=(12,8)) import matplotlib.pyplot as plt scores = metrics.precision_recall_fscore_support(ticker_performance['expected'], ticker_performance['predicted']) score_matrix = [[s[0] for s in scores[:3]], [s[1] for s in scores[:3]]] print(score_matrix) plt.imshow(score_matrix, interpolation='nearest', cmap='RdBu_r', vmin=0, vmax=1) plt.title('LightGBM Classification Report - window CV') plt.colorbar() x_tick_marks = np.arange(3) y_tick_marks = np.arange(2) plt.xticks(x_tick_marks, ['precision', 'recall', 'f1-score'], rotation=45, ) ax.yaxis.label.set_size(25) ax.xaxis.label.set_size(25) ax.set_title('LightGBM Classification Report - window CV', size=20) plt.yticks(y_tick_marks, ['Outperform', 'Underperform']) plt.tight_layout() plt.ylabel('Classes') plt.xlabel('Measures') plt.savefig(fname='lgbm_window_map_ticker_252_21_final.jpg', pad_inches=0, bbox_inches='tight') plt.show() print(metrics.classification_report(ticker_performance['expected'], ticker_performance['predicted'])) ###Output _____no_output_____
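###Markdown The pooled curves above mix every ticker together; a quick complementary check is to compute AUC ticker by ticker. This is a small illustrative sketch that assumes the ticker-level `ticker_performance` frame (which carries a `ticker` column) is the one currently loaded; tickers whose validation folds contain only one class are skipped, since AUC is undefined there.

###Code
per_ticker_auc = {}
for t, grp in ticker_performance.groupby('ticker'):
    if grp['expected'].nunique() < 2:
        continue  # AUC is undefined when only one class is present
    per_ticker_auc[t] = metrics.roc_auc_score(grp['expected'], grp['predicted_prob'])

per_ticker_auc = pd.Series(per_ticker_auc).sort_values()
per_ticker_auc.describe()

###Output
_____no_output_____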
lec/8/08-generative.ipynb
###Markdown MADE ###Code def to_one_hot(labels, d): one_hot = torch.FloatTensor(labels.shape[0], d).cuda() one_hot.zero_() one_hot.scatter_(1, labels.unsqueeze(1), 1) return one_hot # https://github.com/karpathy/pytorch-made class MaskedLinear(nn.Linear): def __init__(self, in_features, out_features, bias=True): super().__init__(in_features, out_features, bias) self.register_buffer('mask', torch.ones(out_features, in_features)) def set_mask(self, mask): self.mask.data.copy_(torch.from_numpy(mask.astype(np.uint8).T)) def forward(self, input): return F.linear(input, self.mask * self.weight, self.bias) class MADE(nn.Module): def __init__(self, input_shape, d, hidden_size=[512, 512, 512], ordering=None, one_hot_input=False): super().__init__() self.input_shape = input_shape self.nin = np.prod(input_shape) self.nout = self.nin * d self.d = d self.hidden_sizes = hidden_size self.ordering = np.arange(self.nin) if ordering is None else ordering self.one_hot_input = one_hot_input # define a simple MLP neural net self.net = [] hs = [self.nin * d if one_hot_input else self.nin] + self.hidden_sizes + [self.nout] for h0, h1 in zip(hs, hs[1:]): self.net.extend([ MaskedLinear(h0, h1), nn.ReLU(), ]) self.net.pop() # pop the last ReLU for the output layer self.net = nn.Sequential(*self.net) self.m = {} self.create_mask() # builds the initial self.m connectivity def create_mask(self): L = len(self.hidden_sizes) # sample the order of the inputs and the connectivity of all neurons self.m[-1] = self.ordering for l in range(L): self.m[l] = np.random.randint(self.m[l - 1].min(), self.nin - 1, size=self.hidden_sizes[l]) # construct the mask matrices masks = [self.m[l - 1][:, None] <= self.m[l][None, :] for l in range(L)] masks.append(self.m[L - 1][:, None] < self.m[-1][None, :]) masks[-1] = np.repeat(masks[-1], self.d, axis=1) if self.one_hot_input: masks[0] = np.repeat(masks[0], self.d, axis=0) # set the masks in all MaskedLinear layers layers = [l for l in self.net.modules() if isinstance(l, MaskedLinear)] for l, m in zip(layers, masks): l.set_mask(m) def forward(self, x): batch_size = x.shape[0] if self.one_hot_input: x = x.long().view(-1) x = to_one_hot(x, self.d) x = x.view(batch_size, -1) else: x = x.float() x = x.view(batch_size, self.nin) logits = self.net(x).view(batch_size, self.nin, self.d) return logits.permute(0, 2, 1).contiguous().view(batch_size, self.d, *self.input_shape) def loss(self, x): return F.cross_entropy(self(x), x.long()) def sample(self, n): samples = torch.zeros(n, self.nin).cuda() with torch.no_grad(): for i in range(self.nin): logits = self(samples).view(n, self.d, self.nin)[:, :, self.ordering[i]] probs = F.softmax(logits, dim=1) samples[:, self.ordering[i]] = torch.multinomial(probs, 1).squeeze(-1) samples = samples.view(n, *self.input_shape) return samples.cpu().numpy() def get_distribution(self): assert self.input_shape == (2,), 'Only available for 2D joint' x = np.mgrid[0:self.d, 0:self.d].reshape(2, self.d ** 2).T x = torch.LongTensor(x).cuda() log_probs = F.log_softmax(self(x), dim=1) distribution = torch.gather(log_probs, 1, x.unsqueeze(1)).squeeze(1) distribution = distribution.sum(dim=1) return distribution.exp().view(self.d, self.d).detach().cpu().numpy() def load_dataset(dirname, dataset): mnist_trainset = dataset(root=dirname, train=True, download=True, transform=None) mnist_testset = dataset(root=dirname, train=False, download=True, transform=None) train_data, test_data = mnist_trainset.data, mnist_testset.data train_data = (train_data > 127).numpy().astype('uint8') 
test_data = (test_data > 127).numpy().astype('uint8') return np.transpose([train_data], (1, 0, 2, 3)), np.transpose([test_data], (1, 0, 2, 3)) def load_mnist(dirname): return load_dataset(dirname, datasets.MNIST) def load_fashionmnist(dirname): return load_dataset(dirname, datasets.FashionMNIST) def train(model, train_loader, optimizer, epoch, grad_clip=None, quiet=False): model.train() train_losses = [] for x in train_loader: x = x.cuda().contiguous() loss = model.loss(x) optimizer.zero_grad() loss.backward() if grad_clip: torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip) optimizer.step() train_losses.append(loss.item()) return train_losses def eval_loss(model, data_loader, quiet=False): model.eval() total_loss = 0 with torch.no_grad(): for x in data_loader: x = x.cuda().contiguous() loss = model.loss(x) total_loss += loss * x.shape[0] avg_loss = total_loss / len(data_loader.dataset) return avg_loss.item() def train_epochs(model, train_loader, test_loader, train_args, quiet=False): epochs, lr = train_args['epochs'], train_args['lr'] grad_clip = train_args.get('grad_clip', None) optimizer = optim.Adam(model.parameters(), lr=lr) train_losses = [] test_losses = [eval_loss(model, test_loader)] for epoch in range(epochs): model.train() train_losses.extend(train(model, train_loader, optimizer, epoch, grad_clip)) test_loss = eval_loss(model, test_loader) test_losses.append(test_loss) if not quiet: print(f'Epoch {epoch}, Test loss {test_loss:.4f}') return train_losses, test_losses train_data, test_data = load_mnist('data') train_data_f, test_data_f = load_fashionmnist('datafashion') def train_model(model, train_data, test_data, epochs=10, batch_size=128): train_loader = data.DataLoader(train_data, batch_size=batch_size, shuffle=True) test_loader = data.DataLoader(test_data, batch_size=batch_size) train_losses, test_losses = train_epochs(model, train_loader, test_loader, dict(epochs=epochs, lr=1e-3)) return model, train_losses, test_losses model = MADE((1, 28, 28), 2, hidden_size=[512, 512]).cuda() model, train_losses, test_losses = train_model(model, train_data, test_data, epochs=20) def plot_losses(train_losses, test_losses): fig = plt.figure(figsize=(6, 4)) ax = fig.add_subplot(111) n_epochs = len(test_losses) - 1 x_train = np.linspace(0, n_epochs, len(train_losses)) x_test = np.arange(n_epochs + 1) ax.plot(x_train, train_losses, label='Ошибка на тренировочном множестве') ax.plot(x_test, test_losses, label='Ошибка на тестовом множестве') ax.legend() plt.xlabel('Эпоха обучения') plt.ylabel('Ошибка') plot_losses(train_losses, test_losses) def plot_sample_grid(im_samples, nrows): grid_img = make_grid(im_samples, nrow=nrows) fig = plt.figure() plt.imshow(grid_img.permute(1, 2, 0)) plt.axis('off') im_samples = torch.FloatTensor(model.sample(49)) plot_sample_grid(im_samples, 7) ###Output _____no_output_____ ###Markdown PixelCNN ###Code class MaskConv2d(nn.Conv2d): def __init__(self, mask_type, *args, conditional_size=None, color_conditioning=False, **kwargs): assert mask_type == 'A' or mask_type == 'B' super().__init__(*args, **kwargs) self.conditional_size = conditional_size self.color_conditioning = color_conditioning self.register_buffer('mask', torch.zeros_like(self.weight)) self.create_mask(mask_type) if self.conditional_size: if len(self.conditional_size) == 1: self.cond_op = nn.Linear(conditional_size[0], self.out_channels) else: self.cond_op = nn.Conv2d(conditional_size[0], self.out_channels, kernel_size=3, padding=1) def forward(self, input, cond=None): batch_size = 
input.shape[0] out = F.conv2d(input, self.weight * self.mask, self.bias, self.stride, self.padding, self.dilation, self.groups) if self.conditional_size: if len(self.conditional_size) == 1: # Broadcast across height and width of image and add as conditional bias out = out + self.cond_op(cond).view(batch_size, -1, 1, 1) else: out = out + self.cond_op(cond) return out def create_mask(self, mask_type): k = self.kernel_size[0] self.mask[:, :, :k // 2] = 1 self.mask[:, :, k // 2, :k // 2] = 1 if self.color_conditioning: assert self.in_channels % 3 == 0 and self.out_channels % 3 == 0 one_third_in, one_third_out = self.in_channels // 3, self.out_channels // 3 if mask_type == 'B': self.mask[:one_third_out, :one_third_in, k // 2, k // 2] = 1 self.mask[one_third_out:2*one_third_out, :2*one_third_in, k // 2, k // 2] = 1 self.mask[2*one_third_out:, :, k // 2, k // 2] = 1 else: self.mask[one_third_out:2*one_third_out, :one_third_in, k // 2, k // 2] = 1 self.mask[2*one_third_out:, :2*one_third_in, k // 2, k // 2] = 1 else: if mask_type == 'B': self.mask[:, :, k // 2, k // 2] = 1 class ResBlock(nn.Module): def __init__(self, in_channels, **kwargs): super().__init__() self.block = nn.ModuleList([ nn.ReLU(), MaskConv2d('B', in_channels, in_channels // 2, 1, **kwargs), nn.ReLU(), MaskConv2d('B', in_channels // 2, in_channels // 2, 7, padding=3, **kwargs), nn.ReLU(), MaskConv2d('B', in_channels // 2, in_channels, 1, **kwargs) ]) def forward(self, x, cond=None): out = x for layer in self.block: if isinstance(layer, MaskConv2d): out = layer(out, cond=cond) else: out = layer(out) return out + x class LayerNorm(nn.LayerNorm): def __init__(self, color_conditioning, *args, **kwargs): super().__init__(*args, **kwargs) self.color_conditioning = color_conditioning def forward(self, x): x = x.permute(0, 2, 3, 1).contiguous() x_shape = x.shape if self.color_conditioning: x = x.contiguous().view(*(x_shape[:-1] + (3, -1))) x = super().forward(x) if self.color_conditioning: x = x.view(*x_shape) return x.permute(0, 3, 1, 2).contiguous() class PixelCNN(nn.Module): def __init__(self, input_shape, n_colors, n_filters=64, kernel_size=7, n_layers=5, conditional_size=None, use_resblock=False, color_conditioning=False): super().__init__() assert n_layers >= 2 n_channels = input_shape[0] kwargs = dict(conditional_size=conditional_size, color_conditioning=color_conditioning) if use_resblock: block_init = lambda: ResBlock(n_filters, **kwargs) else: block_init = lambda: MaskConv2d('B', n_filters, n_filters, kernel_size=kernel_size, padding=kernel_size // 2, **kwargs) model = nn.ModuleList([MaskConv2d('A', n_channels, n_filters, kernel_size=kernel_size, padding=kernel_size // 2, **kwargs)]) for _ in range(n_layers): if color_conditioning: model.append(LayerNorm(color_conditioning, n_filters // 3)) else: model.append(LayerNorm(color_conditioning, n_filters)) model.extend([nn.ReLU(), block_init()]) model.extend([nn.ReLU(), MaskConv2d('B', n_filters, n_filters, 1, **kwargs)]) model.extend([nn.ReLU(), MaskConv2d('B', n_filters, n_colors * n_channels, 1, **kwargs)]) if conditional_size: if len(conditional_size) == 1: self.cond_op = lambda x: x else: self.cond_op = nn.Sequential( nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU() ) self.net = model self.input_shape = input_shape self.n_colors = n_colors self.n_channels = n_channels self.color_conditioning = color_conditioning self.conditional_size = conditional_size def forward(self, x, cond=None): batch_size = 
x.shape[0] out = (x.float() / (self.n_colors - 1) - 0.5) / 0.5 if self.conditional_size: cond = self.cond_op(cond) for layer in self.net: if isinstance(layer, MaskConv2d) or isinstance(layer, ResBlock): out = layer(out, cond=cond) else: out = layer(out) if self.color_conditioning: return out.view(batch_size, self.n_channels, self.n_colors, *self.input_shape[1:]).permute(0, 2, 1, 3, 4) else: return out.view(batch_size, self.n_colors, *self.input_shape) def loss(self, x, cond=None): return F.cross_entropy(self(x, cond=cond), x.long()) def sample(self, n, cond=None): samples = torch.zeros(n, *self.input_shape).cuda() with torch.no_grad(): for r in range(self.input_shape[1]): for c in range(self.input_shape[2]): for k in range(self.n_channels): logits = self(samples, cond=cond)[:, :, k, r, c] probs = F.softmax(logits, dim=1) samples[:, k, r, c] = torch.multinomial(probs, 1).squeeze(-1) return samples.permute(0, 2, 3, 1).cpu().numpy() model = PixelCNN((1, 28, 28), 2, n_layers=5).cuda() model_pixelcnn, train_losses_pixelcnn, test_losses_pixelcnn = train_model(model, train_data, test_data) plot_losses(train_losses_pixelcnn, test_losses_pixelcnn) im_samples_pixelcnn = torch.FloatTensor(model_pixelcnn.sample(49)) plot_sample_grid(im_samples_pixelcnn.permute(0,3,1,2), 7) model = PixelCNN((1, 28, 28), 2, n_layers=5).cuda() model_pixelcnn_f, train_losses_pixelcnn_f, test_losses_pixelcnn_f = train_model(model, train_data_f, test_data_f) im_samples_pixelcnn_f = torch.FloatTensor(model_pixelcnn_f.sample(49)) plot_sample_grid(im_samples_pixelcnn_f.permute(0,3,1,2), 7) ###Output _____no_output_____
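###Markdown To make the masking concrete, the cell below instantiates the `MaskConv2d` layer defined above with one input and one output channel and a 5x5 kernel, and prints the two mask types: type 'A' zeroes the centre weight (used in the first layer, so a pixel cannot condition on itself), while type 'B' keeps it (used in every later layer).

###Code
mask_a = MaskConv2d('A', 1, 1, kernel_size=5, padding=2)
mask_b = MaskConv2d('B', 1, 1, kernel_size=5, padding=2)

# Rows above the centre and pixels to its left are visible in both cases;
# only the centre entry differs between the two mask types.
print("Type A mask:")
print(mask_a.mask[0, 0])
print("Type B mask:")
print(mask_b.mask[0, 0])

###Output
_____no_output_____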
nbody.ipynb
###Markdown DESI and the fastest supercomputer in the West Understanding _how_ the 30 million galaxies surveyed by DESI actually formed in the Universe is hard, really hard. So hard in fact that DESI scientists exploit [Summit](https://www.olcf.ornl.gov/summit/), the world's fastest supercomputer[1](Footnotes) at Oak Ridge National Lab to calculate how the distribution of galaxies should look depending on the type of Dark Energy: Costing a cool 325 million dollars to build, Summit is capable of calculating addition and multiplication operations $1.486 \times 10^{17}$ times a second, equivalent to $1.486 \times 10^{11}$ MegaFlops or MFLOPS. For comparison, let's see what Binder provides (you'll need some patience, maybe leave this to later): ###Code _ = flops() ###Output _____no_output_____ ###Markdown So Summit is at least a billion times more powerful! With Summit, we can resolve the finest details of the distribution of _dark matter_ that all galaxies trace: Here the brightest regions signify the densest regions of dark matter in the Universe, in which we expect to find more galaxies (for some zoom-ins, [click here](https://lgarrison.github.io/halos/)). The video below shows that we have observed this predicted structure in the distribution of real galaxies observed with experiments prior to DESI: ###Code YouTubeVideo('08LBltePDZw', width=800, height=400) ###Output _____no_output_____ ###Markdown [Dark matter](https://en.wikipedia.org/wiki/Dark_matter:~:text=Dark%20matter%20is%20a%20form,%E2%88%9227%20kg%2Fm3.) is a pervasive element in our Universe, making up 25% of the total (energy) density. With Dark Energy and the common atom ("baryonic matter") making up the remainder. We know next to nothing about Dark Matter, beyond its gravitational attraction of other matter and light in the Universe. Fortunately, the equations that describe the evolution of dark matter, rather than the [complex formation of galaxies](https://www.space.com/15680-galaxies.html), are relatively simple for the Universe in which we seem to live. All that is required is to track the gravitational attraction of dark matter particles (on an expanding stage). We can predict the evolution of dark matter by sampling the gravitational force, velocity and position with a set of (fictitious) particles that each represent a 'clump' of dark matter with some total mass. Of course, this means we cannot solve for the distribution of dark matter within these clump sized regions, but just the distribution amongst clumps that leads to the structure you can see above. With Summit, the smallest clump we can resolve is not far from the combined mass of all the stars in the [Milky Way](https://www.nasa.gov/feature/goddard/2019/what-does-the-milky-way-weigh-hubble-and-gaia-investigate): To start, we'll initially postition a set of clumps at random positions within a 3D cube and give them zero initial velocities. Velocities will be generated at subsequent times as the ($1/r^2$) gravitational attraction of a particle to all others causes a net acceleration. ###Code def init_dof(npt=1): # Create a set of particles at random positions in a box, which will soon predict the distribution of dark matter # as we see above. 
xs = np.random.uniform(0., 1., npt) ys = np.random.uniform(0., 1., npt) zs = np.random.uniform(0., 1., npt) pos = np.vstack((xs,ys,zs)).T vel = np.zeros_like(pos) return pos, vel ###Output _____no_output_____ ###Markdown The gravitational force experienced by each dark matter particle is [Newton's](https://en.wikipedia.org/wiki/Isaac_Newton) $F = \frac{GmM}{r^2} \hat r$ that you may be familiar with. We just need to do a thorough job on the book keeping required for to calculate the total force experienced by one particle due to all others: ###Code def g_at_pos(pos, particles, mass, epsilon=1.0, doimages=True): # eqn. (10) of http://www.skiesanduniverses.org/resources/KlypinNbody.pdf. # Here epsilon is a fudge factor to stop a blow up of the gravitational force at zero distance. delta_r = particles - pos result = mass * np.sum(delta_r / (delta_r**2. + epsilon**2.)**(3./2.), axis=0) # If 'pos' is one of the particles, then technically we've including the "self-force" # But such a pos will have delta_r = 0, and thus contribute nothing to the total force, as it should! if doimages: # Our simulation assumes periodic boundary conditions, so for the acceleration of each particle, there's a # corresponding acceleration due to the image of the particle produced by applying periodic shifts to its # position. shift = np.array([-1, 0, 1]) images = [] for triple in itertools.product(shift, repeat=3): images.append(triple) images.remove((0, 0, 0)) images = np.array(images) for image in images: delta_r_displaced = delta_r + image result += mass * np.sum(delta_r_displaced / (delta_r_displaced**2. + epsilon**2.)**(3./2.), axis=0) return result ###Output _____no_output_____ ###Markdown In a remarkable experiment in 1941, Erik Holmberg used the fact that the brightness of light decays with distance at the same ($1/r^2$) rate as gravity. To calculate the total force on a 'particle' in his 'simulation', Holmberg placed a lightbulb at the position of each particle and calculated the effective force on a given particle by measuring the total brightness at each point! The figure below illustrates this idea.Try running the following cell a few times! You'll get a different random layout of "lightbulbs" each time. ###Code fig, ax = plt.subplots(1, 1, figsize=(5,5), dpi=150) xmin, xmax, ymin, ymax = (0., 1., 0., 1.) Ngrid = 100 xx, yy = np.meshgrid(np.linspace(xmin, xmax, Ngrid), np.linspace(ymin, ymax, Ngrid)) epsilon = 0.1 weights = np.zeros_like(xx) npt = 10 pos, vel = init_dof(npt=npt) for par in pos: weights += 1. / ((xx - par[0])**2 + (yy - par[1])**2 + epsilon**2.) ax.imshow(weights, extent=(xmin, xmax, ymin, ymax), cmap=plt.cm.afmhot, alpha=1., origin='lower') ax.scatter(pos[:,0], pos[:,1], color='k', edgecolor='w') ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) ax.set_title(f"Holmberg's Lightbulb Experiment with $N={npt}$ Bulbs") ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.tight_layout() ###Output _____no_output_____ ###Markdown This work was the original concept of gravitational 'N-body' simulations that are described here. It's almost criminal that only 118 authors have referenced this groundbreaking idea! Today, given the mini supercomputers we often have at our fingertips, we can determine the final distribution of dark matter more accurately with computers than light bulbs. 
By evolving an initial homogeneous distribution (a nearly uniform distribution of dark matter clumps, as the universe produces in the Big Bang), we can accurately predict the locations of galaxies (the places where the biggest dark matter clumps form).To do this, we just need to calculate the acceleration on each particle at a series of time steps and update the velocity and position accordingly according to the acceleration that particle experiences. You'll be familiar with this as the sensation you feel as a car turns a corner, or speeds up. ###Code # We'll sample the equations of motion in discrete time steps. dt = 5e-4 nsteps = 500 timesteps = np.linspace(0, (nsteps)*dt, nsteps, endpoint=False) # Number and mass of particles npt = 50 mass = 0.25 # Whether to draw arrows for the acceleration and velocity draw_acc = False draw_vel = False # A small drag term to simulate the real drag dark matter particles experience due to the expanding universe drag = 1e-2 ###Output _____no_output_____ ###Markdown Now we simply have to run the simulation! ###Code fig, ax = plt.subplots(1,1, figsize=(5,5), dpi=150) ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) # Reinitialise particles. pos, vel = init_dof(npt=npt) # A helper function to make a nice-looking legend for our arrows # from https://stackoverflow.com/a/22349717 def make_legend_arrow(legend, orig_handle, xdescent, ydescent, width, height, fontsize): p = matplotlib.patches.FancyArrow(0, 0.5*height, width, 0, length_includes_head=True, head_width=0.75*height) return p for index_in_timestep, time in enumerate(timesteps): ax.clear() ax.set_title(f'N-body simulation with $N={npt}$ particles') step_label = ax.text(0.03, .97, f'Step {index_in_timestep}', transform=ax.transAxes, verticalalignment='top', c='k', bbox=dict(color='w', alpha=0.8)) dvel = np.zeros_like(vel) dpos = np.zeros_like(pos) acc = np.zeros_like(pos) for index_in_particle in range(npt): acc[index_in_particle] = g_at_pos(pos[index_in_particle], pos, mass, epsilon=0.1) # Update velocities. dvel[index_in_particle] = dt * acc[index_in_particle] # Update positions. dpos[index_in_particle] = dt * vel[index_in_particle] vel += dvel - drag*vel pos += dpos # Our simulation has periodic boundaries, if you go off one side you come back on the other! pos = pos % 1. ax.scatter(pos[:,0], pos[:,1], color='darkorange', edgecolor='w') # Draw arrows representing the velocity and acceleration vectors, if requested # The code here is a little verbose to get nice-looking arrows in the legend arrows = [] if draw_vel: ax.quiver(pos[:,0], pos[:,1], vel[:,0], vel[:,1], color='w', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Velocity', color='w')] if draw_acc: ax.quiver(pos[:,0], pos[:,1], acc[:,0], acc[:,1], color='darkorange', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Accel', color='darkorange')] if draw_vel or draw_acc: ax.legend(handles=arrows, handler_map={matplotlib.patches.FancyArrow:matplotlib.legend_handler.HandlerPatch(patch_func=make_legend_arrow)}, facecolor='k', edgecolor='white', framealpha=0.8, loc='lower right') ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.canvas.draw() ###Output _____no_output_____ ###Markdown Try playing around with the settings! More than 100 particles won't run very smoothly, however.With the default settings, you'll find that the particles tend fall into one or two clumps before too long. This is due to the drag that we put in. 
The drag simulates the effect that the expanding universe has on real dark matter particles, which is to slow them down and cause them to group together. These clumps are known as *halos*, and form "galactic nurseries" where gas can gather to form new stars and galaxies. Now, when DESI scientists run huge simulations, such as those run on Summit, a total of ~48 _trillion_ particles are solved for. Don't try this here! But the results are really quite extraordinary (skip to 6 mins 45 seconds if you're impatient to see the result!): ###Code YouTubeVideo('LQMLFryA_7k', width=800, height=400) ###Output _____no_output_____ ###Markdown ###Code from google.colab import drive drive.mount('/content/drive') import sys sys.path.append('/content/drive/MyDrive/desihigh') import time import astropy import itertools import matplotlib import numpy as np import pylab as pl import matplotlib.pyplot as plt import astropy.units as u from astropy.cosmology import FlatLambdaCDM from IPython.display import YouTubeVideo from tools.flops import flops %matplotlib notebook plt.style.use('dark_background') ###Output _____no_output_____ ###Markdown DESI and the fastest supercomputer in the West Understanding _how_ the 30 million galaxies surveyed by DESI actually formed in the Universe is hard, really hard. So hard in fact that DESI scientists exploit [Summit](https://www.olcf.ornl.gov/summit/), the world's fastest supercomputer[1](Footnotes) at Oak Ridge National Lab to calculate how the distribution of galaxies should look depending on the type of Dark Energy: Costing a cool 325 million dollars to build, Summit is capable of calculating addition and multiplication operations $1.486 \times 10^{17}$ times a second, equivalent to $1.486 \times 10^{11}$ MegaFlops or MFLOPS. For comparison, let's see what Binder provides (you'll need some patience, maybe leave this to later): ###Code _ = flops() ###Output _____no_output_____ ###Markdown So Summit is at least a billion times more powerful! With Summit, we can resolve the finest details of the distribution of _dark matter_ that all galaxies trace: Here the brightest regions signify the densest regions of dark matter in the Universe, in which we expect to find more galaxies (for some zoom-ins, [click here](https://lgarrison.github.io/halos/)). The video below shows that we have observed this predicted structure in the distribution of real galaxies observed with experiments prior to DESI: ###Code YouTubeVideo('08LBltePDZw', width=800, height=400) ###Output _____no_output_____ ###Markdown [Dark matter](https://en.wikipedia.org/wiki/Dark_matter:~:text=Dark%20matter%20is%20a%20form,%E2%88%9227%20kg%2Fm3.) is a pervasive element in our Universe, making up 25% of the total (energy) density. With Dark Energy and the common atom ("baryonic matter") making up the remainder. We know next to nothing about Dark Matter, beyond its gravitational attraction of other matter and light in the Universe. Fortunately, the equations that describe the evolution of dark matter, rather than the [complex formation of galaxies](https://www.space.com/15680-galaxies.html), are relatively simple for the Universe in which we seem to live. All that is required is to track the gravitational attraction of dark matter particles (on an expanding stage). We can predict the evolution of dark matter by sampling the gravitational force, velocity and position with a set of (fictitious) particles that each represent a 'clump' of dark matter with some total mass. 
Of course, this means we cannot solve for the distribution of dark matter within these clump sized regions, but just the distribution amongst clumps that leads to the structure you can see above. With Summit, the smallest clump we can resolve is not far from the combined mass of all the stars in the [Milky Way](https://www.nasa.gov/feature/goddard/2019/what-does-the-milky-way-weigh-hubble-and-gaia-investigate): To start, we'll initially postition a set of clumps at random positions within a 3D cube and give them zero initial velocities. Velocities will be generated at subsequent times as the ($1/r^2$) gravitational attraction of a particle to all others causes a net acceleration. ###Code def init_dof(npt=1): # Create a set of particles at random positions in a box, which will soon predict the distribution of dark matter # as we see above. xs = np.random.uniform(0., 1., npt) ys = np.random.uniform(0., 1., npt) zs = np.random.uniform(0., 1., npt) pos = np.vstack((xs,ys,zs)).T vel = np.zeros_like(pos) return pos, vel ###Output _____no_output_____ ###Markdown The gravitational force experienced by each dark matter particle is [Newton's](https://en.wikipedia.org/wiki/Isaac_Newton) $F = \frac{GmM}{r^2} \hat r$ that you may be familiar with. We just need to do a thorough job on the book keeping required for to calculate the total force experienced by one particle due to all others: ###Code def g_at_pos(pos, particles, mass, epsilon=1.0, doimages=True): # eqn. (10) of http://www.skiesanduniverses.org/resources/KlypinNbody.pdf. # Here epsilon is a fudge factor to stop a blow up of the gravitational force at zero distance. delta_r = particles - pos result = mass * np.sum(delta_r / (delta_r**2. + epsilon**2.)**(3./2.), axis=0) # If 'pos' is one of the particles, then technically we've including the "self-force" # But such a pos will have delta_r = 0, and thus contribute nothing to the total force, as it should! if doimages: # Our simulation assumes periodic boundary conditions, so for the acceleration of each particle, there's a # corresponding acceleration due to the image of the particle produced by applying periodic shifts to its # position. shift = np.array([-1, 0, 1]) images = [] for triple in itertools.product(shift, repeat=3): images.append(triple) images.remove((0, 0, 0)) images = np.array(images) for image in images: delta_r_displaced = delta_r + image result += mass * np.sum(delta_r_displaced / (delta_r_displaced**2. + epsilon**2.)**(3./2.), axis=0) return result ###Output _____no_output_____ ###Markdown In a remarkable experiment in 1941, Erik Holmberg used the fact that the brightness of light decays with distance at the same ($1/r^2$) rate as gravity. To calculate the total force on a 'particle' in his 'simulation', Holmberg placed a lightbulb at the position of each particle and calculated the effective force on a given particle by measuring the total brightness at each point! The figure below illustrates this idea.Try running the following cell a few times! You'll get a different random layout of "lightbulbs" each time. ###Code fig, ax = plt.subplots(1, 1, figsize=(5,5), dpi=150) xmin, xmax, ymin, ymax = (0., 1., 0., 1.) Ngrid = 100 xx, yy = np.meshgrid(np.linspace(xmin, xmax, Ngrid), np.linspace(ymin, ymax, Ngrid)) epsilon = 0.1 weights = np.zeros_like(xx) npt = 10 pos, vel = init_dof(npt=npt) for par in pos: weights += 1. / ((xx - par[0])**2 + (yy - par[1])**2 + epsilon**2.) 
ax.imshow(weights, extent=(xmin, xmax, ymin, ymax), cmap=plt.cm.afmhot, alpha=1., origin='lower') ax.scatter(pos[:,0], pos[:,1], color='k', edgecolor='w') ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) ax.set_title(f"Holmberg's Lightbulb Experiment with $N={npt}$ Bulbs") ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.tight_layout() ###Output _____no_output_____ ###Markdown This work was the original concept of gravitational 'N-body' simulations that are described here. It's almost criminal that only 118 authors have referenced this groundbreaking idea! Today, given the mini supercomputers we often have at our fingertips, we can determine the final distribution of dark matter more accurately with computers than light bulbs. By evolving an initial homogeneous distribution (a nearly uniform distribution of dark matter clumps, as the universe produces in the Big Bang), we can accurately predict the locations of galaxies (the places where the biggest dark matter clumps form).To do this, we just need to calculate the acceleration on each particle at a series of time steps and update the velocity and position accordingly according to the acceleration that particle experiences. You'll be familiar with this as the sensation you feel as a car turns a corner, or speeds up. ###Code # We'll sample the equations of motion in discrete time steps. dt = 5e-4 nsteps = 500 timesteps = np.linspace(0, (nsteps)*dt, nsteps, endpoint=False) # Number and mass of particles npt = 50 mass = 0.25 # Whether to draw arrows for the acceleration and velocity draw_acc = False draw_vel = False # A small drag term to simulate the real drag dark matter particles experience due to the expanding universe drag = 1e-2 ###Output _____no_output_____ ###Markdown Now we simply have to run the simulation! ###Code fig, ax = plt.subplots(1,1, figsize=(5,5), dpi=150) ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) # Reinitialise particles. pos, vel = init_dof(npt=npt) # A helper function to make a nice-looking legend for our arrows # from https://stackoverflow.com/a/22349717 def make_legend_arrow(legend, orig_handle, xdescent, ydescent, width, height, fontsize): p = matplotlib.patches.FancyArrow(0, 0.5*height, width, 0, length_includes_head=True, head_width=0.75*height) return p for index_in_timestep, time in enumerate(timesteps): ax.clear() ax.set_title(f'N-body simulation with $N={npt}$ particles') step_label = ax.text(0.03, .97, f'Step {index_in_timestep}', transform=ax.transAxes, verticalalignment='top', c='k', bbox=dict(color='w', alpha=0.8)) dvel = np.zeros_like(vel) dpos = np.zeros_like(pos) acc = np.zeros_like(pos) for index_in_particle in range(npt): acc[index_in_particle] = g_at_pos(pos[index_in_particle], pos, mass, epsilon=0.1) # Update velocities. dvel[index_in_particle] = dt * acc[index_in_particle] # Update positions. dpos[index_in_particle] = dt * vel[index_in_particle] vel += dvel - drag*vel pos += dpos # Our simulation has periodic boundaries, if you go off one side you come back on the other! pos = pos % 1. 
ax.scatter(pos[:,0], pos[:,1], color='darkorange', edgecolor='w') # Draw arrows representing the velocity and acceleration vectors, if requested # The code here is a little verbose to get nice-looking arrows in the legend arrows = [] if draw_vel: ax.quiver(pos[:,0], pos[:,1], vel[:,0], vel[:,1], color='w', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Velocity', color='w')] if draw_acc: ax.quiver(pos[:,0], pos[:,1], acc[:,0], acc[:,1], color='darkorange', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Accel', color='darkorange')] if draw_vel or draw_acc: ax.legend(handles=arrows, handler_map={matplotlib.patches.FancyArrow:matplotlib.legend_handler.HandlerPatch(patch_func=make_legend_arrow)}, facecolor='k', edgecolor='white', framealpha=0.8, loc='lower right') ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.canvas.draw() ###Output _____no_output_____ ###Markdown Try playing around with the settings! More than 100 particles won't run very smoothly, however.With the default settings, you'll find that the particles tend fall into one or two clumps before too long. This is due to the drag that we put in. The drag simulates the effect that the expanding universe has on real dark matter particles, which is to slow them down and cause them to group together. These clumps are known as *halos*, and form "galactic nurseries" where gas can gather to form new stars and galaxies. Now, when DESI scientists run huge simulations, such as those run on Summit, a total of ~48 _trillion_ particles are solved for. Don't try this here! But the results are really quite extraordinary (skip to 6 mins 45 seconds if you're impatient to see the result!): ###Code YouTubeVideo('LQMLFryA_7k', width=800, height=400) ###Output _____no_output_____ ###Markdown ###Code from google.colab import drive drive.mount('/content/drive') from IPython.display import clear_output from time import sleep import sys sys.path.append('/content/drive/MyDrive/desihigh') import time import astropy import itertools import matplotlib import numpy as np import pylab as pl import matplotlib.pyplot as plt import astropy.units as u from astropy.cosmology import FlatLambdaCDM from IPython.display import YouTubeVideo from tools.flops import flops #%matplotlib notebook %matplotlib inline plt.style.use('dark_background') ###Output _____no_output_____ ###Markdown DESI and the fastest supercomputer in the West Understanding _how_ the 30 million galaxies surveyed by DESI actually formed in the Universe is hard, really hard. So hard in fact that DESI scientists exploit [Summit](https://www.olcf.ornl.gov/summit/), the world's fastest supercomputer[1](Footnotes) at Oak Ridge National Lab to calculate how the distribution of galaxies should look depending on the type of Dark Energy: Costing a cool 325 million dollars to build, Summit is capable of calculating addition and multiplication operations $1.486 \times 10^{17}$ times a second, equivalent to $1.486 \times 10^{11}$ MegaFlops or MFLOPS. For comparison, let's see what Binder provides (you'll need some patience, maybe leave this to later): ###Code _ = flops() ###Output _____no_output_____ ###Markdown So Summit is at least a billion times more powerful! With Summit, we can resolve the finest details of the distribution of _dark matter_ that all galaxies trace: Here the brightest regions signify the densest regions of dark matter in the Universe, in which we expect to find more galaxies (for some zoom-ins, [click here](https://lgarrison.github.io/halos/)). 
The video below shows that we have observed this predicted structure in the distribution of real galaxies observed with experiments prior to DESI: ###Code YouTubeVideo('08LBltePDZw', width=800, height=400) ###Output _____no_output_____ ###Markdown [Dark matter](https://en.wikipedia.org/wiki/Dark_matter:~:text=Dark%20matter%20is%20a%20form,%E2%88%9227%20kg%2Fm3.) is a pervasive element in our Universe, making up 25% of the total (energy) density. With Dark Energy and the common atom ("baryonic matter") making up the remainder. We know next to nothing about Dark Matter, beyond its gravitational attraction of other matter and light in the Universe. Fortunately, the equations that describe the evolution of dark matter, rather than the [complex formation of galaxies](https://www.space.com/15680-galaxies.html), are relatively simple for the Universe in which we seem to live. All that is required is to track the gravitational attraction of dark matter particles (on an expanding stage). We can predict the evolution of dark matter by sampling the gravitational force, velocity and position with a set of (fictitious) particles that each represent a 'clump' of dark matter with some total mass. Of course, this means we cannot solve for the distribution of dark matter within these clump sized regions, but just the distribution amongst clumps that leads to the structure you can see above. With Summit, the smallest clump we can resolve is not far from the combined mass of all the stars in the [Milky Way](https://www.nasa.gov/feature/goddard/2019/what-does-the-milky-way-weigh-hubble-and-gaia-investigate): To start, we'll initially postition a set of clumps at random positions within a 3D cube and give them zero initial velocities. Velocities will be generated at subsequent times as the ($1/r^2$) gravitational attraction of a particle to all others causes a net acceleration. ###Code def init_dof(npt=1): # Create a set of particles at random positions in a box, which will soon predict the distribution of dark matter # as we see above. xs = np.random.uniform(0., 1., npt) ys = np.random.uniform(0., 1., npt) zs = np.random.uniform(0., 1., npt) pos = np.vstack((xs,ys,zs)).T vel = np.zeros_like(pos) return pos, vel pos[0][0] = 1 pos[0] ls = [] for i in ls: for j in pos[i] mass_r = np.random.uniform(0., 1., npt) mass_r ###Output _____no_output_____ ###Markdown The gravitational force experienced by each dark matter particle is [Newton's](https://en.wikipedia.org/wiki/Isaac_Newton) $F = \frac{GmM}{r^2} \hat r$ that you may be familiar with. We just need to do a thorough job on the book keeping required for to calculate the total force experienced by one particle due to all others: ###Code def g_at_pos(pos, particles, mass, epsilon=1.0, doimages=True): # eqn. (10) of http://www.skiesanduniverses.org/resources/KlypinNbody.pdf. # Here epsilon is a fudge factor to stop a blow up of the gravitational force at zero distance. delta_r = particles - pos result = mass * np.sum(delta_r / (delta_r**2. + epsilon**2.)**(3./2.), axis=0) # If 'pos' is one of the particles, then technically we've including the "self-force" # But such a pos will have delta_r = 0, and thus contribute nothing to the total force, as it should! if doimages: # Our simulation assumes periodic boundary conditions, so for the acceleration of each particle, there's a # corresponding acceleration due to the image of the particle produced by applying periodic shifts to its # position. 
shift = np.array([-1, 0, 1]) images = [] for triple in itertools.product(shift, repeat=3): images.append(triple) images.remove((0, 0, 0)) images = np.array(images) for image in images: delta_r_displaced = delta_r + image result += mass * np.sum(delta_r_displaced / (delta_r_displaced**2. + epsilon**2.)**(3./2.), axis=0) return result ###Output _____no_output_____ ###Markdown In a remarkable experiment in 1941, Erik Holmberg used the fact that the brightness of light decays with distance at the same ($1/r^2$) rate as gravity. To calculate the total force on a 'particle' in his 'simulation', Holmberg placed a lightbulb at the position of each particle and calculated the effective force on a given particle by measuring the total brightness at each point! The figure below illustrates this idea.Try running the following cell a few times! You'll get a different random layout of "lightbulbs" each time. ###Code fig, ax = plt.subplots(1, 1, figsize=(5,5), dpi=150) xmin, xmax, ymin, ymax = (0., 1., 0., 1.) Ngrid = 100 xx, yy = np.meshgrid(np.linspace(xmin, xmax, Ngrid), np.linspace(ymin, ymax, Ngrid)) epsilon = 0.1 weights = np.zeros_like(xx) npt = 10 pos, vel = init_dof(npt=npt) for par in pos: weights += 1. / ((xx - par[0])**2 + (yy - par[1])**2 + epsilon**2.) ax.imshow(weights, extent=(xmin, xmax, ymin, ymax), cmap=plt.cm.afmhot, alpha=1., origin='lower') ax.scatter(pos[:,0], pos[:,1], color='k', edgecolor='w') ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) ax.set_title(f"Holmberg's Lightbulb Experiment with $N={npt}$ Bulbs") ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.tight_layout() ###Output _____no_output_____ ###Markdown This work was the original concept of gravitational 'N-body' simulations that are described here. It's almost criminal that only 118 authors have referenced this groundbreaking idea! Today, given the mini supercomputers we often have at our fingertips, we can determine the final distribution of dark matter more accurately with computers than light bulbs. By evolving an initial homogeneous distribution (a nearly uniform distribution of dark matter clumps, as the universe produces in the Big Bang), we can accurately predict the locations of galaxies (the places where the biggest dark matter clumps form).To do this, we just need to calculate the acceleration on each particle at a series of time steps and update the velocity and position accordingly according to the acceleration that particle experiences. You'll be familiar with this as the sensation you feel as a car turns a corner, or speeds up. ###Code # We'll sample the equations of motion in discrete time steps. dt = 5e-4 nsteps = 500 timesteps = np.linspace(0, (nsteps)*dt, nsteps, endpoint=False) # Number and mass of particles npt = 2 mass = 0.25 # Whether to draw arrows for the acceleration and velocity draw_acc = True draw_vel = False # A small drag term to simulate the real drag dark matter particles experience due to the expanding universe drag = 1e-2 ###Output _____no_output_____ ###Markdown Now we simply have to run the simulation! ###Code fig, ax = plt.subplots(1,1, figsize=(5,5), dpi=150) ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) # Reinitialise particles. 
pos, vel = init_dof(npt=npt) # A helper function to make a nice-looking legend for our arrows # from https://stackoverflow.com/a/22349717 def make_legend_arrow(legend, orig_handle, xdescent, ydescent, width, height, fontsize): p = matplotlib.patches.FancyArrow(0, 0.5*height, width, 0, length_includes_head=True, head_width=0.75*height) return p for index_in_timestep, time in enumerate(timesteps): ax.clear() ax.set_title(f'N-body simulation with $N={npt}$ particles') step_label = ax.text(0.03, .97, f'Step {index_in_timestep}', transform=ax.transAxes, verticalalignment='top', c='k', bbox=dict(color='w', alpha=0.8)) dvel = np.zeros_like(vel) dpos = np.zeros_like(pos) acc = np.zeros_like(pos) for index_in_particle in range(npt): acc[index_in_particle] = g_at_pos(pos[index_in_particle], pos, mass, epsilon=0.1) # Update velocities. dvel[index_in_particle] = dt * acc[index_in_particle] # Update positions. dpos[index_in_particle] = dt * vel[index_in_particle] vel += dvel - drag*vel pos += dpos # Our simulation has periodic boundaries, if you go off one side you come back on the other! pos = pos % 1. ax.scatter(pos[:,0], pos[:,1], color='darkorange', edgecolor='w') # Draw arrows representing the velocity and acceleration vectors, if requested # The code here is a little verbose to get nice-looking arrows in the legend arrows = [] if draw_vel: ax.quiver(pos[:,0], pos[:,1], vel[:,0], vel[:,1], color='w', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Velocity', color='w')] if draw_acc: ax.quiver(pos[:,0], pos[:,1], acc[:,0], acc[:,1], color='darkorange', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Accel', color='darkorange')] if draw_vel or draw_acc: ax.legend(handles=arrows, handler_map={matplotlib.patches.FancyArrow:matplotlib.legend_handler.HandlerPatch(patch_func=make_legend_arrow)}, facecolor='k', edgecolor='white', framealpha=0.8, loc='lower right') ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.canvas.draw() # Reinitialise particles. pos, vel = init_dof(npt=npt) # A helper function to make a nice-looking legend for our arrows # from https://stackoverflow.com/a/22349717 def make_legend_arrow(legend, orig_handle, xdescent, ydescent, width, height, fontsize): p = matplotlib.patches.FancyArrow(0, 0.5*height, width, 0, length_includes_head=True, head_width=0.75*height) return p for index_in_timestep, time in enumerate(timesteps): clear_output(wait=True) fig, ax = plt.subplots(1,1, figsize=(5,5), dpi=150) ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) ax.clear() ax.set_title(f'N-body simulation with $N={npt}$ particles') step_label = ax.text(0.03, .97, f'Step {index_in_timestep}', transform=ax.transAxes, verticalalignment='top', c='k', bbox=dict(color='w', alpha=0.8)) dvel = np.zeros_like(vel) dpos = np.zeros_like(pos) acc = np.zeros_like(pos) for index_in_particle in range(npt): acc[index_in_particle] = g_at_pos(pos[index_in_particle], pos, mass, epsilon=0.1,doimages=False) # Update velocities. dvel[index_in_particle] = dt * acc[index_in_particle] # Update positions. dpos[index_in_particle] = dt * vel[index_in_particle] vel += dvel - drag*vel pos += dpos # Our simulation has periodic boundaries, if you go off one side you come back on the other! pos = pos % 1. 
ax.scatter(pos[:,0], pos[:,1], color='darkorange', edgecolor='w') # Draw arrows representing the velocity and acceleration vectors, if requested # The code here is a little verbose to get nice-looking arrows in the legend arrows = [] if draw_vel: ax.quiver(pos[:,0], pos[:,1], vel[:,0], vel[:,1], color='w', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Velocity', color='w')] if draw_acc: ax.quiver(pos[:,0], pos[:,1], acc[:,0], acc[:,1], color='darkorange', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Accel', color='darkorange')] if draw_vel or draw_acc: ax.legend(handles=arrows, handler_map={matplotlib.patches.FancyArrow:matplotlib.legend_handler.HandlerPatch(patch_func=make_legend_arrow)}, facecolor='k', edgecolor='white', framealpha=0.8, loc='lower right') #if index_in_timestep%10 == 1: ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) #fig.canvas.draw() plt.show(fig) sleep(0.001) #temp_points.remove() ###Output _____no_output_____ ###Markdown Try playing around with the settings! More than 100 particles won't run very smoothly, however.With the default settings, you'll find that the particles tend fall into one or two clumps before too long. This is due to the drag that we put in. The drag simulates the effect that the expanding universe has on real dark matter particles, which is to slow them down and cause them to group together. These clumps are known as *halos*, and form "galactic nurseries" where gas can gather to form new stars and galaxies. Now, when DESI scientists run huge simulations, such as those run on Summit, a total of ~48 _trillion_ particles are solved for. Don't try this here! But the results are really quite extraordinary (skip to 6 mins 45 seconds if you're impatient to see the result!): ###Code YouTubeVideo('LQMLFryA_7k', width=800, height=400) ###Output _____no_output_____ ###Markdown DESI and the fastest supercomputer in the West Understanding _how_ the 30 million galaxies surveyed by DESI actually formed in the Universe is hard, really hard. So hard in fact that DESI scientists exploit [Summit](https://www.olcf.ornl.gov/summit/), the world's fastest supercomputer[1](Footnotes) at Oak Ridge National Lab to calculate how the distribution of galaxies should look depending on the type of Dark Energy: Costing a cool 325 million dollars to build, Summit is capable of calculating addition and multiplication operations $1.486 \times 10^{17}$ times a second, equivalent to $1.486 \times 10^{11}$ MegaFlops or MFLOPS. For comparison, let's see what Binder provides (you'll need some patience, maybe leave this to later): ###Code _ = flops() ###Output FLOPS Python Program (Double Precision), V2.0 18 Dec 1992 Module Error RunTime MFLOPS (usec) 1 1.3358e-12 0.2143 65.3323 2 1.9984e-13 0.1033 67.7620 3 -2.4480e-14 0.2486 68.3844 ###Markdown So Summit is at least a billion times more powerful! With Summit, we can resolve the finest details of the distribution of _dark matter_ that all galaxies trace: Here the brightest regions signify the densest regions of dark matter in the Universe, in which we expect to find more galaxies (for some zoom-ins, [click here](https://lgarrison.github.io/halos/)). 
The video below shows that we have observed this predicted structure in the distribution of real galaxies observed with experiments prior to DESI: ###Code YouTubeVideo('08LBltePDZw', width=800, height=400) ###Output _____no_output_____ ###Markdown [Dark matter](https://en.wikipedia.org/wiki/Dark_matter:~:text=Dark%20matter%20is%20a%20form,%E2%88%9227%20kg%2Fm3.) is a pervasive element in our Universe, making up 25% of the total (energy) density. With Dark Energy and the common atom ("baryonic matter") making up the remainder. We know next to nothing about Dark Matter, beyond its gravitational attraction of other matter and light in the Universe. Fortunately, the equations that describe the evolution of dark matter, rather than the [complex formation of galaxies](https://www.space.com/15680-galaxies.html), are relatively simple for the Universe in which we seem to live. All that is required is to track the gravitational attraction of dark matter particles (on an expanding stage). We can predict the evolution of dark matter by sampling the gravitational force, velocity and position with a set of (fictitious) particles that each represent a 'clump' of dark matter with some total mass. Of course, this means we cannot solve for the distribution of dark matter within these clump sized regions, but just the distribution amongst clumps that leads to the structure you can see above. With Summit, the smallest clump we can resolve is not far from the combined mass of all the stars in the [Milky Way](https://www.nasa.gov/feature/goddard/2019/what-does-the-milky-way-weigh-hubble-and-gaia-investigate): To start, we'll initially postition a set of clumps at random positions within a 3D cube and give them zero initial velocities. Velocities will be generated at subsequent times as the ($1/r^2$) gravitational attraction of a particle to all others causes a net acceleration. ###Code def init_dof(npt=1): # Create a set of particles at random positions in a box, which will soon predict the distribution of dark matter # as we see above. xs = np.random.uniform(0., 1., npt) ys = np.random.uniform(0., 1., npt) zs = np.random.uniform(0., 1., npt) pos = np.vstack((xs,ys,zs)).T vel = np.zeros_like(pos) return pos, vel ###Output _____no_output_____ ###Markdown The gravitational force experienced by each dark matter particle is [Newton's](https://en.wikipedia.org/wiki/Isaac_Newton) $F = \frac{GmM}{r^2} \hat r$ that you may be familiar with. We just need to do a thorough job on the book keeping required for to calculate the total force experienced by one particle due to all others: ###Code def g_at_pos(pos, particles, mass, epsilon=1.0, doimages=True): # eqn. (10) of http://www.skiesanduniverses.org/resources/KlypinNbody.pdf. # Here epsilon is a fudge factor to stop a blow up of the gravitational force at zero distance. delta_r = particles - pos result = mass * np.sum(delta_r / (delta_r**2. + epsilon**2.)**(3./2.), axis=0) # If 'pos' is one of the particles, then technically we've including the "self-force" # But such a pos will have delta_r = 0, and thus contribute nothing to the total force, as it should! if doimages: # Our simulation assumes periodic boundary conditions, so for the acceleration of each particle, there's a # corresponding acceleration due to the image of the particle produced by applying periodic shifts to its # position. 
shift = np.array([-1, 0, 1]) images = [] for triple in itertools.product(shift, repeat=3): images.append(triple) images.remove((0, 0, 0)) images = np.array(images) for image in images: delta_r_displaced = delta_r + image result += mass * np.sum(delta_r_displaced / (delta_r_displaced**2. + epsilon**2.)**(3./2.), axis=0) return result ###Output _____no_output_____ ###Markdown In a remarkable experiment in 1941, Erik Holmberg used the fact that the brightness of light decays with distance at the same ($1/r^2$) rate as gravity. To calculate the total force on a 'particle' in his 'simulation', Holmberg placed a lightbulb at the position of each particle and calculated the effective force on a given particle by measuring the total brightness at each point! The figure below illustrates this idea.Try running the following cell a few times! You'll get a different random layout of "lightbulbs" each time. ###Code fig, ax = plt.subplots(1, 1, figsize=(5,5), dpi=150) xmin, xmax, ymin, ymax = (0., 1., 0., 1.) Ngrid = 100 xx, yy = np.meshgrid(np.linspace(xmin, xmax, Ngrid), np.linspace(ymin, ymax, Ngrid)) epsilon = 0.1 weights = np.zeros_like(xx) npt = 10 pos, vel = init_dof(npt=npt) for par in pos: weights += 1. / ((xx - par[0])**2 + (yy - par[1])**2 + epsilon**2.) ax.imshow(weights, extent=(xmin, xmax, ymin, ymax), cmap=plt.cm.afmhot, alpha=1., origin='lower') ax.scatter(pos[:,0], pos[:,1], color='k', edgecolor='w') ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) ax.set_title(f"Holmberg's Lightbulb Experiment with $N={npt}$ Bulbs") ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.tight_layout() ###Output _____no_output_____ ###Markdown This work was the original concept of gravitational 'N-body' simulations that are described here. It's almost criminal that only 118 authors have referenced this groundbreaking idea! Today, given the mini supercomputers we often have at our fingertips, we can determine the final distribution of dark matter more accurately with computers than light bulbs. By evolving an initial homogeneous distribution (a nearly uniform distribution of dark matter clumps, as the universe produces in the Big Bang), we can accurately predict the locations of galaxies (the places where the biggest dark matter clumps form).To do this, we just need to calculate the acceleration on each particle at a series of time steps and update the velocity and position accordingly according to the acceleration that particle experiences. You'll be familiar with this as the sensation you feel as a car turns a corner, or speeds up. ###Code # We'll sample the equations of motion in discrete time steps. dt = 5e-4 nsteps = 500 timesteps = np.linspace(0, (nsteps)*dt, nsteps, endpoint=False) # Number and mass of particles npt = 50 mass = 0.25 # Whether to draw arrows for the acceleration and velocity draw_acc = False draw_vel = False # A small drag term to simulate the real drag dark matter particles experience due to the expanding universe drag = 1e-2 ###Output _____no_output_____ ###Markdown Now we simply have to run the simulation! ###Code fig, ax = plt.subplots(1,1, figsize=(5,5), dpi=150) ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) # Reinitialise particles. 
pos, vel = init_dof(npt=npt) # A helper function to make a nice-looking legend for our arrows # from https://stackoverflow.com/a/22349717 def make_legend_arrow(legend, orig_handle, xdescent, ydescent, width, height, fontsize): p = matplotlib.patches.FancyArrow(0, 0.5*height, width, 0, length_includes_head=True, head_width=0.75*height) return p for index_in_timestep, time in enumerate(timesteps): ax.clear() ax.set_title(f'N-body simulation with $N={npt}$ particles') step_label = ax.text(0.03, .97, f'Step {index_in_timestep}', transform=ax.transAxes, verticalalignment='top', c='k', bbox=dict(color='w', alpha=0.8)) dvel = np.zeros_like(vel) dpos = np.zeros_like(pos) acc = np.zeros_like(pos) for index_in_particle in range(npt): acc[index_in_particle] = g_at_pos(pos[index_in_particle], pos, mass, epsilon=0.1) # Update velocities. dvel[index_in_particle] = dt * acc[index_in_particle] # Update positions. dpos[index_in_particle] = dt * vel[index_in_particle] vel += dvel - drag*vel pos += dpos # Our simulation has periodic boundaries, if you go off one side you come back on the other! pos = pos % 1. ax.scatter(pos[:,0], pos[:,1], color='darkorange', edgecolor='w') # Draw arrows representing the velocity and acceleration vectors, if requested # The code here is a little verbose to get nice-looking arrows in the legend arrows = [] if draw_vel: ax.quiver(pos[:,0], pos[:,1], vel[:,0], vel[:,1], color='w', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Velocity', color='w')] if draw_acc: ax.quiver(pos[:,0], pos[:,1], acc[:,0], acc[:,1], color='darkorange', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Accel', color='darkorange')] if draw_vel or draw_acc: ax.legend(handles=arrows, handler_map={matplotlib.patches.FancyArrow:matplotlib.legend_handler.HandlerPatch(patch_func=make_legend_arrow)}, facecolor='k', edgecolor='white', framealpha=0.8, loc='lower right') ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.canvas.draw() ###Output _____no_output_____ ###Markdown Try playing around with the settings! More than 100 particles won't run very smoothly, however.With the default settings, you'll find that the particles tend fall into one or two clumps before too long. This is due to the drag that we put in. The drag simulates the effect that the expanding universe has on real dark matter particles, which is to slow them down and cause them to group together. These clumps are known as *halos*, and form "galactic nurseries" where gas can gather to form new stars and galaxies. Now, when DESI scientists run huge simulations, such as those run on Summit, a total of ~48 _trillion_ particles are solved for. Don't try this here! But the results are really quite extraordinary (skip to 6 mins 45 seconds if you're impatient to see the result!): ###Code YouTubeVideo('LQMLFryA_7k', width=800, height=400) ###Output _____no_output_____ ###Markdown DESI and the fastest supercomputer in the West Understanding _how_ the 30 million galaxies surveyed by DESI actually formed in the Universe is hard, really hard. So hard in fact that DESI scientists exploit [Summit](https://www.olcf.ornl.gov/summit/), the world's fastest supercomputer[1](Footnotes) at Oak Ridge National Lab to calculate how the distribution of galaxies should look depending on the type of Dark Energy: Costing a cool 325 million dollars to build, Summit is capable of calculating addition and multiplication operations $1.486 \times 10^{17}$ times a second, equivalent to $1.486 \times 10^{11}$ MegaFlops or MFLOPS. 
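(Added aside, not part of the original notebook.) As a quick back-of-the-envelope check of the comparison that follows: taking Summit's quoted $1.486 \times 10^{11}$ MFLOPS and the roughly 68 MFLOPS that the flops() benchmark reports in other runs of this cell elsewhere in this document, the ratio is about $2 \times 10^9$ — consistent with the "billion times more powerful" claim below.
summit_mflops = 1.486e11   # Summit, as quoted above
binder_mflops = 68.        # roughly what flops() reports on Binder in other runs in this document
print(summit_mflops / binder_mflops)   # ~2.2e9, i.e. a couple of billion times faster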
For comparison, let's see what Binder provides (you'll need some patience, maybe leave this to later): ###Code _ = flops() ###Output FLOPS Python Program (Double Precision), V2.0 18 Dec 1992 Module Error RunTime MFLOPS (usec) ###Markdown So Summit is at least a billion times more powerful! With Summit, we can resolve the finest details of the distribution of _dark matter_ that all galaxies trace: Here the brightest regions signify the densest regions of dark matter in the Universe, in which we expect to find more galaxies (for some zoom-ins, [click here](https://lgarrison.github.io/halos/)). The video below shows that we have observed this predicted structure in the distribution of real galaxies observed with experiments prior to DESI: ###Code YouTubeVideo('08LBltePDZw', width=800, height=400) ###Output _____no_output_____ ###Markdown [Dark matter](https://en.wikipedia.org/wiki/Dark_matter:~:text=Dark%20matter%20is%20a%20form,%E2%88%9227%20kg%2Fm3.) is a pervasive element in our Universe, making up 25% of the total (energy) density. With Dark Energy and the common atom ("baryonic matter") making up the remainder. We know next to nothing about Dark Matter, beyond its gravitational attraction of other matter and light in the Universe. Fortunately, the equations that describe the evolution of dark matter, rather than the [complex formation of galaxies](https://www.space.com/15680-galaxies.html), are relatively simple for the Universe in which we seem to live. All that is required is to track the gravitational attraction of dark matter particles (on an expanding stage). We can predict the evolution of dark matter by sampling the gravitational force, velocity and position with a set of (fictitious) particles that each represent a 'clump' of dark matter with some total mass. Of course, this means we cannot solve for the distribution of dark matter within these clump sized regions, but just the distribution amongst clumps that leads to the structure you can see above. With Summit, the smallest clump we can resolve is not far from the combined mass of all the stars in the [Milky Way](https://www.nasa.gov/feature/goddard/2019/what-does-the-milky-way-weigh-hubble-and-gaia-investigate): To start, we'll initially postition a set of clumps at random positions within a 3D cube and give them zero initial velocities. Velocities will be generated at subsequent times as the ($1/r^2$) gravitational attraction of a particle to all others causes a net acceleration. ###Code def init_dof(npt=1): # Create a set of particles at random positions in a box, which will soon predict the distribution of dark matter # as we see above. xs = np.random.uniform(0., 1., npt) ys = np.random.uniform(0., 1., npt) zs = np.random.uniform(0., 1., npt) pos = np.vstack((xs,ys,zs)).T vel = np.zeros_like(pos) return pos, vel ###Output _____no_output_____ ###Markdown The gravitational force experienced by each dark matter particle is [Newton's](https://en.wikipedia.org/wiki/Isaac_Newton) $F = \frac{GmM}{r^2} \hat r$ that you may be familiar with. We just need to do a thorough job on the book keeping required for to calculate the total force experienced by one particle due to all others: ###Code def g_at_pos(pos, particles, mass, epsilon=1.0, doimages=True): # eqn. (10) of http://www.skiesanduniverses.org/resources/KlypinNbody.pdf. # Here epsilon is a fudge factor to stop a blow up of the gravitational force at zero distance. delta_r = particles - pos result = mass * np.sum(delta_r / (delta_r**2. 
+ epsilon**2.)**(3./2.), axis=0) # If 'pos' is one of the particles, then technically we've including the "self-force" # But such a pos will have delta_r = 0, and thus contribute nothing to the total force, as it should! if doimages: # Our simulation assumes periodic boundary conditions, so for the acceleration of each particle, there's a # corresponding acceleration due to the image of the particle produced by applying periodic shifts to its # position. shift = np.array([-1, 0, 1]) images = [] for triple in itertools.product(shift, repeat=3): images.append(triple) images.remove((0, 0, 0)) images = np.array(images) for image in images: delta_r_displaced = delta_r + image result += mass * np.sum(delta_r_displaced / (delta_r_displaced**2. + epsilon**2.)**(3./2.), axis=0) return result ###Output _____no_output_____ ###Markdown In a remarkable experiment in 1941, Erik Holmberg used the fact that the brightness of light decays with distance at the same ($1/r^2$) rate as gravity. To calculate the total force on a 'particle' in his 'simulation', Holmberg placed a lightbulb at the position of each particle and calculated the effective force on a given particle by measuring the total brightness at each point! The figure below illustrates this idea.Try running the following cell a few times! You'll get a different random layout of "lightbulbs" each time. ###Code fig, ax = plt.subplots(1, 1, figsize=(5,5), dpi=150) xmin, xmax, ymin, ymax = (0., 1., 0., 1.) Ngrid = 100 xx, yy = np.meshgrid(np.linspace(xmin, xmax, Ngrid), np.linspace(ymin, ymax, Ngrid)) epsilon = 0.1 weights = np.zeros_like(xx) npt = 10 pos, vel = init_dof(npt=npt) for par in pos: weights += 1. / ((xx - par[0])**2 + (yy - par[1])**2 + epsilon**2.) ax.imshow(weights, extent=(xmin, xmax, ymin, ymax), cmap=plt.cm.afmhot, alpha=1., origin='lower') ax.scatter(pos[:,0], pos[:,1], color='k', edgecolor='w') ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) ax.set_title(f"Holmberg's Lightbulb Experiment with $N={npt}$ Bulbs") ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.tight_layout() ###Output _____no_output_____ ###Markdown This work was the original concept of gravitational 'N-body' simulations that are described here. It's almost criminal that only 118 authors have referenced this groundbreaking idea! Today, given the mini supercomputers we often have at our fingertips, we can determine the final distribution of dark matter more accurately with computers than light bulbs. By evolving an initial homogeneous distribution (a nearly uniform distribution of dark matter clumps, as the universe produces in the Big Bang), we can accurately predict the locations of galaxies (the places where the biggest dark matter clumps form).To do this, we just need to calculate the acceleration on each particle at a series of time steps and update the velocity and position accordingly according to the acceleration that particle experiences. You'll be familiar with this as the sensation you feel as a car turns a corner, or speeds up. ###Code # We'll sample the equations of motion in discrete time steps. dt = 5e-4 nsteps = 500 timesteps = np.linspace(0, (nsteps)*dt, nsteps, endpoint=False) # Number and mass of particles npt = 50 mass = 0.25 # Whether to draw arrows for the acceleration and velocity draw_acc = False draw_vel = False # A small drag term to simulate the real drag dark matter particles experience due to the expanding universe drag = 1e-2 ###Output _____no_output_____ ###Markdown Now we simply have to run the simulation! 
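(Added note, reading directly off the loop in the next cell.) With time step $\Delta t$ (`dt`), per-particle acceleration $\vec{a}_n$ from `g_at_pos`, and the small `drag` coefficient defined above, each step applies $\vec{v}_{n+1} = (1 - \mathrm{drag})\,\vec{v}_n + \vec{a}_n\,\Delta t$ and $\vec{x}_{n+1} = (\vec{x}_n + \vec{v}_n\,\Delta t) \bmod 1$ — a simple explicit (Euler-style) integration, with positions wrapped back into the unit box to enforce the periodic boundaries.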
###Code fig, ax = plt.subplots(1,1, figsize=(5,5), dpi=150) ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) # Reinitialise particles. pos, vel = init_dof(npt=npt) # A helper function to make a nice-looking legend for our arrows # from https://stackoverflow.com/a/22349717 def make_legend_arrow(legend, orig_handle, xdescent, ydescent, width, height, fontsize): p = matplotlib.patches.FancyArrow(0, 0.5*height, width, 0, length_includes_head=True, head_width=0.75*height) return p for index_in_timestep, time in enumerate(timesteps): ax.clear() ax.set_title(f'N-body simulation with $N={npt}$ particles') step_label = ax.text(0.03, .97, f'Step {index_in_timestep}', transform=ax.transAxes, verticalalignment='top', c='k', bbox=dict(color='w', alpha=0.8)) dvel = np.zeros_like(vel) dpos = np.zeros_like(pos) acc = np.zeros_like(pos) for index_in_particle in range(npt): acc[index_in_particle] = g_at_pos(pos[index_in_particle], pos, mass, epsilon=0.1) # Update velocities. dvel[index_in_particle] = dt * acc[index_in_particle] # Update positions. dpos[index_in_particle] = dt * vel[index_in_particle] vel += dvel - drag*vel pos += dpos # Our simulation has periodic boundaries, if you go off one side you come back on the other! pos = pos % 1. ax.scatter(pos[:,0], pos[:,1], color='darkorange', edgecolor='w') # Draw arrows representing the velocity and acceleration vectors, if requested # The code here is a little verbose to get nice-looking arrows in the legend arrows = [] if draw_vel: ax.quiver(pos[:,0], pos[:,1], vel[:,0], vel[:,1], color='w', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Velocity', color='w')] if draw_acc: ax.quiver(pos[:,0], pos[:,1], acc[:,0], acc[:,1], color='darkorange', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Accel', color='darkorange')] if draw_vel or draw_acc: ax.legend(handles=arrows, handler_map={matplotlib.patches.FancyArrow:matplotlib.legend_handler.HandlerPatch(patch_func=make_legend_arrow)}, facecolor='k', edgecolor='white', framealpha=0.8, loc='lower right') ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.canvas.draw() ###Output _____no_output_____ ###Markdown Try playing around with the settings! More than 100 particles won't run very smoothly, however.With the default settings, you'll find that the particles tend fall into one or two clumps before too long. This is due to the drag that we put in. The drag simulates the effect that the expanding universe has on real dark matter particles, which is to slow them down and cause them to group together. These clumps are known as *halos*, and form "galactic nurseries" where gas can gather to form new stars and galaxies. Now, when DESI scientists run huge simulations, such as those run on Summit, a total of ~48 _trillion_ particles are solved for. Don't try this here! 
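(Added aside, to give a sense of scale; the ~20 floating-point operations per pair below is a rough guess, not a number from the notebook.) A brute-force pairwise sum like `g_at_pos` touches every pair of particles, so its cost grows as $N^2$. For $N \sim 48$ trillion that is hopeless even on Summit, which is why production N-body codes typically use approximate schemes such as tree or particle-mesh methods rather than the direct sum used in this notebook:
n_particles = 48e12                      # the figure quoted above
pairs_per_step = n_particles**2          # a direct pairwise sum visits every pair
flops_per_step = 20. * pairs_per_step    # assuming very roughly 20 floating-point ops per pair
summit_flops = 1.486e17                  # Summit's quoted speed, in FLOPS
print(flops_per_step / summit_flops / 3.15e7, "years per time step")   # ~1e4 years!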
But the results are really quite extraordinary (skip to 6 mins 45 seconds if you're impatient to see the result!): ###Code YouTubeVideo('LQMLFryA_7k', width=800, height=400) ###Output _____no_output_____ ###Markdown ###Code !pip install notebook-video-writer tensor-canvas import jax.numpy as jnp from jax import jit from jax import vmap import jax from numpy import random import matplotlib.pyplot as plt from tqdm import tqdm import tensorcanvas as tc from notebook_video_writer import VideoWriter #@title VideoWriter #VideoWriter from Alexander Mordvintsev #https://colab.research.google.com/github/znah/notebooks/blob/master/external_colab_snippets.ipynb import os import numpy as np os.environ['FFMPEG_BINARY'] = 'ffmpeg' import moviepy.editor as mvp from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter class VideoWriter: def __init__(self, filename='_autoplay.mp4', fps=30.0, **kw): self.writer = None self.params = dict(filename=filename, fps=fps, **kw) def add(self, img): img = np.asarray(img) if self.writer is None: h, w = img.shape[:2] self.writer = FFMPEG_VideoWriter(size=(w, h), **self.params) if img.dtype in [np.float32, np.float64]: img = np.uint8(img.clip(0, 1)*255) if len(img.shape) == 2: img = np.repeat(img[..., None], 3, -1) self.writer.write_frame(img) def close(self): if self.writer: self.writer.close() def __enter__(self): return self def __exit__(self, *kw): self.close() if self.params['filename'] == '_autoplay.mp4': self.show() def show(self, **kw): self.close() fn = self.params['filename'] display(mvp.ipython_display(fn, **kw)) def draw_sim(parts_pos, parts_vel, grid_r, opacity=1.0, p_size=4.0): canvas = jnp.zeros((grid_r, grid_r, 3)) col = opacity*jnp.array([1.0,0.0,0.0]) # would be interesting to use jax.experimental.loops for these for part_p, part_v in zip(parts_pos, parts_vel): canvas = tc.draw_circle(part_p[0]*grid_r, part_p[1]*grid_r, p_size, col, canvas) return canvas def draw_sim_par(parts_pos, parts_vel, grid_r, opacity=1.0, p_size=4.0): col = opacity*jnp.array([1.0,0.0,0.0]) draw_single = lambda part_p, canv: tc.draw_circle(part_p[0]*grid_r, part_p[1]*grid_r, p_size, col, canv) draw_all = vmap(draw_single) return draw_all(parts_pos, jnp.zeros((parts_pos.shape[0], grid_r, grid_r, 3))).sum(0) def compute_forces(pos, scale, eps=0.1): a, b = jnp.expand_dims(pos, 1), jnp.expand_dims(pos, 0) diff = a - b dist = (diff * diff).sum(axis=-1) ** 0.5 dist = jnp.expand_dims(dist, 2) force = diff / ((dist * scale) ** 3 + eps) return force.sum(0) fast_compute_forces = jit(compute_forces) def sim_update_force(parts_pos, parts_vel, t_delta=0.05, scale=5, repel_mag=0.1, center_mag=2.5, steps=10, damp=0.99): p_p = jnp.array(parts_pos) p_v = jnp.array(parts_vel) # jax.experimental.loops for _ in range(steps): p_p = p_p + t_delta * p_v force = fast_compute_forces(p_p, scale) center_diff = p_p-0.5 centering_force = center_diff / ((center_diff ** 2).sum() ** 0.5) p_v = damp * p_v - t_delta * (force * repel_mag + centering_force * center_mag) return p_p, p_v def make_init_state(p_count): return random.rand(p_count, 2), random.rand(p_count, 2)-0.5 fast_draw = jit(draw_sim, static_argnums=(2,)) fast_draw_par = jit(draw_sim_par, static_argnums=(2,)) fast_sim_update_force = jit(sim_update_force, static_argnames=('steps')) p_state, v_state = make_init_state(128) v_state *= 0 grid_res = 384 for i in tqdm(range(1000)): p_state, v_state = fast_sim_update_force(p_state, v_state, t_delta=0.05, scale=10, center_mag=0.5, repel_mag=0.05, damp=0.996, steps=2) plt.imshow(fast_draw_par(p_state, 
v_state, grid_res, p_size=4.0)) p_state, v_state = make_init_state(2048) v_state *= 0 grid_res = 512 for i in tqdm(range(100)): p_state, v_state = fast_sim_update_force(p_state, v_state, t_delta=0.05, scale=40, center_mag=0.5, repel_mag=0.05, damp=0.997, steps=20) plt.imshow(fast_draw_par(p_state, v_state, grid_res, p_size=3.0)) render_video = False if render_video: p_state, v_state = make_init_state(128) v_state *= 0 grid_res = 384 with VideoWriter(fps=60) as vw: for i in tqdm(range(1000)): render = fast_draw_par(p_state, v_state, grid_res, p_size=3.0) vw.add(render) p_state, v_state = fast_sim_update_force(p_state, v_state, t_delta=0.05, scale=10, center_mag=0.5, repel_mag=0.05, damp=0.996, steps=2) if render_video: p_state, v_state = make_init_state(512) v_state *= 0 grid_res = 256 with VideoWriter(fps=60) as vw: for i in tqdm(range(1000)): render = fast_draw_par(p_state, v_state, grid_res, opacity=0.5, p_size=3.0) vw.add(render) p_state, v_state = fast_sim_update_force(p_state, v_state, t_delta=0.05, scale=20, center_mag=0.5, repel_mag=0.05, damp=0.998, steps=4) !nvidia-smi p_test = 50 res_test = 512 %%timeit draw_sim(*make_init_state(p_test), res_test) %%timeit draw_sim_par(*make_init_state(p_test), res_test) %%timeit fast_draw(*make_init_state(p_test), res_test) %%timeit fast_draw_par(*make_init_state(p_test), res_test) ###Output 79.6 ms ± 1.66 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ###Markdown DESI and the fastest supercomputer in the West Understanding _how_ the 30 million galaxies surveyed by DESI actually formed in the Universe is hard, really hard. So hard in fact that DESI scientists exploit [Summit](https://www.olcf.ornl.gov/summit/), the world's fastest supercomputer[1](Footnotes) at Oak Ridge National Lab to calculate how the distribution of galaxies should look depending on the type of Dark Energy: Costing a cool 325 million dollars to build, Summit is capable of calculating addition and multiplication operations $1.486 \times 10^{17}$ times a second, equivalent to $1.486 \times 10^{11}$ MegaFlops or MFLOPS. For comparison, let's see what Binder provides (you'll need some patience, maybe leave this to later): ###Code _ = flops() ###Output FLOPS Python Program (Double Precision), V2.0 18 Dec 1992 Module Error RunTime MFLOPS (usec) 1 1.3358e-12 0.2143 65.3323 2 1.9984e-13 0.1033 67.7620 3 -2.4480e-14 0.2486 68.3844 ###Markdown So Summit is at least a billion times more powerful! With Summit, we can resolve the finest details of the distribution of _dark matter_ that all galaxies trace: Here the brightest regions signify the densest regions of dark matter in the Universe, in which we expect to find more galaxies (for some zoom-ins, [click here](https://lgarrison.github.io/halos/)). The video below shows that we have observed this predicted structure in the distribution of real galaxies observed with experiments prior to DESI: ###Code YouTubeVideo('08LBltePDZw', width=800, height=400) ###Output _____no_output_____ ###Markdown [Dark matter](https://en.wikipedia.org/wiki/Dark_matter:~:text=Dark%20matter%20is%20a%20form,%E2%88%9227%20kg%2Fm3.) is a pervasive element in our Universe, making up 25% of the total (energy) density. With Dark Energy and the common atom ("baryonic matter") making up the remainder. We know next to nothing about Dark Matter, beyond its gravitational attraction of other matter and light in the Universe. 
Fortunately, the equations that describe the evolution of dark matter, rather than the [complex formation of galaxies](https://www.space.com/15680-galaxies.html), are relatively simple for the Universe in which we seem to live. All that is required is to track the gravitational attraction of dark matter particles (on an expanding stage). We can predict the evolution of dark matter by sampling the gravitational force, velocity and position with a set of (fictitious) particles that each represent a 'clump' of dark matter with some total mass. Of course, this means we cannot solve for the distribution of dark matter within these clump sized regions, but just the distribution amongst clumps that leads to the structure you can see above. With Summit, the smallest clump we can resolve is not far from the combined mass of all the stars in the [Milky Way](https://www.nasa.gov/feature/goddard/2019/what-does-the-milky-way-weigh-hubble-and-gaia-investigate): To start, we'll initially postition a set of clumps at random positions within a 3D cube and give them zero initial velocities. Velocities will be generated at subsequent times as the ($1/r^2$) gravitational attraction of a particle to all others causes a net acceleration. ###Code def init_dof(npt=1): # Create a set of particles at random positions in a box, which will soon predict the distribution of dark matter # as we see above. xs = np.random.uniform(0., 1., npt) ys = np.random.uniform(0., 1., npt) zs = np.random.uniform(0., 1., npt) pos = np.vstack((xs,ys,zs)).T vel = np.zeros_like(pos) return pos, vel ###Output _____no_output_____ ###Markdown The gravitational force experienced by each dark matter particle is [Newton's](https://en.wikipedia.org/wiki/Isaac_Newton) $F = \frac{GmM}{r^2} \hat r$ that you may be familiar with. We just need to do a thorough job on the book keeping required for to calculate the total force experienced by one particle due to all others: ###Code def g_at_pos(pos, particles, mass, epsilon=1.0, doimages=True): # eqn. (10) of http://www.skiesanduniverses.org/resources/KlypinNbody.pdf. # Here epsilon is a fudge factor to stop a blow up of the gravitational force at zero distance. delta_r = particles - pos result = mass * np.sum(delta_r / (delta_r**2. + epsilon**2.)**(3./2.), axis=0) # If 'pos' is one of the particles, then technically we've including the "self-force" # But such a pos will have delta_r = 0, and thus contribute nothing to the total force, as it should! if doimages: # Our simulation assumes periodic boundary conditions, so for the acceleration of each particle, there's a # corresponding acceleration due to the image of the particle produced by applying periodic shifts to its # position. shift = np.array([-1, 0, 1]) images = [] for triple in itertools.product(shift, repeat=3): images.append(triple) images.remove((0, 0, 0)) images = np.array(images) for image in images: delta_r_displaced = delta_r + image result += mass * np.sum(delta_r_displaced / (delta_r_displaced**2. + epsilon**2.)**(3./2.), axis=0) return result ###Output _____no_output_____ ###Markdown In a remarkable experiment in 1941, Erik Holmberg used the fact that the brightness of light decays with distance at the same ($1/r^2$) rate as gravity. To calculate the total force on a 'particle' in his 'simulation', Holmberg placed a lightbulb at the position of each particle and calculated the effective force on a given particle by measuring the total brightness at each point! 
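(Added note.) Concretely, the background image in the next cell is built as $B(x, y) = \sum_i \frac{1}{(x - x_i)^2 + (y - y_i)^2 + \epsilon^2}$, summed over the bulb positions — the same kind of softened $1/r^2$ falloff that `g_at_pos` uses for gravity, which is exactly what makes the measured brightness a usable stand-in for the strength of the gravitational pull.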
The figure below illustrates this idea.Try running the following cell a few times! You'll get a different random layout of "lightbulbs" each time. ###Code fig, ax = plt.subplots(1, 1, figsize=(5,5), dpi=150) xmin, xmax, ymin, ymax = (0., 1., 0., 1.) Ngrid = 100 xx, yy = np.meshgrid(np.linspace(xmin, xmax, Ngrid), np.linspace(ymin, ymax, Ngrid)) epsilon = 0.1 weights = np.zeros_like(xx) npt = 10 pos, vel = init_dof(npt=npt) for par in pos: weights += 1. / ((xx - par[0])**2 + (yy - par[1])**2 + epsilon**2.) ax.imshow(weights, extent=(xmin, xmax, ymin, ymax), cmap=plt.cm.afmhot, alpha=1., origin='lower') ax.scatter(pos[:,0], pos[:,1], color='k', edgecolor='w') ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) ax.set_title(f"Holmberg's Lightbulb Experiment with $N={npt}$ Bulbs") ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.tight_layout() ###Output _____no_output_____ ###Markdown This work was the original concept of gravitational 'N-body' simulations that are described here. It's almost criminal that only 118 authors have referenced this groundbreaking idea! Today, given the mini supercomputers we often have at our fingertips, we can determine the final distribution of dark matter more accurately with computers than light bulbs. By evolving an initial homogeneous distribution (a nearly uniform distribution of dark matter clumps, as the universe produces in the Big Bang), we can accurately predict the locations of galaxies (the places where the biggest dark matter clumps form).To do this, we just need to calculate the acceleration on each particle at a series of time steps and update the velocity and position accordingly according to the acceleration that particle experiences. You'll be familiar with this as the sensation you feel as a car turns a corner, or speeds up. ###Code # We'll sample the equations of motion in discrete time steps. dt = 5e-4 nsteps = 500 timesteps = np.linspace(0, (nsteps)*dt, nsteps, endpoint=False) # Number and mass of particles npt = 50 mass = 0.25 # Whether to draw arrows for the acceleration and velocity draw_acc = False draw_vel = False # A small drag term to simulate the real drag dark matter particles experience due to the expanding universe drag = 1e-2 ###Output _____no_output_____ ###Markdown Now we simply have to run the simulation! ###Code fig, ax = plt.subplots(1,1, figsize=(5,5), dpi=150) ax.tick_params(labelbottom=False, labelleft=False, left=False, bottom=False) # Reinitialise particles. pos, vel = init_dof(npt=npt) # A helper function to make a nice-looking legend for our arrows # from https://stackoverflow.com/a/22349717 def make_legend_arrow(legend, orig_handle, xdescent, ydescent, width, height, fontsize): p = matplotlib.patches.FancyArrow(0, 0.5*height, width, 0, length_includes_head=True, head_width=0.75*height) return p for index_in_timestep, time in enumerate(timesteps): ax.clear() ax.set_title(f'N-body simulation with $N={npt}$ particles') step_label = ax.text(0.03, .97, f'Step {index_in_timestep}', transform=ax.transAxes, verticalalignment='top', c='k', bbox=dict(color='w', alpha=0.8)) dvel = np.zeros_like(vel) dpos = np.zeros_like(pos) acc = np.zeros_like(pos) for index_in_particle in range(npt): acc[index_in_particle] = g_at_pos(pos[index_in_particle], pos, mass, epsilon=0.1) # Update velocities. dvel[index_in_particle] = dt * acc[index_in_particle] # Update positions. 
dpos[index_in_particle] = dt * vel[index_in_particle] vel += dvel - drag*vel pos += dpos # Our simulation has periodic boundaries, if you go off one side you come back on the other! pos = pos % 1. ax.scatter(pos[:,0], pos[:,1], color='darkorange', edgecolor='w') # Draw arrows representing the velocity and acceleration vectors, if requested # The code here is a little verbose to get nice-looking arrows in the legend arrows = [] if draw_vel: ax.quiver(pos[:,0], pos[:,1], vel[:,0], vel[:,1], color='w', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Velocity', color='w')] if draw_acc: ax.quiver(pos[:,0], pos[:,1], acc[:,0], acc[:,1], color='darkorange', zorder=0) arrows += [matplotlib.patches.FancyArrow(0,0, 0.5, 0.6, label='Accel', color='darkorange')] if draw_vel or draw_acc: ax.legend(handles=arrows, handler_map={matplotlib.patches.FancyArrow:matplotlib.legend_handler.HandlerPatch(patch_func=make_legend_arrow)}, facecolor='k', edgecolor='white', framealpha=0.8, loc='lower right') ax.set_xlim(0., 1.) ax.set_ylim(0., 1.) fig.canvas.draw() ###Output _____no_output_____ ###Markdown Try playing around with the settings! More than 100 particles won't run very smoothly, however.With the default settings, you'll find that the particles tend fall into one or two clumps before too long. This is due to the drag that we put in. The drag simulates the effect that the expanding universe has on real dark matter particles, which is to slow them down and cause them to group together. These clumps are known as *halos*, and form "galactic nurseries" where gas can gather to form new stars and galaxies. Now, when DESI scientists run huge simulations, such as those run on Summit, a total of ~48 _trillion_ particles are solved for. Don't try this here! But the results are really quite extraordinary (skip to 6 mins 45 seconds if you're impatient to see the result!): ###Code YouTubeVideo('LQMLFryA_7k', width=800, height=400) ###Output _____no_output_____
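One closing aside (added, not part of the original notebook): the inner Python loop that calls `g_at_pos` once per particle is the main bottleneck as `npt` grows. The whole pairwise sum can be evaluated in one shot with NumPy broadcasting. The sketch below assumes `pos` is the `(npt, 3)` array from `init_dof`, deliberately mirrors `g_at_pos`'s convention of applying the softening to each component of `delta_r` separately (rather than to the squared distance), and ignores the periodic images, so it is only a starting point:
def g_all(pos, mass, epsilon=0.1):
    # delta[i, j, :] = pos[j] - pos[i]; shape (npt, npt, 3)
    delta = pos[None, :, :] - pos[:, None, :]
    # Elementwise softening, matching g_at_pos; the j == i terms vanish because delta is zero there.
    return mass * (delta / (delta**2 + epsilon**2)**1.5).sum(axis=1)
A call like `acc = g_all(pos, mass, epsilon=0.1)` then replaces the per-particle loop; note the `(npt, npt, 3)` intermediate array, so memory grows quadratically with `npt`.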
ML_projects/5. Traffic Sign Classification with Deep Learning/deeplearning_traffic_sign_classifier_notebook.ipynb
###Markdown TASK 1: UNDERSTAND THE PROBLEM STATEMENT - Our goal is to build a multiclassifier model based on deep learning to classify various traffic signs. - Dataset that we are using to train the model is **German Traffic Sign Recognition Benchmark**.- Dataset consists of 43 classes: - ( 0, b'Speed limit (20km/h)') ( 1, b'Speed limit (30km/h)') ( 2, b'Speed limit (50km/h)') ( 3, b'Speed limit (60km/h)') ( 4, b'Speed limit (70km/h)') - ( 5, b'Speed limit (80km/h)') ( 6, b'End of speed limit (80km/h)') ( 7, b'Speed limit (100km/h)') ( 8, b'Speed limit (120km/h)') ( 9, b'No passing') - (10, b'No passing for vehicles over 3.5 metric tons') (11, b'Right-of-way at the next intersection') (12, b'Priority road') (13, b'Yield') (14, b'Stop') - (15, b'No vehicles') (16, b'Vehicles over 3.5 metric tons prohibited') (17, b'No entry')- (18, b'General caution') (19, b'Dangerous curve to the left')- (20, b'Dangerous curve to the right') (21, b'Double curve')- (22, b'Bumpy road') (23, b'Slippery road')- (24, b'Road narrows on the right') (25, b'Road work')- (26, b'Traffic signals') (27, b'Pedestrians') (28, b'Children crossing')- (29, b'Bicycles crossing') (30, b'Beware of ice/snow')- (31, b'Wild animals crossing')- (32, b'End of all speed and passing limits') (33, b'Turn right ahead')- (34, b'Turn left ahead') (35, b'Ahead only') (36, b'Go straight or right')- (37, b'Go straight or left') (38, b'Keep right') (39, b'Keep left')- (40, b'Roundabout mandatory') (41, b'End of no passing')- (42, b'End of no passing by vehicles over 3.5 metric tons')- **Data Source** - https://www.kaggle.com/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign TASK 2: GET THE DATA AND VISUALIZE IT ###Code import pickle with open("train.p", mode='rb') as training_data: train = pickle.load(training_data) with open("valid.p", mode='rb') as validation_data: valid = pickle.load(validation_data) with open("test.p", mode='rb') as testing_data: test = pickle.load(testing_data) X_train, y_train = train['features'], train['labels'] X_validation, y_validation = valid['features'], valid['labels'] X_test, y_test = test['features'], test['labels'] X_test.shape import numpy as np import matplotlib.pyplot as plt i = np.random.randint(1, len(X_test)) plt.imshow(X_test[i]) print('label = ', y_test[i]) ###Output label = 18 ###Markdown MINI CHALLENGE- Complete the code below to print out 5 by 5 grid showing random traffic sign images along with their corresponding labels as their titles ###Code # Let's view more images in a grid format # Define the dimensions of the plot grid W_grid = 5 L_grid = 5 # fig, axes = plt.subplots(L_grid, W_grid) # subplot return the figure object and axes object # we can use the axes object to plot specific figures at various locations fig, axes = plt.subplots(L_grid, W_grid, figsize = (10,10)) axes = axes.ravel() # flaten the 15 x 15 matrix into 225 array n_training = len(X_test) # get the length of the training dataset # Select a random number from 0 to n_training for i in np.arange(0, W_grid * L_grid): # create evenly spaces variables index = np.random.randint(0,n_training) axes[i].imshow(X_test[index]) axes[i].set_title(y_test[index],fontsize = 15) axes[i].axis('off') plt.subplots_adjust(hspace = 0.4) ###Output _____no_output_____ ###Markdown TASK 3: IMPORT SAGEMAKER/BOTO3, CREATE A SESSION, DEFINE S3 AND ROLE ###Code # Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python # Boto3 allows Python developer to write software that makes use of services like Amazon S3 and Amazon EC2 import 
sagemaker import boto3 # Let's create a Sagemaker session sagemaker_session = sagemaker.Session() # Let's define the S3 bucket and prefix that we want to use in this session bucket = 'sagemaker-practical' # bucket named 'sagemaker-practical' was created beforehand prefix = 'traffic-sign-classifier' # prefix is the subfolder within the bucket. # Let's get the execution role for the notebook instance. # This is the IAM role that you created when you created your notebook instance. You pass the role to the training job. # Note that AWS Identity and Access Management (IAM) role that Amazon SageMaker can assume to perform tasks on your behalf (for example, reading training results, called model artifacts, from the S3 bucket and writing training results to Amazon S3). role = sagemaker.get_execution_role() print(role) ###Output arn:aws:iam::880968264155:role/service-role/AmazonSageMaker-ExecutionRole-20210726T142831 ###Markdown TASK 4: UPLOAD THE DATA TO S3 ###Code # Create directory to store the training and validation data import os os.makedirs("./data", exist_ok = True) # Save several arrays into a single file in uncompressed .npz format # Read more here: https://numpy.org/devdocs/reference/generated/numpy.savez.html np.savez('./data/training', image = X_train, label = y_train) np.savez('./data/validation', image = X_test, label = y_test) # Upload the training and validation data to S3 bucket prefix = 'traffic-sign' training_input_path = sagemaker_session.upload_data('data/training.npz', key_prefix = prefix + '/training') validation_input_path = sagemaker_session.upload_data('data/validation.npz', key_prefix = prefix + '/validation') print(training_input_path) print(validation_input_path) ###Output s3://sagemaker-us-east-2-880968264155/traffic-sign/training/training.npz s3://sagemaker-us-east-2-880968264155/traffic-sign/validation/validation.npz ###Markdown TASK 5: TRAIN THE CNN LENET MODEL USING SAGEMAKER The model consists of the following layers: - STEP 1: THE FIRST CONVOLUTIONAL LAYER 1 - Input = 32x32x3 - Output = 28x28x6 - Output = (Input-filter+1)/Stride* => (32-5+1)/1=28 - Used a 5x5 Filter with input depth of 3 and output depth of 6 - Apply a RELU Activation function to the output - pooling for input, Input = 28x28x6 and Output = 14x14x6 * Stride is the amount by which the kernel is shifted when the kernel is passed over the image.- STEP 2: THE SECOND CONVOLUTIONAL LAYER 2 - Input = 14x14x6 - Output = 10x10x16 - Layer 2: Convolutional layer with Output = 10x10x16 - Output = (Input-filter+1)/strides => 10 = 14-5+1/1 - Apply a RELU Activation function to the output - Pooling with Input = 10x10x16 and Output = 5x5x16- STEP 3: FLATTENING THE NETWORK - Flatten the network with Input = 5x5x16 and Output = 400- STEP 4: FULLY CONNECTED LAYER - Layer 3: Fully Connected layer with Input = 400 and Output = 120 - Apply a RELU Activation function to the output- STEP 5: ANOTHER FULLY CONNECTED LAYER - Layer 4: Fully Connected Layer with Input = 120 and Output = 84 - Apply a RELU Activation function to the output- STEP 6: FULLY CONNECTED LAYER - Layer 5: Fully Connected layer with Input = 84 and Output = 43 ###Code !pygmentize train-cnn.py from sagemaker.tensorflow import TensorFlow # To Train a TensorFlow model, we will use TensorFlow estimator from the Sagemaker SDK # entry_point: a script that will run in a container. This script will include model description and training. # role: a role that's obtained The role assigned to the running notebook. 
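# Note (added): the train_* argument names listed below appear to follow the SageMaker Python SDK v1.x that this notebook targets;
# in SDK v2 the same settings are passed as instance_count and instance_type instead.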
# train_instance_count: number of container instances used to train the model. # train_instance_type: instance type! # framwork_version: version of Tensorflow # py_version: Python version. # script_mode: allows for running script in the container. # hyperparameters: indicate the hyperparameters for the training job such as epochs and learning rate tf_estimator = TensorFlow(entry_point='train-cnn.py', role=role, train_instance_count=1, train_instance_type='ml.c4.2xlarge', framework_version='1.12', py_version='py3', script_mode=True, hyperparameters={ 'epochs': 2 , 'batch-size': 32, 'learning-rate': 0.001} ) tf_estimator.fit({'training': training_input_path, 'validation': validation_input_path}) ###Output 2021-08-05 11:25:05 Starting - Starting the training job... 2021-08-05 11:25:28 Starting - Launching requested ML instancesProfilerReport-1628162705: InProgress ... 2021-08-05 11:25:54 Starting - Preparing the instances for training......... 2021-08-05 11:27:34 Downloading - Downloading input data 2021-08-05 11:27:34 Training - Training image download completed. Training in progress..2021-08-05 11:27:37,800 sagemaker-containers INFO Imported framework sagemaker_tensorflow_container.training 2021-08-05 11:27:37,805 sagemaker-containers INFO No GPUs detected (normal if no gpus installed) 2021-08-05 11:27:38,120 sagemaker-containers INFO No GPUs detected (normal if no gpus installed) 2021-08-05 11:27:38,136 sagemaker-containers INFO No GPUs detected (normal if no gpus installed) 2021-08-05 11:27:38,148 sagemaker-containers INFO Invoking user script  Training Env:  { "additional_framework_parameters": {}, "channel_input_dirs": { "training": "/opt/ml/input/data/training", "validation": "/opt/ml/input/data/validation" }, "current_host": "algo-1", "framework_module": "sagemaker_tensorflow_container.training:main", "hosts": [ "algo-1" ], "hyperparameters": { "batch-size": 32, "learning-rate": 0.001, "model_dir": "s3://sagemaker-us-east-2-880968264155/sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288/model", "epochs": 2 }, "input_config_dir": "/opt/ml/input/config", "input_data_config": { "training": { "TrainingInputMode": "File", "S3DistributionType": "FullyReplicated", "RecordWrapperType": "None" }, "validation": { "TrainingInputMode": "File", "S3DistributionType": "FullyReplicated", "RecordWrapperType": "None" } }, "input_dir": "/opt/ml/input", "is_master": true, "job_name": "sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288", "log_level": 20, "master_hostname": "algo-1", "model_dir": "/opt/ml/model", "module_dir": "s3://sagemaker-us-east-2-880968264155/sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288/source/sourcedir.tar.gz", "module_name": "train-cnn", "network_interface_name": "eth0", "num_cpus": 8, "num_gpus": 0, "output_data_dir": "/opt/ml/output/data", "output_dir": "/opt/ml/output", "output_intermediate_dir": "/opt/ml/output/intermediate", "resource_config": { "current_host": "algo-1", "hosts": [ "algo-1" ], "network_interface_name": "eth0" }, "user_entry_point": "train-cnn.py" }  Environment variables:  SM_HOSTS=["algo-1"] SM_NETWORK_INTERFACE_NAME=eth0 SM_HPS={"batch-size":32,"epochs":2,"learning-rate":0.001,"model_dir":"s3://sagemaker-us-east-2-880968264155/sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288/model"} SM_USER_ENTRY_POINT=train-cnn.py SM_FRAMEWORK_PARAMS={} SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"} 
SM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"},"validation":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}} SM_OUTPUT_DATA_DIR=/opt/ml/output/data SM_CHANNELS=["training","validation"] SM_CURRENT_HOST=algo-1 SM_MODULE_NAME=train-cnn SM_LOG_LEVEL=20 SM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main SM_INPUT_DIR=/opt/ml/input SM_INPUT_CONFIG_DIR=/opt/ml/input/config SM_OUTPUT_DIR=/opt/ml/output SM_NUM_CPUS=8 SM_NUM_GPUS=0 SM_MODEL_DIR=/opt/ml/model SM_MODULE_DIR=s3://sagemaker-us-east-2-880968264155/sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288/source/sourcedir.tar.gz SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training","validation":"/opt/ml/input/data/validation"},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"batch-size":32,"epochs":2,"learning-rate":0.001,"model_dir":"s3://sagemaker-us-east-2-880968264155/sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"},"validation":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-2-880968264155/sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288/source/sourcedir.tar.gz","module_name":"train-cnn","network_interface_name":"eth0","num_cpus":8,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"train-cnn.py"} SM_USER_ARGS=["--batch-size","32","--epochs","2","--learning-rate","0.001","--model_dir","s3://sagemaker-us-east-2-880968264155/sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288/model"] SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate SM_CHANNEL_TRAINING=/opt/ml/input/data/training SM_CHANNEL_VALIDATION=/opt/ml/input/data/validation SM_HP_BATCH-SIZE=32 SM_HP_LEARNING-RATE=0.001 SM_HP_MODEL_DIR=s3://sagemaker-us-east-2-880968264155/sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288/model SM_HP_EPOCHS=2 PYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/lib/python36.zip:/usr/lib/python3.6:/usr/lib/python3.6/lib-dynload:/usr/local/lib/python3.6/dist-packages:/usr/lib/python3/dist-packages  Invoking script with the following command:  /usr/bin/python train-cnn.py --batch-size 32 --epochs 2 --learning-rate 0.001 --model_dir s3://sagemaker-us-east-2-880968264155/sagemaker-tensorflow-scriptmode-2021-08-05-11-25-05-288/model  _________________________________________________________________ Layer (type) Output Shape Param #  ================================================================= conv2d (Conv2D) (None, 28, 28, 6) 456  _________________________________________________________________ average_pooling2d (AveragePo (None, 14, 14, 6) 0  _________________________________________________________________ conv2d_1 (Conv2D) (None, 10, 10, 16) 2416  
_________________________________________________________________ average_pooling2d_1 (Average (None, 5, 5, 16) 0  _________________________________________________________________ flatten (Flatten) (None, 400) 0  _________________________________________________________________ dense (Dense) (None, 120) 48120  _________________________________________________________________ dense_1 (Dense) (None, 84) 10164  _________________________________________________________________ dense_2 (Dense) (None, 43) 3655  ================================================================= Total params: 64,811 Trainable params: 64,811 Non-trainable params: 0 _________________________________________________________________ None Train on 34799 samples, validate on 12630 samples Epoch 1/2  - 13s - loss: 1.2610 - acc: 0.6516 - val_loss: 0.8519 - val_acc: 0.7852 Epoch 2/2  - 14s - loss: 0.3550 - acc: 0.9017 - val_loss: 0.7190 - val_acc: 0.8298 Validation loss : 0.7189541301176082 Validation accuracy: 0.8298495645666538 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/simple_save.py:85: calling SavedModelBuilder.add_meta_graph_and_variables (from tensorflow.python.saved_model.builder_impl) with legacy_init_op is deprecated and will be removed in a future version. Instructions for updating: Pass your op to the equivalent parameter main_op instead. 2021-08-05 11:28:09,710 sagemaker-containers INFO Reporting training SUCCESS 2021-08-05 11:28:29 Uploading - Uploading generated training model 2021-08-05 11:28:29 Completed - Training job completed Training seconds: 64 Billable seconds: 64 ###Markdown TASK 7: DEPLOY THE MODEL WITHOUT ACCELERATORS ###Code # Deploying the model import time tf_endpoint_name = 'trafficsignclassifier-' + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime()) tf_predictor = tf_estimator.deploy(initial_instance_count = 1, instance_type = 'ml.t2.medium', endpoint_name = tf_endpoint_name) # Making predictions from the end point %matplotlib inline import random import matplotlib.pyplot as plt #Pre-processing the images num_samples = 5 indices = random.sample(range(X_test.shape[0] - 1), num_samples) images = X_test[indices]/255 labels = y_test[indices] for i in range(num_samples): plt.subplot(1,num_samples,i+1) plt.imshow(images[i]) plt.title(labels[i]) plt.axis('off') # Making predictions prediction = tf_predictor.predict(images.reshape(num_samples, 32, 32, 3))['predictions'] prediction = np.array(prediction) predicted_label = prediction.argmax(axis=1) print('Predicted labels are: {}'.format(predicted_label)) # Deleting the end-point tf_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown MINI CHALLENGE (TAKE HOME) - Try to improve the model accuracy by experimenting with Dropout, adding more convolutional layers, and changing the size of the filters EXCELLENT JOB MINI CHALLENGE SOLUTIONS ###Code # Select a random number index = np.random.randint(0, n_training) # read and display an image with the selected index axes[i].imshow( X_test[index]) axes[i].set_title(y_test[index], fontsize = 15) axes[i].axis('off') plt.subplots_adjust(hspace=0.4) ###Output _____no_output_____
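###Markdown
A possible starting point for the mini challenge: the sketch below deepens the LeNet-style network shown in the training log above by adding a second convolutional block and Dropout. It is only an illustration, not the original `train-cnn.py` script; the filter counts, dense layer size and the 0.25/0.5 dropout rates are assumptions to experiment with.
###Code
# Hypothetical deeper variant for the mini challenge (not the model used in train-cnn.py)
import tensorflow as tf

def build_deeper_cnn(num_classes=43, input_shape=(32, 32, 3)):
    # Two convolutional blocks instead of one, each followed by pooling and Dropout
    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

build_deeper_cnn().summary()
###Output
_____no_output_____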
_notebooks/2020-08-02-sign-language-multiclass.ipynb
###Markdown "Redes convolucionales imágenes de lenguaje de signos"> (SPANISH) Resolucion de la competicion Kaggle de lenguaje de signos - toc: true - badges: true - comments: true- categories: ["Computer Vision"]- image: images/kaggle.png IntroducciónSe resolverá la competición de Kaggle sobre imágenes de lenguaje de signos (clasificación multiclase), disponible en [este enlace](https://www.kaggle.com/datamunge/sign-language-mnist). El dataset de lenguajes de signos surgió como una evolución de MNIST y Fashion MNIST, pero siguiendo la misma filosofía y formato: píxeles 28x28 en blanco y negro, con 26 etiquetas (0-25) correspondiente a letras A-Z (no hay casos para 9=J 25=Z por ser movimientos). El dataset de training contiene 27,455 muestras y el de pruebas 7,172. ###Code import csv import numpy as np import tensorflow as tf from tensorflow.keras.preprocessing.image import ImageDataGenerator from os import getcwd # You will need to write code that will read the file passed # into this function. The first line contains the column headers # so you should ignore it # Each successive line contians 785 comma separated values between 0 and 255 # The first value is the label # The rest are the pixel values for that picture # The function will return 2 np.array types. One with all the labels # One with all the images # # Tips: # If you read a full line (as 'row') then row[0] has the label # and row[1:785] has the 784 pixel values # Take a look at np.array_split to turn the 784 pixels into 28x28 # You are reading in strings, but need the values to be floats # Check out np.array().astype for a conversion def get_data(filename): # np.array of shape (data length, labels & pixels) my_arr = np.loadtxt(filename, delimiter=',', skiprows=1) # get label & image arrays labels = my_arr[:,0].astype('int') images = my_arr[:,1:] # reshape image from 784 to (28, 28) images = images.astype('float').reshape(images.shape[0], 28, 28) # just in case to avoid memory problem my_arr = None return images, labels path_sign_mnist_train = f"{getcwd()}/../tmp2/sign_mnist_train.csv" path_sign_mnist_test = f"{getcwd()}/../tmp2/sign_mnist_test.csv" training_images, training_labels = get_data(path_sign_mnist_train) testing_images, testing_labels = get_data(path_sign_mnist_test) # Keep these print(training_images.shape) print(training_labels.shape) print(testing_images.shape) print(testing_labels.shape) # Their output should be: # (27455, 28, 28) # (27455,) # (7172, 28, 28) # (7172,) # In this section you will have to add another dimension to the data # So, for example, if your array is (10000, 28, 28) # You will need to make it (10000, 28, 28, 1) # Hint: np.expand_dims training_images = np.expand_dims(training_images, axis=-1) testing_images = np.expand_dims(testing_images, axis=-1) # Create an ImageDataGenerator and do Image Augmentation train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') validation_datagen = ImageDataGenerator( rescale=1./255) # Keep These print(training_images.shape) print(testing_images.shape) # Their output should be: # (27455, 28, 28, 1) # (7172, 28, 28, 1) # Define the model # Use no more than 2 Conv2D and 2 MaxPooling2D model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(128, (3,3), activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Flatten 
the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), tf.keras.layers.Dense(25, activation='softmax') ]) # Compile Model. model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy']) # Train the Model from tensorflow.keras.utils import to_categorical training_labels1 = to_categorical(training_labels) testing_labels1 = to_categorical(testing_labels) train_generator = train_datagen.flow(training_images, training_labels1, batch_size=500) validation_generator = validation_datagen.flow(testing_images, testing_labels1, batch_size=500) history = model.fit_generator(train_generator, validation_data=validation_generator, steps_per_epoch=100, epochs=3, validation_steps=30, verbose=2) model.evaluate(testing_images, testing_labels1, verbose=0) # Plot the chart for accuracy and loss on both training and validation %matplotlib inline import matplotlib.pyplot as plt acc = history.history['accuracy' ] val_acc = history.history['val_accuracy' ] loss = history.history['loss' ] val_loss = history.history['val_loss' ] epochs = range(len(acc)) plt.plot(epochs, acc, 'r', label='Training accuracy') plt.plot(epochs, val_acc, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.legend() plt.figure() plt.plot(epochs, loss, 'r', label='Training Loss') plt.plot(epochs, val_loss, 'b', label='Validation Loss') plt.title('Training and validation loss') plt.legend() plt.show() ###Output _____no_output_____
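###Markdown
To read the network's predictions back as letters, the class index can be mapped to the alphabet described in the introduction (labels 0-25 correspond to A-Z, with 9=J and 25=Z never occurring). A minimal sketch, assuming the `model`, `testing_images` and `testing_labels` defined above:
###Code
import string
import numpy as np

# Class index 0-25 maps directly to a letter; 9 (J) and 25 (Z) never appear in this dataset
index_to_letter = dict(enumerate(string.ascii_uppercase))

# Predict a handful of test images, rescaled the same way as the training generator
sample = testing_images[:8] / 255.0
pred_indices = np.argmax(model.predict(sample), axis=1)

for pred, true in zip(pred_indices, testing_labels[:8]):
    print(f"predicted: {index_to_letter[pred]} / actual: {index_to_letter[true]}")
###Output
_____no_output_____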
notebooks/test-case-for-CSIC.ipynb
###Markdown Jupyter Settings ###Code import warnings warnings.simplefilter(action='ignore', category=FutureWarning) %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown Read data ###Code import pandas as pd DATASET = '../data/CSIC/csic-for-extractor.csv' df = pd.read_csv(DATASET, sep=',', dtype={'text':str, 'type':str}, low_memory=False) df.loc[df.type != '99999', 'type'] = 'malicious' df.loc[df.type == '99999', 'type'] = 'normal' df.rename(columns={"type": "target"},inplace=True) df[df.target == 'malicious'].shape df[df.target == 'normal'].shape df.head(10) import sys sys.path.append("..") ###Output _____no_output_____ ###Markdown Train model ###Code import lime from tpe_model import text_preprocess from tpe_model import text_model_generator df, label_map = text_preprocess(df) print(label_map) tmg = text_model_generator(df) model = tmg.model_trainer() ###Output _____no_output_____ ###Markdown Explain single sample ###Code from tpe_core import get_instance_explained from lime.lime_text import LimeTextExplainer # Warning The pickle module is not secure. Only unpickle data you trust.Reference: https://docs.python.org/3/library/pickle.html # import pickle # with open("model.test", 'rb') as f: # model = pickle.load(f, encoding='bytes') labels = list(label_map.values()) get_instance_explained(df, 30633, model, label_map, 'malicious') get_instance_explained(df, 0, model, label_map, 'normal') ###Output _____no_output_____ ###Markdown Generate signature rules in batch and verification ###Code from tpe_rule_validation import rule_matching_evaluation match_result, rules_tobe_validate, matched_rules = rule_matching_evaluation(df , seed_num=2000 , rein_num=2000 , eval_num=1000 , model=model , label_map=label_map , refer_label='malicious' , lime_flag = True , scan_flag=True , content_direction='backward' , xcol_name='text' , n_cores=20) # show a case print('A match case...') rule_index = 1 rule_num = matched_rules.iloc[[rule_index]].index[0] print('rule_num is %d' % rule_num) print(matched_rules.loc[[rule_num]]['rule_strings']) print('----------------------------------------------------------------------') pd.options.display.max_colwidth = 1000 print(match_result.loc[match_result.rule_num == rule_num]['text']) print('Total matched number %d' % match_result.loc[match_result.rule_num == rule_num].shape[0]) matched_rules.shape[0] ###Output _____no_output_____ ###Markdown Backups Generate lime rules in batch ###Code df_malicious = df[df['target'] == label_map['malicious']].sample(1000, random_state=1) df_malicious from tpe_core import get_rules rules_seed = get_rules(df_malicious, model, label_map, 'malicious', scan_flag=False) rules_seed ###Output _____no_output_____
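###Markdown
For reference, `get_instance_explained` above builds on lime's `LimeTextExplainer`; a bare-bones call looks roughly like the sketch below. The `classifier_fn` wrapper and the class-name assumption are illustrative placeholders, not part of the tpe_* modules.
###Code
from lime.lime_text import LimeTextExplainer

# LIME expects a function mapping a list of raw texts to an array of class probabilities
# of shape (n_texts, n_classes). Whether the trained `model` accepts raw strings and
# exposes predict_proba is an assumption made for this sketch.
def classifier_fn(texts):
    return model.predict_proba(texts)

# Assumes label_map keys are the class names printed above (e.g. 'malicious', 'normal')
explainer = LimeTextExplainer(class_names=list(label_map.keys()))

sample_text = df.iloc[0]["text"]  # any request string from the dataset
explanation = explainer.explain_instance(sample_text, classifier_fn, num_features=10)
print(explanation.as_list())  # tokens weighted by their contribution to the prediction
###Output
_____no_output_____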
demo/quantization/quantization_end_to_end.ipynb
###Markdown Nvidia GPU INT-8 quantization on any transformers model (encoder based) For some context and explanations, please check our documentation here: [https://els-rd.github.io/transformer-deploy/quantization/quantization_intro/](https://els-rd.github.io/transformer-deploy/quantization/quantization_intro/). Project setup Dependencies installation Your machine should have Nvidia CUDA 11.X, TensorRT 8.2.1 and cuBLAS installed. It's said to be tricky to install, in my experience, just follow Nvidia download page instructions **and nothing else**, it should work out of the box. Nvidia Docker image could be a good choice too. ###Code #! pip3 install git+ssh://[email protected]/ELS-RD/transformer-deploy #! pip3 install datasets sklearn #! pip3 install git+ssh://[email protected]/NVIDIA/TensorRT#egg=pytorch-quantization\&subdirectory=tools/pytorch-quantization/ ###Output Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com Collecting datasets Downloading datasets-1.18.4-py3-none-any.whl (312 kB)  |████████████████████████████████| 312 kB 12.2 MB/s eta 0:00:01 [?25hCollecting sklearn Downloading sklearn-0.0.tar.gz (1.1 kB) Collecting dill Downloading dill-0.3.4-py2.py3-none-any.whl (86 kB)  |████████████████████████████████| 86 kB 70.0 MB/s eta 0:00:01 [?25hRequirement already satisfied: requests>=2.19.0 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from datasets) (2.27.1) Requirement already satisfied: numpy>=1.17 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from datasets) (1.21.5) Requirement already satisfied: packaging in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from datasets) (21.3) Collecting pyarrow!=4.0.0,>=3.0.0 Downloading pyarrow-7.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (26.7 MB)  |████████████████████████████████| 26.7 MB 20.6 MB/s eta 0:00:01 [?25hCollecting xxhash Downloading xxhash-3.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (211 kB)  |████████████████████████████████| 211 kB 59.3 MB/s eta 0:00:01 [?25hCollecting fsspec[http]>=2021.05.0 Downloading fsspec-2022.2.0-py3-none-any.whl (134 kB)  |████████████████████████████████| 134 kB 73.4 MB/s eta 0:00:01 [?25hCollecting responses<0.19 Downloading responses-0.18.0-py3-none-any.whl (38 kB) Collecting pandas Downloading pandas-1.4.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)  |████████████████████████████████| 11.7 MB 90.8 MB/s eta 0:00:01 [?25hRequirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from datasets) (0.4.0) Collecting aiohttp Downloading aiohttp-3.8.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.2 MB)  |████████████████████████████████| 1.2 MB 100.2 MB/s eta 0:00:01 [?25hCollecting multiprocess Downloading multiprocess-0.70.12.2-py39-none-any.whl (128 kB)  |████████████████████████████████| 128 kB 78.2 MB/s eta 0:00:01 [?25hRequirement already satisfied: tqdm>=4.62.1 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from datasets) (4.62.3) Requirement already satisfied: pyyaml in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (6.0) Requirement already satisfied: typing-extensions>=3.7.4.3 in 
/home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (4.0.1) Requirement already satisfied: filelock in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (3.4.2) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from packaging->datasets) (3.0.7) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from requests>=2.19.0->datasets) (1.26.8) Requirement already satisfied: certifi>=2017.4.17 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from requests>=2.19.0->datasets) (2021.10.8) Requirement already satisfied: idna<4,>=2.5 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from requests>=2.19.0->datasets) (3.3) Requirement already satisfied: charset-normalizer~=2.0.0 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from requests>=2.19.0->datasets) (2.0.11) Requirement already satisfied: scikit-learn in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from sklearn) (1.0.2) Requirement already satisfied: attrs>=17.3.0 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from aiohttp->datasets) (21.4.0) Collecting multidict<7.0,>=4.5 Downloading multidict-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (114 kB)  |████████████████████████████████| 114 kB 96.7 MB/s eta 0:00:01 [?25hCollecting aiosignal>=1.1.2 Downloading aiosignal-1.2.0-py3-none-any.whl (8.2 kB) Collecting frozenlist>=1.1.1 Downloading frozenlist-1.3.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (156 kB)  |████████████████████████████████| 156 kB 86.6 MB/s eta 0:00:01 [?25hCollecting yarl<2.0,>=1.0 Downloading yarl-1.7.2-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (304 kB)  |████████████████████████████████| 304 kB 95.1 MB/s eta 0:00:01 [?25hCollecting async-timeout<5.0,>=4.0.0a3 Downloading async_timeout-4.0.2-py3-none-any.whl (5.8 kB) Requirement already satisfied: python-dateutil>=2.8.1 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from pandas->datasets) (2.8.2) Collecting pytz>=2020.1 Downloading pytz-2021.3-py2.py3-none-any.whl (503 kB)  |████████████████████████████████| 503 kB 68.0 MB/s eta 0:00:01 [?25hRequirement already satisfied: six>=1.5 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from python-dateutil>=2.8.1->pandas->datasets) (1.16.0) Requirement already satisfied: joblib>=0.11 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from scikit-learn->sklearn) (1.1.0) Requirement already satisfied: scipy>=1.1.0 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from scikit-learn->sklearn) (1.8.0) Requirement already satisfied: threadpoolctl>=2.0.0 in /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages (from scikit-learn->sklearn) (3.1.0) Building wheels for collected packages: sklearn Building wheel for sklearn (setup.py) ... 
[?25ldone [?25h Created wheel for sklearn: filename=sklearn-0.0-py2.py3-none-any.whl size=1309 sha256=3ac0638e4032964c7ba9d014645ce13935d501fa40fff5bf77431ac4cffa5e14 Stored in directory: /tmp/pip-ephem-wheel-cache-m57vxdol/wheels/e4/7b/98/b6466d71b8d738a0c547008b9eb39bf8676d1ff6ca4b22af1c Successfully built sklearn Installing collected packages: multidict, frozenlist, yarl, async-timeout, aiosignal, pytz, fsspec, dill, aiohttp, xxhash, responses, pyarrow, pandas, multiprocess, sklearn, datasets Successfully installed aiohttp-3.8.1 aiosignal-1.2.0 async-timeout-4.0.2 datasets-1.18.4 dill-0.3.4 frozenlist-1.3.0 fsspec-2022.2.0 multidict-6.0.2 multiprocess-0.70.12.2 pandas-1.4.1 pyarrow-7.0.0 pytz-2021.3 responses-0.18.0 sklearn-0.0 xxhash-3.0.0 yarl-1.7.2 WARNING: You are using pip version 21.1.2; however, version 22.0.4 is available. You should consider upgrading via the '/home/geantvert/.local/share/virtualenvs/fast_transformer/bin/python -m pip install --upgrade pip' command. ###Markdown Check the GPU is enabled and usable. ###Code ! nvidia-smi import logging import os from collections import OrderedDict from typing import Dict, List from typing import OrderedDict as OD from typing import Union import datasets import numpy as np import tensorrt as trt import torch import transformers from datasets import load_dataset, load_metric from tensorrt.tensorrt import IExecutionContext, Logger, Runtime from transformers import ( AutoModelForSequenceClassification, AutoTokenizer, IntervalStrategy, PreTrainedModel, PreTrainedTokenizer, Trainer, TrainingArguments, ) from transformer_deploy.backends.ort_utils import ( cpu_quantization, create_model_for_provider, optimize_onnx, ) from transformer_deploy.backends.pytorch_utils import convert_to_onnx from transformer_deploy.backends.trt_utils import build_engine, get_binding_idxs, infer_tensorrt from transformer_deploy.benchmarks.utils import print_timings, track_infer_time from transformer_deploy.QDQModels.calibration_utils import QATCalibrate ###Output _____no_output_____ ###Markdown Set logging to `error` level to ease readability of this `notebook` on Github. ###Code log_level = logging.ERROR logging.getLogger().setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() trt_logger: Logger = trt.Logger(trt.Logger.ERROR) transformers.logging.set_verbosity_error() ###Output _____no_output_____ ###Markdown Preprocess data This part is inspired from an [official Notebooks from Hugging Face](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb).There is nothing special to do. 
Define the task: ###Code model_name = "roberta-base" task = "mnli" num_labels = 3 batch_size = 32 max_seq_len = 256 validation_key = "validation_matched" timings: Dict[str, List[float]] = dict() runtime: Runtime = trt.Runtime(trt_logger) profile_index = 0 ###Output _____no_output_____ ###Markdown Preprocess data (task specific): ###Code def preprocess_function(examples): return tokenizer( examples["premise"], examples["hypothesis"], truncation=True, padding="max_length", max_length=max_seq_len ) def compute_metrics(eval_pred): predictions, labels = eval_pred if task != "stsb": predictions = np.argmax(predictions, axis=1) else: predictions = predictions[:, 0] return metric.compute(predictions=predictions, references=labels) def convert_tensor(data: OD[str, List[List[int]]], output: str) -> OD[str, Union[np.ndarray, torch.Tensor]]: input: OD[str, Union[np.ndarray, torch.Tensor]] = OrderedDict() for k in ["input_ids", "attention_mask", "token_type_ids"]: if k in data: v = data[k] if output == "torch": value = torch.tensor(v, dtype=torch.long, device="cuda") elif output == "np": value = np.asarray(v, dtype=np.int32) else: raise Exception(f"unknown output type: {output}") input[k] = value return input def measure_accuracy(infer, tensor_type: str) -> float: outputs = list() for start_index in range(0, len(encoded_dataset[validation_key]), batch_size): end_index = start_index + batch_size data = encoded_dataset[validation_key][start_index:end_index] inputs: OD[str, np.ndarray] = convert_tensor(data=data, output=tensor_type) output = infer(inputs)[0] if tensor_type == "torch": output = output.detach().cpu().numpy() output = np.argmax(output, axis=1).astype(int).tolist() outputs.extend(output) return np.mean(np.array(outputs) == np.array(validation_labels)) def get_trainer(model: PreTrainedModel) -> Trainer: trainer = Trainer( model, args, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset[validation_key], tokenizer=tokenizer, compute_metrics=compute_metrics, ) transformers.logging.set_verbosity_error() return trainer tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) dataset = load_dataset("glue", task) metric = load_metric("glue", task) encoded_dataset = dataset.map(preprocess_function, batched=True) validation_labels = [item["label"] for item in encoded_dataset[validation_key]] nb_step = 1000 strategy = IntervalStrategy.STEPS args = TrainingArguments( f"{model_name}-{task}", evaluation_strategy=strategy, eval_steps=nb_step, logging_steps=nb_step, save_steps=nb_step, save_strategy=strategy, learning_rate=1e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size * 2, num_train_epochs=1, fp16=True, group_by_length=True, weight_decay=0.01, load_best_model_at_end=True, metric_for_best_model="accuracy", report_to=[], ) ###Output _____no_output_____ ###Markdown (Standard) fine-tuning modelNow that our data are ready, we can download/fine tune the pretrained model. 
###Code model_fp16: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels) trainer = get_trainer(model_fp16) transformers.logging.set_verbosity_error() trainer.train() print(trainer.evaluate()) model_fp16.save_pretrained("model_trained_fp16") ###Output [INFO|trainer.py:457] 2022-03-09 19:15:29,964 >> Using amp half precision backend /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( ###Markdown Add quantization support to any modelThe idea is to take the source code of a specific model and add automatically `QDQ` nodes. QDQ nodes will be placed before and after an operation that we want to quantize, that’s inside these nodes that the information to perform the mapping between high precision and low precision number is stored.If you want to know more, check our documentation on: [https://els-rd.github.io/transformer-deploy/quantization/quantization_ast/](https://els-rd.github.io/transformer-deploy/quantization/quantization_ast/) ###Code for percentile in [99.9, 99.99, 99.999, 99.9999]: with QATCalibrate(method="histogram", percentile=percentile) as qat: model_q: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained( "model_trained_fp16", num_labels=num_labels ) model_q = model_q.cuda() qat.setup_model_qat(model_q) # prepare quantizer to any model with torch.no_grad(): for start_index in range(0, 128, batch_size): end_index = start_index + batch_size data = encoded_dataset["train"][start_index:end_index] input_torch = { k: torch.tensor(v, dtype=torch.long, device="cuda") for k, v in data.items() if k in ["input_ids", "attention_mask", "token_type_ids"] } model_q(**input_torch) trainer = get_trainer(model_q) print(f"percentile: {percentile}") print(trainer.evaluate()) ###Output [INFO|trainer.py:457] 2022-03-09 20:04:30,756 >> Using amp half precision backend ###Markdown As you can see, the chosen percentile value has a high impact on the final accuracy.For the rest of the notebook, we apply the `99.999` percentile. ###Code with QATCalibrate(method="histogram", percentile=99.999) as qat: model_q: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained( "model_trained_fp16", num_labels=num_labels ) model_q = model_q.cuda() qat.setup_model_qat(model_q) # prepare quantizer to any model with torch.no_grad(): for start_index in range(0, 128, batch_size): end_index = start_index + batch_size data = encoded_dataset["train"][start_index:end_index] input_torch = { k: torch.tensor(v, dtype=torch.long, device="cuda") for k, v in data.items() if k in ["input_ids", "attention_mask", "token_type_ids"] } model_q(**input_torch) trainer = get_trainer(model_q) print(trainer.evaluate()) ###Output [INFO|trainer.py:457] 2022-03-09 20:21:33,073 >> Using amp half precision backend ###Markdown Per layer quantization analysisBelow we will run a sensitivity analysis, by enabling quantization of one layer at a time and measuring the accuracy. That way we will be able to detect if the quantization of a specific layer has a larger cost on accuracy than other layers. 
###Code from pytorch_quantization import nn as quant_nn for i in range(12): layer_name = f"layer.{i}" print(layer_name) for name, module in model_q.named_modules(): if isinstance(module, quant_nn.TensorQuantizer): if layer_name in name: module.enable_quant() else: module.disable_quant() trainer.evaluate() print("----") ###Output layer.0 {'eval_loss': 0.34956735372543335, 'eval_accuracy': 0.86571574121243, 'eval_runtime': 20.7064, 'eval_samples_per_second': 474.009, 'eval_steps_per_second': 7.437} ---- layer.1 {'eval_loss': 0.3523275852203369, 'eval_accuracy': 0.8649006622516556, 'eval_runtime': 25.4843, 'eval_samples_per_second': 385.14, 'eval_steps_per_second': 6.043} ---- layer.2 {'eval_loss': 0.356509268283844, 'eval_accuracy': 0.8622516556291391, 'eval_runtime': 20.7496, 'eval_samples_per_second': 473.021, 'eval_steps_per_second': 7.422} ---- layer.3 {'eval_loss': 0.36036217212677, 'eval_accuracy': 0.8617422312786551, 'eval_runtime': 20.7815, 'eval_samples_per_second': 472.296, 'eval_steps_per_second': 7.41} ---- layer.4 {'eval_loss': 0.35000357031822205, 'eval_accuracy': 0.8643912379011717, 'eval_runtime': 20.7921, 'eval_samples_per_second': 472.053, 'eval_steps_per_second': 7.407} ---- layer.5 {'eval_loss': 0.354992538690567, 'eval_accuracy': 0.8644931227712684, 'eval_runtime': 20.7938, 'eval_samples_per_second': 472.016, 'eval_steps_per_second': 7.406} ---- layer.6 {'eval_loss': 0.35205718874931335, 'eval_accuracy': 0.8645950076413652, 'eval_runtime': 20.7918, 'eval_samples_per_second': 472.061, 'eval_steps_per_second': 7.407} ---- layer.7 {'eval_loss': 0.35065746307373047, 'eval_accuracy': 0.8655119714722364, 'eval_runtime': 20.8011, 'eval_samples_per_second': 471.849, 'eval_steps_per_second': 7.403} ---- layer.8 {'eval_loss': 0.3491470217704773, 'eval_accuracy': 0.8659195109526235, 'eval_runtime': 20.8112, 'eval_samples_per_second': 471.621, 'eval_steps_per_second': 7.4} ---- layer.9 {'eval_loss': 0.3492998480796814, 'eval_accuracy': 0.8659195109526235, 'eval_runtime': 20.8695, 'eval_samples_per_second': 470.303, 'eval_steps_per_second': 7.379} ---- layer.10 {'eval_loss': 0.3501480221748352, 'eval_accuracy': 0.866225165562914, 'eval_runtime': 20.8698, 'eval_samples_per_second': 470.296, 'eval_steps_per_second': 7.379} ---- layer.11 {'eval_loss': 0.3497083783149719, 'eval_accuracy': 0.866225165562914, 'eval_runtime': 20.9345, 'eval_samples_per_second': 468.843, 'eval_steps_per_second': 7.356} ---- ###Markdown It seems that quantization of layers 2 to 6 has the largest accuracy impact. Operator quantization analysisBelow we will run a sensitivity analysis, by enabling quantization of one operator type at a time and measuring the accuracy. That way we will be able to detect if a specific operator has a larger cost on accuracy. On Roberta we only quantize `matmul` and `LayerNorm`, so we test both candidates. 
###Code for op in ["matmul", "layernorm"]: for name, module in model_q.named_modules(): if isinstance(module, quant_nn.TensorQuantizer): if op in name: module.enable_quant() else: module.disable_quant() print(op) trainer.evaluate() print("----") ###Output matmul {'eval_loss': 0.3494793176651001, 'eval_accuracy': 0.8654100866021396, 'eval_runtime': 26.892, 'eval_samples_per_second': 364.978, 'eval_steps_per_second': 5.727} ---- layernorm {'eval_loss': 0.3585323095321655, 'eval_accuracy': 0.8587875700458482, 'eval_runtime': 24.0982, 'eval_samples_per_second': 407.293, 'eval_steps_per_second': 6.391} ---- ###Markdown It appears that `LayerNorm` quantization has a significant accuracy cost.Our goal is to disable quantization for as few operations as possible while preserving accuracy as much as possible. Therefore we will try to only disable quantization for `LayerNorm` on Layers 2 to 6. ###Code disable_layer_names = ["layer.2", "layer.3", "layer.4", "layer.6"] for name, module in model_q.named_modules(): if isinstance(module, quant_nn.TensorQuantizer): if any([f"{l}.output.layernorm" in name for l in disable_layer_names]): print(f"disable {name}") module.disable_quant() else: module.enable_quant() trainer.evaluate() ###Output disable roberta.encoder.layer.2.output.layernorm_quantizer_0 disable roberta.encoder.layer.2.output.layernorm_quantizer_1 disable roberta.encoder.layer.3.output.layernorm_quantizer_0 disable roberta.encoder.layer.3.output.layernorm_quantizer_1 disable roberta.encoder.layer.4.output.layernorm_quantizer_0 disable roberta.encoder.layer.4.output.layernorm_quantizer_1 disable roberta.encoder.layer.6.output.layernorm_quantizer_0 disable roberta.encoder.layer.6.output.layernorm_quantizer_1 {'eval_loss': 0.3617263436317444, 'eval_accuracy': 0.8614365766683647, 'eval_runtime': 46.0379, 'eval_samples_per_second': 213.194, 'eval_steps_per_second': 3.345} ###Markdown By just disabling quantization for a single operator on a few layers, we keep most of the performance boost (quantization) but retrieve more than 1 point of accuracy. It's also possible to perform an analysis per quantizer to get a smaller granularity but it's a bit slow to run.If we stop here, it's called a Post Training Quantization (PTQ). Below, we will try to retrieve even more accuracy. Quantization Aware Training (QAT) We retrain the model with 1/10 or 1/100 of the original learning rate. Our goal is to retrieve most of the original accuracy. ###Code args.learning_rate = 1e-7 trainer = get_trainer(model_q) trainer.train() print(trainer.evaluate()) model_q.save_pretrained("model-qat") ###Output [INFO|trainer.py:457] 2022-03-09 20:28:11,049 >> Using amp half precision backend /home/geantvert/.local/share/virtualenvs/fast_transformer/lib/python3.9/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( ###Markdown Export a `QDQ Pytorch` model to `ONNX`We need to enable fake quantization mode from Pytorch. 
###Code data = encoded_dataset["train"][1:3] input_torch = convert_tensor(data, output="torch") convert_to_onnx( model_pytorch=model_q, output_path="model_qat.onnx", inputs_pytorch=input_torch, quantization=True, var_output_seq=False, ) del model_q QATCalibrate.restore() ###Output _____no_output_____ ###Markdown Benchmark Convert `ONNX` graph to `TensorRT` engine ###Code engine = build_engine( runtime=runtime, onnx_file_path="model_qat.onnx", logger=trt_logger, min_shape=(1, max_seq_len), optimal_shape=(batch_size, max_seq_len), max_shape=(batch_size, max_seq_len), workspace_size=10000 * 1024 * 1024, fp16=True, int8=True, ) # same as above, but from the terminal # !/usr/src/tensorrt/bin/trtexec --onnx=model_qat.onnx --shapes=input_ids:32x256,attention_mask:32x256 --best --workspace=10000 --saveEngine="test.plan" ###Output _____no_output_____ ###Markdown Prepare input and output buffer ###Code context: IExecutionContext = engine.create_execution_context() context.set_optimization_profile_async( profile_index=profile_index, stream_handle=torch.cuda.current_stream().cuda_stream ) input_binding_idxs, output_binding_idxs = get_binding_idxs(engine, profile_index) # type: List[int], List[int] data = encoded_dataset["train"][0:batch_size] input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") input_np: OD[str, np.ndarray] = convert_tensor(data=data, output="np") ###Output _____no_output_____ ###Markdown Inference on `TensorRT`We first check that inference is working correctly: ###Code tensorrt_output = infer_tensorrt( context=context, host_inputs=input_torch, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) print(tensorrt_output) ###Output [tensor([[ 1.2287, 1.3706, -2.4623], [ 2.4742, -0.7816, -1.8427], [ 2.4837, -0.3966, -2.2666], [ 2.9077, -0.3062, -2.9778], [ 2.3437, 0.0488, -2.6377], [ 3.7914, -1.1918, -3.1387], [-3.6134, 2.7432, 0.8490], [ 3.6679, -1.5787, -2.6408], [ 1.0155, -1.2787, 0.4250], [-3.4514, -0.4434, 4.2748], [ 3.5201, -1.1297, -2.8988], [-3.0225, -0.4062, 3.8606], [-2.7311, 3.5470, -0.4632], [-2.0741, 1.6613, 0.5798], [-0.4047, -0.8650, 1.6144], [ 2.8432, -1.3301, -1.8994], [ 3.7722, -0.9103, -3.3070], [-2.4204, -2.1432, 4.6537], [-3.1179, -1.3207, 4.6400], [-1.8794, 4.1075, -1.8630], [ 3.7726, -1.2056, -3.0701], [ 1.8645, 1.9744, -3.8743], [-3.1448, -1.2497, 4.5782], [ 3.5385, -0.2421, -3.6629], [ 3.7501, -1.6469, -2.7108], [-0.6568, 0.9046, -0.0228], [-3.2998, 0.0867, 3.3673], [-2.1030, 4.0461, -1.6705], [-3.7080, 0.4164, 3.4332], [ 3.6850, -0.9984, -3.0304], [ 3.4525, -0.5405, -3.2981], [ 3.6128, -0.9298, -3.0746]], device='cuda:0')] ###Markdown Measure of the accuracy: ###Code infer_trt = lambda inputs: infer_tensorrt( context=context, host_inputs=inputs, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) measure_accuracy(infer=infer_trt, tensor_type="torch") ###Output _____no_output_____ ###Markdown Latency measures: ###Code time_buffer = list() for _ in range(100): with track_infer_time(time_buffer): _ = infer_tensorrt( context=context, host_inputs=input_torch, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) print_timings(name="TensorRT (INT-8)", timings=time_buffer) del engine, context ###Output _____no_output_____ ###Markdown Pytorch baselineTime to get some numbers to compare with. 
GPU executionWe will measure vanilla Pytorch inference on both FP32 and FP16 precision on GPU, it will be our baseline: ###Code baseline_model = AutoModelForSequenceClassification.from_pretrained("model_trained_fp16", num_labels=num_labels) baseline_model = baseline_model.cuda() baseline_model = baseline_model.eval() data = encoded_dataset["train"][0:batch_size] input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") with torch.inference_mode(): for _ in range(30): _ = baseline_model(**input_torch) torch.cuda.synchronize() time_buffer = list() for _ in range(100): with track_infer_time(time_buffer): _ = baseline_model(**input_torch) torch.cuda.synchronize() print_timings(name="Pytorch (FP32)", timings=time_buffer) with torch.inference_mode(): with torch.cuda.amp.autocast(): for _ in range(30): _ = baseline_model(**input_torch) torch.cuda.synchronize() time_buffer = [] for _ in range(100): with track_infer_time(time_buffer): _ = baseline_model(**input_torch) torch.cuda.synchronize() print_timings(name="Pytorch (FP16)", timings=time_buffer) del baseline_model ###Output [Pytorch (FP16)] mean=58.73ms, sd=1.88ms, min=55.56ms, max=69.69ms, median=58.03ms, 95p=62.28ms, 99p=64.42ms ###Markdown CPU execution ###Code baseline_model = AutoModelForSequenceClassification.from_pretrained("model_trained_fp16", num_labels=num_labels) baseline_model = baseline_model.eval() data = encoded_dataset["train"][0:batch_size] input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") input_torch_cpu = {k: v.to("cpu") for k, v in input_torch.items()} torch.set_num_threads(os.cpu_count()) with torch.inference_mode(): for _ in range(3): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() time_buffer = list() for _ in range(10): with track_infer_time(time_buffer): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() print_timings(name="Pytorch (FP32) - CPU", timings=time_buffer) with torch.inference_mode(): with torch.cuda.amp.autocast(): for _ in range(3): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() time_buffer = [] for _ in range(10): with track_infer_time(time_buffer): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() print_timings(name="Pytorch (FP16) - CPU", timings=time_buffer) del baseline_model ###Output [Pytorch (FP16) - CPU] mean=4422.39ms, sd=170.45ms, min=4250.10ms, max=4744.97ms, median=4337.85ms, 95p=4727.72ms, 99p=4741.52ms ###Markdown Below, we will perform dynamic quantization on CPU. ###Code quantized_baseline_model = AutoModelForSequenceClassification.from_pretrained( "model_trained_fp16", num_labels=num_labels ) quantized_baseline_model = quantized_baseline_model.eval() quantized_baseline_model = torch.quantization.quantize_dynamic( quantized_baseline_model, {torch.nn.Linear}, dtype=torch.qint8 ) with torch.inference_mode(): for _ in range(3): _ = quantized_baseline_model(**input_torch_cpu) torch.cuda.synchronize() time_buffer = list() for _ in range(10): with track_infer_time(time_buffer): _ = quantized_baseline_model(**input_torch_cpu) torch.cuda.synchronize() print_timings(name="Pytorch (INT-8) - CPU", timings=time_buffer) ###Output [Pytorch (INT-8) - CPU] mean=3818.99ms, sd=137.98ms, min=3616.11ms, max=4049.00ms, median=3807.33ms, 95p=4024.45ms, 99p=4044.09ms ###Markdown TensorRT baseline Below we export our finetuned model, the purpose is to only check the performance on mixed precision (FP16, no quantization). 
###Code baseline_model = AutoModelForSequenceClassification.from_pretrained("model_trained_fp16", num_labels=num_labels) baseline_model = baseline_model.cuda() convert_to_onnx( baseline_model, output_path="baseline.onnx", inputs_pytorch=input_torch, quantization=False, var_output_seq=False ) del baseline_model engine = build_engine( runtime=runtime, onnx_file_path="baseline.onnx", logger=trt_logger, min_shape=(batch_size, max_seq_len), optimal_shape=(batch_size, max_seq_len), max_shape=(batch_size, max_seq_len), workspace_size=10000 * 1024 * 1024, fp16=True, int8=False, ) input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") context: IExecutionContext = engine.create_execution_context() context.set_optimization_profile_async( profile_index=profile_index, stream_handle=torch.cuda.current_stream().cuda_stream ) input_binding_idxs, output_binding_idxs = get_binding_idxs(engine, profile_index) # type: List[int], List[int] for _ in range(30): _ = infer_tensorrt( context=context, host_inputs=input_torch, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) time_buffer = list() for _ in range(100): with track_infer_time(time_buffer): _ = infer_tensorrt( context=context, host_inputs=input_torch, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) print_timings(name="TensorRT (FP16)", timings=time_buffer) del engine, context ###Output [TensorRT (FP16)] mean=32.36ms, sd=1.47ms, min=29.92ms, max=38.08ms, median=32.47ms, 95p=34.38ms, 99p=36.50ms ###Markdown ONNX Runtime baseline ONNX Runtime is the go-to inference solution from Microsoft. The recent 1.10 version of ONNX Runtime (with TensorRT support) is still a bit buggy on transformer models, that is why we use the 1.9.0 version in the measures below. As before, CPU quantization is dynamic. Function `create_model_for_provider` will set ONNX Runtime to use all cores available and enable any possible optimizations. 
###Code optimize_onnx( onnx_path="baseline.onnx", onnx_optim_model_path="baseline-optimized.onnx", fp16=True, use_cuda=True, num_attention_heads=12, hidden_size=768, architecture="bert", ) cpu_quantization(input_model_path="baseline.onnx", output_model_path="baseline-quantized.onnx") labels = [item["label"] for item in encoded_dataset[validation_key]] data = encoded_dataset[validation_key][0:batch_size] inputs_onnx: OD[str, np.ndarray] = convert_tensor(data=data, output="np") model = create_model_for_provider(path="baseline-optimized.onnx", provider_to_use="CUDAExecutionProvider") output = model.run(None, inputs_onnx) data = encoded_dataset["train"][0:batch_size] inputs_onnx: OD[str, np.ndarray] = convert_tensor(data=data, output="np") for provider, model_path, benchmark_name, warmup, nb_inference in [ ("CUDAExecutionProvider", "baseline.onnx", "ONNX Runtime GPU (FP32)", 10, 100), ("CUDAExecutionProvider", "baseline-optimized.onnx", "ONNX Runtime GPU (FP16)", 10, 100), ("CPUExecutionProvider", "baseline.onnx", "ONNX Runtime CPU (FP32)", 3, 10), ("CPUExecutionProvider", "baseline-optimized.onnx", "ONNX Runtime CPU (FP16)", 3, 10), ("CPUExecutionProvider", "baseline-quantized.onnx", "ONNX Runtime CPU (INT-8)", 3, 10), ]: model = create_model_for_provider(path=model_path, provider_to_use=provider) for _ in range(warmup): _ = model.run(None, inputs_onnx) time_buffer = [] for _ in range(nb_inference): with track_infer_time(time_buffer): _ = model.run(None, inputs_onnx) print_timings(name=benchmark_name, timings=time_buffer) del model ###Output [ONNX Runtime GPU (FP32)] mean=82.50ms, sd=11.53ms, min=74.12ms, max=118.73ms, median=77.62ms, 95p=112.51ms, 99p=117.81ms [ONNX Runtime GPU (FP16)] mean=36.23ms, sd=4.01ms, min=33.47ms, max=55.75ms, median=35.35ms, 95p=46.89ms, 99p=53.95ms [ONNX Runtime CPU (FP32)] mean=4608.31ms, sd=468.16ms, min=3954.09ms, max=5226.20ms, median=4531.47ms, 95p=5189.71ms, 99p=5218.90ms [ONNX Runtime CPU (FP16)] mean=3987.95ms, sd=223.00ms, min=3755.70ms, max=4548.57ms, median=3907.74ms, 95p=4375.23ms, 99p=4513.90ms [ONNX Runtime CPU (INT-8)] mean=3506.39ms, sd=125.07ms, min=3425.41ms, max=3872.95ms, median=3463.65ms, 95p=3713.08ms, 99p=3840.98ms ###Markdown Measure of the accuracy with ONNX Runtime engine and CUDA provider: ###Code model = create_model_for_provider(path="baseline.onnx", provider_to_use="CUDAExecutionProvider") infer_ort = lambda tokens: model.run(None, tokens) measure_accuracy(infer=infer_ort, tensor_type="np") model = create_model_for_provider(path="baseline-optimized.onnx", provider_to_use="CUDAExecutionProvider") infer_ort = lambda tokens: model.run(None, tokens) measure_accuracy(infer=infer_ort, tensor_type="np") model = create_model_for_provider(path="baseline-quantized.onnx", provider_to_use="CPUExecutionProvider") infer_ort = lambda tokens: model.run(None, tokens) measure_accuracy(infer=infer_ort, tensor_type="np") del model ###Output _____no_output_____ ###Markdown Nvidia GPU INT-8 quantization on any transformers model (encoder based) Quantization is one of the most effective and generic approaches to make model inference faster.Basically, it replaces high precision float numbers in model tensors encoded in 32 or 16 bits by lower precision ones encoded in 8 bits or less:* it takes less memory* computation is easier / fasterIt can be applied to any model in theory, and, if done well, it should maintain accuracy.The purpose of this notebook is to show a process to perform quantization on any `transformer` architectures.Moreover, the library is 
designed to offer a simple API and still let advanced users tweak the algorithm.**TL;DR, we benchmarked Pytorch and Nvidia TensorRT, on both CPU and GPU, with/without quantization, our methods provide the fastest inference by large margin**.| Framework | Precision | Latency (ms) | Accuracy | Speedup | Hardware ||:--------------------------|-----------|--------------|----------|:-----------|:--------:|| Pytorch | FP32 | 4267 | 86.6 % | X 0.02 | CPU || Pytorch | FP16 | 4428 | 86.6 % | X 0.02 | CPU || Pytorch | INT-8 | 3300 | 85.9 % | X 0.02 | CPU || Pytorch | FP32 | 77 | 86.6 % | X 1 | GPU || Pytorch | FP16 | 56 | 86.6 % | X 1.38 | GPU || ONNX Runtime | FP32 | 76 | 86.6 % | X 1.01 | GPU || ONNX Runtime | FP16 | 34 | 86.6 % | X 2.26 | GPU || ONNX Runtime | FP32 | 4023 | 86.6 % | X 0.02 | CPU || ONNX Runtime | FP16 | 3957 | 86.6 % | X 0.02 | CPU || ONNX Runtime | INT-8 | 3336 | 86.5 % | X 0.02 | CPU || TensorRT | FP16 | 30 | 86.6 % | X 2.57 | GPU || TensorRT (**our method**) | **INT-8** | **17** | 86.2 % | **X 4.53** | **GPU** |> measures done on a Nvidia RTX 3090 GPU + 12 cores i7 Intel CPU (support AVX-2 instruction)>> `base` architecture flavor with batch of size 32 / seq len 256, similar results obtained for other sizes/seq len not included in the table.>> accuracy obtained after a single epoch, no LR search or any hyper parameter optimization A (very) short intro to INT-8 quantizationBasic idea behind model quantization is to replace tensors made of float numbers (usually encoded on 32 bits) by lower precision representation (integers encoded on 8 bits for Nvidia GPUs).Therefore computation is faster and model memory footprint is lower. Making tensor storage smaller makes memory transfer faster... and is also a source of computation acceleration.This approach is very interesting for its trade-off: you reduce inference time significantly, and it costs close to nothing in accuracy.Replacing float numbers by integers is done through a mapping.This step is called `calibration`, and its purpose is to compute for each tensor or each channel of a tensor (one of its dimensions) a range covering most weights and then define a scale and a distribution center to map float numbers to 8 bits integers.There are several ways to perform quantization, depending of how and when the `calibration` is performed:* dynamically: the mapping is done online, during the inference, there are some overhead but it's usually the easiest to leverage, end user has very few configuration to set,* statically, after training (`post training quantization` or `PTQ`): this way is efficient because quantization is done offline, before inference, but it may have an accuracy cost,* statically, after training (`quantization aware training` or `QAT`): like a PTQ followed by a second fine tuning. Same efficiency but usually slightly better accuracy.Nvidia GPUs don't support dynamic quantization, CPU supports all types of quantization. 
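To make the mapping concrete, here is a tiny numeric sketch of the symmetric, scale-based scheme described above (illustrative only, the real calibration is handled by TensorRT / pytorch-quantization):
###Code
import numpy as np

# Toy tensor with one large outlier
x = np.array([0.02, -0.5, 0.75, -1.2, 4.0], dtype=np.float32)

# Calibration picks a range (here simply the max absolute value) and derives a scale
amax = np.abs(x).max()
scale = amax / 127.0  # signed 8-bit integers cover [-127, 127]

# Quantize: round to the nearest step and clip anything outside the calibrated range
q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)

# Dequantize: back to float, with a small rounding error
x_hat = q.astype(np.float32) * scale
print(q, x_hat)

# With a narrower calibrated range (e.g. a percentile below the max), outliers saturate
scale_narrow = 1.2 / 127.0
q_narrow = np.clip(np.round(x / scale_narrow), -127, 127).astype(np.int8)
print(q_narrow * scale_narrow)  # the 4.0 outlier is clamped to about 1.2
###Output
_____no_output_____
###Markdown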
Compared to `PTQ`, `QAT` better preserves accuracy and should be preferred in most cases.During the quantization aware *training*:* in the inside, Pytorch will train with high precision float numbers,* on the outside, Pytorch will simulate that a quantization has already been applied and output results accordingly (for loss computation for instance)The simulation process is done through the add of quantization / dequantization nodes, most often called `QDQ`, it's an abbreviation you will see often in the quantization world.> Want to learn more about quantization?> > * You can check this [high quality blog post](https://leimao.github.io/article/Neural-Networks-Quantization/) for more information.> * The process is well described in this [Nvidia presentation](https://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf) Why this notebook?CPU quantization is supported out of the box by `Pytorch` and `ONNX Runtime`.**GPU quantization on the other side requires specific tools and process to be applied**.In the specific case of `transformer` models, few demos from Nvidia and Microsoft exist; they are all for the old vanilla Bert architecture.It doesn't support modern architectures out of the box, like `Albert`, `Roberta`, `Deberta` or `Electra`. Project setup Dependencies installation Your machine should have Nvidia CUDA 11.X, TensorRT 8.2.1 and cuBLAS installed. It's said to be tricky to install, in my experience, just follow Nvidia download page instructions **and nothing else**, it should work out of the box. Nvidia Docker image could be a good choice too. ###Code #! pip3 install git+ssh://[email protected]/ELS-RD/transformer-deploy #! pip3 install datasets sklearn #! pip3 install git+ssh://[email protected]/NVIDIA/TensorRT#egg=pytorch-quantization\&subdirectory=tools/pytorch-quantization/ ###Output _____no_output_____ ###Markdown Check the GPU is enabled and usable. ###Code ! nvidia-smi import logging import os from collections import OrderedDict from typing import Dict, List from typing import OrderedDict as OD from typing import Union import datasets import numpy as np import pycuda.autoinit import tensorrt as trt import torch import transformers from datasets import load_dataset, load_metric from pycuda._driver import Stream from tensorrt.tensorrt import IExecutionContext, Logger, Runtime from pytorch_quantization import nn as quant_nn from transformers import ( AutoModelForSequenceClassification, AutoTokenizer, IntervalStrategy, PreTrainedModel, PreTrainedTokenizer, Trainer, TrainingArguments, ) from transformer_deploy.backends.ort_utils import ( convert_to_onnx, convert_to_quant_onnx, cpu_quantization, create_model_for_provider, optimize_onnx, ) from transformer_deploy.backends.trt_utils import build_engine, get_binding_idxs, infer_tensorrt from transformer_deploy.benchmarks.utils import print_timings, track_infer_time from transformer_deploy.QDQModels.calibration_utils import QATCalibrate ###Output _____no_output_____ ###Markdown Set logging to `error` level to ease readability of this `notebook` on Github. 
###Code log_level = logging.ERROR logging.getLogger().setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() trt_logger: Logger = trt.Logger(trt.Logger.ERROR) transformers.logging.set_verbosity_error() ###Output _____no_output_____ ###Markdown Preprocess data This part is inspired from an [official Notebooks from Hugging Face](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb).There is nothing special to do. Define the task: ###Code model_name = "roberta-base" task = "mnli" num_labels = 3 batch_size = 32 max_seq_len = 256 validation_key = "validation_matched" timings: Dict[str, List[float]] = dict() runtime: Runtime = trt.Runtime(trt_logger) profile_index = 0 ###Output _____no_output_____ ###Markdown Preprocess data (task specific): ###Code def preprocess_function(examples): return tokenizer( examples["premise"], examples["hypothesis"], truncation=True, padding="max_length", max_length=max_seq_len ) def compute_metrics(eval_pred): predictions, labels = eval_pred if task != "stsb": predictions = np.argmax(predictions, axis=1) else: predictions = predictions[:, 0] return metric.compute(predictions=predictions, references=labels) def convert_tensor(data: OD[str, List[List[int]]], output: str) -> OD[str, Union[np.ndarray, torch.Tensor]]: input: OD[str, Union[np.ndarray, torch.Tensor]] = OrderedDict() for k in ["input_ids", "attention_mask", "token_type_ids"]: if k in data: v = data[k] if output == "torch": value = torch.tensor(v, dtype=torch.long, device="cuda") elif output == "np": value = np.asarray(v, dtype=np.int32) else: raise Exception(f"unknown output type: {output}") input[k] = value return input def measure_accuracy(infer, int64: bool) -> float: outputs = list() for start_index in range(0, len(encoded_dataset[validation_key]), batch_size): end_index = start_index + batch_size data = encoded_dataset[validation_key][start_index:end_index] inputs: OD[str, np.ndarray] = convert_tensor(data=data, output="np") if int64: for k, v in inputs.items(): inputs[k] = v.astype(np.int64) output = infer(inputs) output = np.argmax(output[0], axis=1).astype(int).tolist() outputs.extend(output) return np.mean(np.array(outputs) == np.array(validation_labels)) def get_trainer(model: PreTrainedModel) -> Trainer: trainer = Trainer( model, args, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset[validation_key], tokenizer=tokenizer, compute_metrics=compute_metrics, ) transformers.logging.set_verbosity_error() return trainer tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) dataset = load_dataset("glue", task) metric = load_metric("glue", task) encoded_dataset = dataset.map(preprocess_function, batched=True) validation_labels = [item["label"] for item in encoded_dataset[validation_key]] nb_step = 1000 strategy = IntervalStrategy.STEPS args = TrainingArguments( f"{model_name}-{task}", evaluation_strategy=strategy, eval_steps=nb_step, logging_steps=nb_step, save_steps=nb_step, save_strategy=strategy, learning_rate=1e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size * 2, num_train_epochs=1, fp16=True, group_by_length=True, weight_decay=0.01, load_best_model_at_end=True, metric_for_best_model="accuracy", report_to=[], ) ###Output _____no_output_____ ###Markdown (Standard) fine-tuning modelNow that our data are ready, we 
can download/fine tune the pretrained model. ###Code model_fp16: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels) trainer = get_trainer(model_fp16) transformers.logging.set_verbosity_error() trainer.train() print(trainer.evaluate()) model_fp16.save_pretrained("model_trained_fp16") ###Output [INFO|trainer.py:439] 2021-12-27 09:19:51,063 >> Using amp half precision backend ###Markdown Add quantization support to any modelThe idea is to take the source code of a specific model and add automatically `QDQ` nodes. QDQ nodes will be placed before and after an operation that we want to quantize, that’s inside these nodes that the information to perform the mapping between high precision and low precision number is stored.That way, quantization will work out of the box for the final user.The process is based on Python AST modification, basically we parse the model source code in RAM, we convert it to a tree, then we patch the tree to add the QDQ nodes and we replace, still in RAM, the original module source code. Our library also offer the option to restore original behavior.In theory it works for any model. However, not related to quantization, some models are not fully compliant with `TensorRT` (unsupported operators, etc.).For those models, we rewrite some part of the source code, these patches are manually written but are applied to the model at run time (like the AST manipulation).> concrete examples on `Roberta` architecture: in HF library, there is a `cumsum` operator used during the position embedding generation. Something very simple. It takes as input an integer tensor and output an integer tensor. It happens that the `cumsum` operator from TensorRT supports float but not integer (https://github.com/onnx/onnx-tensorrt/blob/master/docs/operators.md). It leads to a crash during the model conversion with a strange error message. Converting the input to float tensor fixes the issue. The process below is:* Calibrate* Quantization Aware training (QAT)> there are many ways to get a QDQ model, you can modify Pytorch source code (including doing it at runtime like here), patch ONNX graph (this approach is used at Microsoft for instance but only support PTQ, not QAT as ONNX file can't be trained on Pytorch for now) or leverage the new FX Pytorch interface (it's a bit experimental and it seems to miss some feature to support Nvidia QAT library). Modifying the source code is the most straightforward, and doing it through AST is the least intrusive (no need to duplicate the work of HF). Post Training Quantization (PTQ)A PTQ is basically a fine tuned model where we add quantization nodes and that we calibrate.Calibration is a key step in the static quantization process. Its quality depends on the final accuracy (the inference speed will stay the same). Moreover, a good PTQ is a good basis for a good Quantization Aware Training (QAT).By calling `with QATCalibrate(...) as qat:`, the lib will patch transformer model AST (source code) in RAM, basically adding quantization support to each model. Calibration percentile grid searchOne of the things we try to guess during the calibration is what range of tensor values capture most of the information stored in the tensor. Indeed, a FP32 tensor can store at the same time very large and very small values, we obviously can't do the same with a 8-bits integer tensors and a scale. 
An 8-bits integer can only encode 255 values so we need to fix some limits and say, if a value is outside our limits, it just takes a maximum value instead of its real one. For instance, if we say our range is -1000 to +1000 and a tensor contains the value +4000, it will be replaced by the maximum value, +1000.As said before, we will use the histogram method to find the perfect range. We also need to choose a percentile. Usually, you will choose something very close to 100.If the percentile is too small, we put too many values outside the covered range. Values outside the range will be replaced by a single maximum value and you lose some granularity in model weights.If the percentile is too big, your range will be very large and because 8-bits signed integers can only encode values between -127 to +127, even when you use a scale you lose in granularity.Therefore, we launch a grid search on percentile hyper parameter. ###Code for percentile in [99.9, 99.99, 99.999, 99.9999]: with QATCalibrate(method="histogram", percentile=percentile) as qat: model_q: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained( "model_trained_fp16", num_labels=num_labels ) model_q = model_q.cuda() qat.setup_model_qat(model_q) # prepare quantizer to any model with torch.no_grad(): for start_index in range(0, 128, batch_size): end_index = start_index + batch_size data = encoded_dataset["train"][start_index:end_index] input_torch = { k: torch.tensor(v, dtype=torch.long, device="cuda") for k, v in data.items() if k in ["input_ids", "attention_mask", "token_type_ids"] } model_q(**input_torch) trainer = get_trainer(model_q) print(f"percentile: {percentile}") print(trainer.evaluate()) ###Output [INFO|trainer.py:439] 2021-12-27 17:25:51,070 >> Using amp half precision backend ###Markdown As you can see, the chosen percentile value has a high impact on the final accuracy.For the rest of the notebook, we apply the `99.999` percentile. ###Code with QATCalibrate(method="histogram", percentile=99.999) as qat: model_q: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained( "model_trained_fp16", num_labels=num_labels ) model_q = model_q.cuda() qat.setup_model_qat(model_q) # prepare quantizer to any model with torch.no_grad(): for start_index in range(0, 128, batch_size): end_index = start_index + batch_size data = encoded_dataset["train"][start_index:end_index] input_torch = { k: torch.tensor(v, dtype=torch.long, device="cuda") for k, v in data.items() if k in ["input_ids", "attention_mask", "token_type_ids"] } model_q(**input_torch) trainer = get_trainer(model_q) print(trainer.evaluate()) ###Output [INFO|trainer.py:439] 2021-12-28 13:52:09,215 >> Using amp half precision backend ###Markdown Per layer quantization analysisBelow we will run a sensitivity analysis, by enabling quantization of one layer at a time and measuring the accuracy. That way we will be able to detect if the quantization of a specific layer has a larger cost on accuracy than other layers. 
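For convenience, the same layer-by-layer toggling can also be wrapped so that all accuracies land in one dictionary that is easy to sort. This is only a compact variant of the cell below and assumes the `model_q` and `trainer` objects defined above; note the trailing dot in the pattern, so that `layer.1` does not also match `layer.10` and `layer.11`:
###Code
from pytorch_quantization import nn as quant_nn

per_layer_accuracy = {}
for i in range(12):
    layer_pattern = f"layer.{i}."  # trailing dot disambiguates layer.1 from layer.10/11
    # Enable only the quantizers belonging to this encoder layer.
    for name, module in model_q.named_modules():
        if isinstance(module, quant_nn.TensorQuantizer):
            if layer_pattern in name:
                module.enable_quant()
            else:
                module.disable_quant()
    per_layer_accuracy[layer_pattern] = trainer.evaluate()["eval_accuracy"]

# Most impacted layers (lowest accuracy) first.
for layer_pattern, accuracy in sorted(per_layer_accuracy.items(), key=lambda kv: kv[1]):
    print(layer_pattern, accuracy)
###Output
_____no_output_____
###Markdown The cell below runs the same sweep step by step and prints the full evaluation dictionary for each layer: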
###Code from pytorch_quantization import nn as quant_nn for i in range(12): layer_name = f"layer.{i}" print(layer_name) for name, module in model_q.named_modules(): if isinstance(module, quant_nn.TensorQuantizer): if layer_name in name: module.enable_quant() else: module.disable_quant() trainer.evaluate() print("----") ###Output layer.0 {'eval_loss': 0.35163024067878723, 'eval_accuracy': 0.8663270504330107, 'eval_runtime': 20.695, 'eval_samples_per_second': 474.27, 'eval_steps_per_second': 7.441} ---- layer.1 {'eval_loss': 0.3527306318283081, 'eval_accuracy': 0.8661232806928171, 'eval_runtime': 26.1334, 'eval_samples_per_second': 375.573, 'eval_steps_per_second': 5.893} ---- layer.2 {'eval_loss': 0.3557673394680023, 'eval_accuracy': 0.8629648497198166, 'eval_runtime': 21.1364, 'eval_samples_per_second': 464.366, 'eval_steps_per_second': 7.286} ---- layer.3 {'eval_loss': 0.3551430106163025, 'eval_accuracy': 0.8649006622516556, 'eval_runtime': 20.9252, 'eval_samples_per_second': 469.051, 'eval_steps_per_second': 7.36} ---- layer.4 {'eval_loss': 0.35053929686546326, 'eval_accuracy': 0.8649006622516556, 'eval_runtime': 21.05, 'eval_samples_per_second': 466.271, 'eval_steps_per_second': 7.316} ---- layer.5 {'eval_loss': 0.35701483488082886, 'eval_accuracy': 0.865206316861946, 'eval_runtime': 20.9236, 'eval_samples_per_second': 469.088, 'eval_steps_per_second': 7.36} ---- layer.6 {'eval_loss': 0.35283517837524414, 'eval_accuracy': 0.8649006622516556, 'eval_runtime': 20.8179, 'eval_samples_per_second': 471.469, 'eval_steps_per_second': 7.397} ---- layer.7 {'eval_loss': 0.35288652777671814, 'eval_accuracy': 0.866632705043301, 'eval_runtime': 20.7823, 'eval_samples_per_second': 472.277, 'eval_steps_per_second': 7.41} ---- layer.8 {'eval_loss': 0.35080182552337646, 'eval_accuracy': 0.8672440142638819, 'eval_runtime': 20.737, 'eval_samples_per_second': 473.308, 'eval_steps_per_second': 7.426} ---- layer.9 {'eval_loss': 0.3503498136997223, 'eval_accuracy': 0.8673458991339786, 'eval_runtime': 20.8899, 'eval_samples_per_second': 469.843, 'eval_steps_per_second': 7.372} ---- layer.10 {'eval_loss': 0.3510246276855469, 'eval_accuracy': 0.8658176260825268, 'eval_runtime': 20.8428, 'eval_samples_per_second': 470.905, 'eval_steps_per_second': 7.389} ---- layer.11 {'eval_loss': 0.3509054183959961, 'eval_accuracy': 0.8656138563423331, 'eval_runtime': 20.8451, 'eval_samples_per_second': 470.853, 'eval_steps_per_second': 7.388} ---- ###Markdown It seems that quantization of layers 2 to 6 has the largest accuracy impact. Operator quantization analysisBelow we will run a sensitivity analysis, by enabling quantization of one operator type at a time and measuring the accuracy. That way we will be able to detect if a specific operator has a larger cost on accuracy. On Roberta we only quantize `matmul` and `LayerNorm`, so we test both candidates. 
###Code for op in ["matmul", "layernorm"]: for name, module in model_q.named_modules(): if isinstance(module, quant_nn.TensorQuantizer): if op in name: module.enable_quant() else: module.disable_quant() print(op) trainer.evaluate() print("----") ###Output matmul {'eval_loss': 0.35049352049827576, 'eval_accuracy': 0.8658176260825268, 'eval_runtime': 26.1972, 'eval_samples_per_second': 374.659, 'eval_steps_per_second': 5.878} ---- layernorm {'eval_loss': 0.35847699642181396, 'eval_accuracy': 0.8597045338767193, 'eval_runtime': 24.3004, 'eval_samples_per_second': 403.903, 'eval_steps_per_second': 6.337} ---- ###Markdown It appears that `LayerNorm` quantization has a significant accuracy cost.Our goal is to disable quantization for as few operations as possible while preserving accuracy as much as possible. Therefore we will try to only disable quantization for `LayerNorm` on Layers 2 to 6. ###Code disable_layer_names = ["layer.2", "layer.3", "layer.4", "layer.6"] for name, module in model_q.named_modules(): if isinstance(module, quant_nn.TensorQuantizer): if any([f"{l}.output.layernorm" in name for l in disable_layer_names]): print(f"disable {name}") module.disable_quant() else: module.enable_quant() trainer.evaluate() ###Output disable roberta.encoder.layer.2.output.layernorm_quantizer_0 disable roberta.encoder.layer.2.output.layernorm_quantizer_1 disable roberta.encoder.layer.3.output.layernorm_quantizer_0 disable roberta.encoder.layer.3.output.layernorm_quantizer_1 disable roberta.encoder.layer.4.output.layernorm_quantizer_0 disable roberta.encoder.layer.4.output.layernorm_quantizer_1 disable roberta.encoder.layer.6.output.layernorm_quantizer_0 disable roberta.encoder.layer.6.output.layernorm_quantizer_1 {'eval_loss': 0.3660135269165039, 'eval_accuracy': 0.8618441161487519, 'eval_runtime': 45.9324, 'eval_samples_per_second': 213.684, 'eval_steps_per_second': 3.353} ###Markdown By just disabling quantization for a single operator on a few layers, we keep most of the performance boost (quantization) but retrieve more than 1 point of accuracy. It's also possible to perform an analysis per quantizer to get a smaller granularity but it's a bit slow to run.If we stop here, it's called a Post Training Quantization (PTQ). Below, we will try to retrieve even more accuracy. Quantization Aware Training (QAT) We retrain the model with 1/10 or 1/100 of the original learning rate. Our goal is to retrieve most of the original accuracy. ###Code args.learning_rate = 1e-7 trainer = get_trainer(model_q) trainer.train() print(trainer.evaluate()) model_q.save_pretrained("model-qat") ###Output [INFO|trainer.py:439] 2021-12-28 13:54:41,146 >> Using amp half precision backend ###Markdown Export a `QDQ Pytorch` model to `ONNX`We need to enable fake quantization mode from Pytorch. 
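The `convert_to_quant_onnx` helper called below presumably flips this switch internally; for reference, the manual route with `pytorch_quantization` and plain `torch.onnx.export` looks roughly like the sketch here (the output file name is hypothetical, and `dynamic_axes` are omitted for brevity):
###Code
import torch
from pytorch_quantization import nn as quant_nn

# Make TensorQuantizer export ONNX QuantizeLinear/DequantizeLinear pairs.
quant_nn.TensorQuantizer.use_fb_fake_quant = True

dummy_inputs = tuple(input_torch.values())  # reuses the `input_torch` dict built below
with torch.no_grad():
    torch.onnx.export(
        model_q,
        dummy_inputs,
        "model_qat_manual.onnx",  # hypothetical name, to avoid clobbering the helper's output
        opset_version=13,         # opset recommended for QDQ export
        input_names=list(input_torch.keys()),
        output_names=["logits"],
    )

quant_nn.TensorQuantizer.use_fb_fake_quant = False  # restore default behaviour
###Output
_____no_output_____
###Markdown In practice the cell below simply calls the library helper: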
###Code data = encoded_dataset["train"][1:3] input_torch = convert_tensor(data, output="torch") convert_to_quant_onnx(model_pytorch=model_q, output_path="model_qat.onnx", inputs_pytorch=input_torch) del model_q QATCalibrate.restore() ###Output _____no_output_____ ###Markdown Benchmark Convert `ONNX` graph to `TensorRT` engine ###Code engine = build_engine( runtime=runtime, onnx_file_path="model_qat.onnx", logger=trt_logger, min_shape=(1, max_seq_len), optimal_shape=(batch_size, max_seq_len), max_shape=(batch_size, max_seq_len), workspace_size=10000 * 1024 * 1024, fp16=True, int8=True, ) # same as above, but from the terminal # !/usr/src/tensorrt/bin/trtexec --onnx=model_qat.onnx --shapes=input_ids:32x256,attention_mask:32x256 --best --workspace=10000 --saveEngine="test.plan" ###Output _____no_output_____ ###Markdown Prepare input and output buffer ###Code stream: Stream = pycuda.driver.Stream() context: IExecutionContext = engine.create_execution_context() context.set_optimization_profile_async(profile_index=profile_index, stream_handle=stream.handle) input_binding_idxs, output_binding_idxs = get_binding_idxs(engine, profile_index) # type: List[int], List[int] data = encoded_dataset["train"][0:batch_size] input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") input_np: OD[str, np.ndarray] = convert_tensor(data=data, output="np") ###Output _____no_output_____ ###Markdown Inference on `TensorRT`We first check that inference is working correctly: ###Code tensorrt_output = infer_tensorrt( context=context, host_inputs=input_np, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, stream=stream, ) print(tensorrt_output) ###Output [array([[ 0.11111109, 2.9936233 , -2.5243347 ], [ 3.2135723 , -0.4374885 , -2.4485767 ], [ 2.1678474 , -1.1477091 , -0.7798154 ], [ 1.8148003 , -0.2093072 , -1.416711 ], [ 2.3070638 , 0.27601779, -2.2818418 ], [ 4.1799006 , -0.83163625, -2.8492923 ], [-3.695277 , 2.3409832 , 1.4314314 ], [ 4.1796045 , -1.0709951 , -2.6119678 ], [-0.44781622, -1.4288648 , 1.888488 ], [-2.9845483 , -1.5895646 , 4.117529 ], [ 3.9293122 , -0.68528754, -2.9477124 ], [-2.516609 , 0.34680495, 2.2793124 ], [-3.0710464 , 3.3439813 , 0.08079423], [-2.2859852 , 1.9546673 , 0.37908432], [ 0.3999826 , -1.0603418 , 0.5099453 ], [ 2.9247677 , -0.6867883 , -1.7499886 ], [ 4.1125493 , -0.7771612 , -2.986419 ], [-2.58058 , -2.3291597 , 4.553415 ], [-3.215447 , -1.3902456 , 4.2499046 ], [-2.014185 , 4.117433 , -1.634403 ], [ 4.051285 , -0.64716065, -2.9019048 ], [ 3.742484 , -0.07188296, -3.272956 ], [-3.302061 , -1.0159078 , 3.9711204 ], [ 3.9316242 , -0.33764294, -3.209711 ], [ 3.9900765 , -1.5201662 , -2.1166122 ], [-1.2437494 , 1.410141 , -0.10993958], [-3.1267605 , -0.8212991 , 3.6917076 ], [-2.0607114 , 4.1098857 , -1.4996963 ], [-3.5770578 , -0.736545 , 3.9671996 ], [ 3.776105 , -0.60771704, -2.8707912 ], [ 3.5450761 , -0.14414684, -2.9718893 ], [ 3.4713674 , 0.12106885, -3.189211 ]], dtype=float32)] ###Markdown Measure of the accuracy: ###Code infer_trt = lambda inputs: infer_tensorrt( context=context, host_inputs=inputs, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, stream=stream, ) measure_accuracy(infer=infer_trt, int64=False) ###Output _____no_output_____ ###Markdown Latency measures: ###Code time_buffer = list() for _ in range(100): with track_infer_time(time_buffer): _ = infer_tensorrt( context=context, host_inputs=input_np, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, 
stream=stream, ) print_timings(name="TensorRT (INT-8)", timings=time_buffer) del engine, context ###Output _____no_output_____ ###Markdown Pytorch baselineTime to get some numbers to compare with. GPU executionWe will measure vanilla Pytorch inference on both FP32 and FP16 precision on GPU, it will be our baseline: ###Code baseline_model = AutoModelForSequenceClassification.from_pretrained("model_trained_fp16", num_labels=num_labels) baseline_model = baseline_model.cuda() baseline_model = baseline_model.eval() data = encoded_dataset["train"][0:batch_size] input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") with torch.inference_mode(): for _ in range(30): _ = baseline_model(**input_torch) torch.cuda.synchronize() time_buffer = list() for _ in range(100): with track_infer_time(time_buffer): _ = baseline_model(**input_torch) torch.cuda.synchronize() print_timings(name="Pytorch (FP32)", timings=time_buffer) with torch.inference_mode(): with torch.cuda.amp.autocast(): for _ in range(30): _ = baseline_model(**input_torch) torch.cuda.synchronize() time_buffer = [] for _ in range(100): with track_infer_time(time_buffer): _ = baseline_model(**input_torch) torch.cuda.synchronize() print_timings(name="Pytorch (FP16)", timings=time_buffer) del baseline_model ###Output [Pytorch (FP16)] mean=56.24ms, sd=0.67ms, min=55.53ms, max=59.61ms, median=56.05ms, 95p=57.80ms, 99p=58.18ms ###Markdown CPU execution ###Code baseline_model = AutoModelForSequenceClassification.from_pretrained("model_trained_fp16", num_labels=num_labels) baseline_model = baseline_model.eval() data = encoded_dataset["train"][0:batch_size] input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") input_torch_cpu = {k: v.to("cpu") for k, v in input_torch.items()} torch.set_num_threads(os.cpu_count()) with torch.inference_mode(): for _ in range(3): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() time_buffer = list() for _ in range(10): with track_infer_time(time_buffer): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() print_timings(name="Pytorch (FP32) - CPU", timings=time_buffer) with torch.inference_mode(): with torch.cuda.amp.autocast(): for _ in range(3): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() time_buffer = [] for _ in range(10): with track_infer_time(time_buffer): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() print_timings(name="Pytorch (FP16) - CPU", timings=time_buffer) del baseline_model ###Output [Pytorch (FP16) - CPU] mean=4428.94ms, sd=225.39ms, min=4148.26ms, max=4871.84ms, median=4404.70ms, 95p=4781.81ms, 99p=4853.83ms ###Markdown Below, we will perform dynamic quantization on CPU. 
###Code quantized_baseline_model = AutoModelForSequenceClassification.from_pretrained( "model_trained_fp16", num_labels=num_labels ) quantized_baseline_model = quantized_baseline_model.eval() quantized_baseline_model = torch.quantization.quantize_dynamic( quantized_baseline_model, {torch.nn.Linear}, dtype=torch.qint8 ) with torch.inference_mode(): for _ in range(3): _ = quantized_baseline_model(**input_torch_cpu) torch.cuda.synchronize() time_buffer = list() for _ in range(10): with track_infer_time(time_buffer): _ = quantized_baseline_model(**input_torch_cpu) torch.cuda.synchronize() print_timings(name="Pytorch (INT-8) - CPU", timings=time_buffer) ###Output [Pytorch (INT-8) - CPU] mean=3299.66ms, sd=37.76ms, min=3274.33ms, max=3405.91ms, median=3285.20ms, 95p=3366.88ms, 99p=3398.10ms ###Markdown TensorRT baseline Below we export our finetuned model, the purpose is to only check the performance on mixed precision (FP16, no quantization). ###Code baseline_model = AutoModelForSequenceClassification.from_pretrained("model_trained_fp16", num_labels=num_labels) baseline_model = baseline_model.cuda() convert_to_onnx(baseline_model, output_path="baseline.onnx", inputs_pytorch=input_torch, opset=12) del baseline_model engine = build_engine( runtime=runtime, onnx_file_path="baseline.onnx", logger=trt_logger, min_shape=(batch_size, max_seq_len), optimal_shape=(batch_size, max_seq_len), max_shape=(batch_size, max_seq_len), workspace_size=10000 * 1024 * 1024, fp16=True, int8=False, ) input_np: OD[str, np.ndarray] = convert_tensor(data=data, output="np") stream: Stream = pycuda.driver.Stream() context: IExecutionContext = engine.create_execution_context() context.set_optimization_profile_async(profile_index=profile_index, stream_handle=stream.handle) input_binding_idxs, output_binding_idxs = get_binding_idxs(engine, profile_index) # type: List[int], List[int] for _ in range(30): _ = infer_tensorrt( context=context, host_inputs=input_np, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, stream=stream, ) time_buffer = list() for _ in range(100): with track_infer_time(time_buffer): _ = infer_tensorrt( context=context, host_inputs=input_np, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, stream=stream, ) print_timings(name="TensorRT (FP16)", timings=time_buffer) del engine, context ###Output [TensorRT (FP16)] mean=29.90ms, sd=0.82ms, min=29.30ms, max=33.41ms, median=29.69ms, 95p=31.85ms, 99p=32.79ms ###Markdown ONNX Runtime baselineONNX Runtime is the go to inference solution from Microsoft.The recent 1.10 version of ONNX Runtime (with TensorRT support) is still a bit buggy on transformer models, that is why we use the 1.9.0 version in the measures below.As before, CPU quantization is dynamic.Function `` will set ONNX Runtime to use all cores available and enable any possible optimizations. 
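The function name is missing from the sentence above; whichever helper it refers to, "use all cores and enable all optimizations" expressed directly with the `onnxruntime` API usually looks like the sketch below (illustrative only, reusing the `baseline.onnx` file exported earlier):
###Code
import multiprocessing

import onnxruntime as ort

options = ort.SessionOptions()
# Use every available core for intra-operator parallelism.
options.intra_op_num_threads = multiprocessing.cpu_count()
# Enable all graph-level optimizations (constant folding, node fusions, ...).
options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

session = ort.InferenceSession(
    "baseline.onnx",  # exported in the TensorRT baseline section above
    sess_options=options,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())
###Output
_____no_output_____
###Markdown The cells below rely on the library helpers instead: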
###Code optimize_onnx( onnx_path="baseline.onnx", onnx_optim_model_path="baseline-optimized.onnx", fp16=True, use_cuda=True, ) cpu_quantization(input_model_path="baseline-optimized.onnx", output_model_path="baseline-quantized.onnx") labels = [item["label"] for item in encoded_dataset[validation_key]] data = encoded_dataset[validation_key][0:batch_size] inputs_onnx: OD[str, np.ndarray] = convert_tensor(data=data, output="np") for k, v in inputs_onnx.items(): inputs_onnx[k] = v.astype(np.int64) model = create_model_for_provider(path="baseline-optimized.onnx", provider_to_use="CUDAExecutionProvider") output = model.run(None, inputs_onnx) data = encoded_dataset["train"][0:batch_size] inputs_onnx: OD[str, np.ndarray] = convert_tensor(data=data, output="np") for k, v in inputs_onnx.items(): inputs_onnx[k] = v.astype(np.int64) for provider, model_path, benchmark_name, warmup, nb_inference in [ ("CUDAExecutionProvider", "baseline.onnx", "ONNX Runtime GPU (FP32)", 10, 100), ("CUDAExecutionProvider", "baseline-optimized.onnx", "ONNX Runtime GPU (FP16)", 10, 100), ("CPUExecutionProvider", "baseline.onnx", "ONNX Runtime CPU (FP32)", 3, 10), ("CPUExecutionProvider", "baseline-optimized.onnx", "ONNX Runtime CPU (FP16)", 3, 10), ("CPUExecutionProvider", "baseline-quantized.onnx", "ONNX Runtime CPU (INT-8)", 3, 10), ]: model = create_model_for_provider(path=model_path, provider_to_use=provider) for _ in range(warmup): _ = model.run(None, inputs_onnx) time_buffer = [] for _ in range(nb_inference): with track_infer_time(time_buffer): _ = model.run(None, inputs_onnx) print_timings(name=benchmark_name, timings=time_buffer) del model ###Output [ONNX Runtime GPU (FP32)] mean=76.38ms, sd=4.99ms, min=73.10ms, max=91.05ms, median=73.91ms, 95p=88.30ms, 99p=89.42ms [ONNX Runtime GPU (FP16)] mean=34.21ms, sd=1.68ms, min=33.23ms, max=41.80ms, median=33.70ms, 95p=38.87ms, 99p=40.63ms [ONNX Runtime CPU (FP32)] mean=4023.32ms, sd=92.76ms, min=3895.51ms, max=4267.63ms, median=4013.27ms, 95p=4170.44ms, 99p=4248.19ms [ONNX Runtime CPU (FP16)] mean=3956.61ms, sd=167.65ms, min=3709.88ms, max=4188.62ms, median=3914.53ms, 95p=4180.81ms, 99p=4187.06ms [ONNX Runtime CPU (INT-8)] mean=3336.29ms, sd=168.96ms, min=3170.64ms, max=3765.07ms, median=3299.52ms, 95p=3641.01ms, 99p=3740.26ms ###Markdown Measure of the accuracy with ONNX Runtime engine and CUDA provider: ###Code model = create_model_for_provider(path="baseline.onnx", provider_to_use="CUDAExecutionProvider") infer_ort = lambda tokens: model.run(None, tokens) measure_accuracy(infer=infer_ort, int64=True) model = create_model_for_provider(path="baseline-optimized.onnx", provider_to_use="CUDAExecutionProvider") infer_ort = lambda tokens: model.run(None, tokens) measure_accuracy(infer=infer_ort, int64=True) model = create_model_for_provider(path="baseline-quantized.onnx", provider_to_use="CPUExecutionProvider") infer_ort = lambda tokens: model.run(None, tokens) measure_accuracy(infer=infer_ort, int64=True) del model ###Output _____no_output_____ ###Markdown Nvidia GPU INT-8 quantization on any transformers model (encoder based) For some context and explanations, please check our documentation here: [https://els-rd.github.io/transformer-deploy/quantization/quantization_intro/](https://els-rd.github.io/transformer-deploy/quantization/quantization_intro/). Project setup Dependencies installation Your machine should have Nvidia CUDA 11.X, TensorRT 8.2.1 and cuBLAS installed. 
It's said to be tricky to install, in my experience, just follow Nvidia download page instructions **and nothing else**, it should work out of the box. Nvidia Docker image could be a good choice too. ###Code #! pip3 install git+ssh://[email protected]/ELS-RD/transformer-deploy #! pip3 install datasets sklearn #! pip3 install git+ssh://[email protected]/NVIDIA/TensorRT#egg=pytorch-quantization\&subdirectory=tools/pytorch-quantization/ ###Output _____no_output_____ ###Markdown Check the GPU is enabled and usable. ###Code ! nvidia-smi import logging import os from collections import OrderedDict from typing import Dict, List from typing import OrderedDict as OD from typing import Union import datasets import numpy as np import tensorrt as trt import torch import transformers from datasets import load_dataset, load_metric from tensorrt.tensorrt import IExecutionContext, Logger, Runtime from transformers import ( AutoModelForSequenceClassification, AutoTokenizer, IntervalStrategy, PreTrainedModel, PreTrainedTokenizer, Trainer, TrainingArguments, ) from transformer_deploy.backends.ort_utils import ( cpu_quantization, create_model_for_provider, optimize_onnx, ) from transformer_deploy.backends.pytorch_utils import convert_to_onnx from transformer_deploy.backends.trt_utils import build_engine, get_binding_idxs, infer_tensorrt from transformer_deploy.benchmarks.utils import print_timings, track_infer_time from transformer_deploy.QDQModels.calibration_utils import QATCalibrate ###Output _____no_output_____ ###Markdown Set logging to `error` level to ease readability of this `notebook` on Github. ###Code log_level = logging.ERROR logging.getLogger().setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() trt_logger: Logger = trt.Logger(trt.Logger.ERROR) transformers.logging.set_verbosity_error() ###Output _____no_output_____ ###Markdown Preprocess data This part is inspired from an [official Notebooks from Hugging Face](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb).There is nothing special to do. 
Define the task: ###Code model_name = "roberta-base" task = "mnli" num_labels = 3 batch_size = 32 max_seq_len = 256 validation_key = "validation_matched" timings: Dict[str, List[float]] = dict() runtime: Runtime = trt.Runtime(trt_logger) profile_index = 0 ###Output _____no_output_____ ###Markdown Preprocess data (task specific): ###Code def preprocess_function(examples): return tokenizer( examples["premise"], examples["hypothesis"], truncation=True, padding="max_length", max_length=max_seq_len ) def compute_metrics(eval_pred): predictions, labels = eval_pred if task != "stsb": predictions = np.argmax(predictions, axis=1) else: predictions = predictions[:, 0] return metric.compute(predictions=predictions, references=labels) def convert_tensor(data: OD[str, List[List[int]]], output: str) -> OD[str, Union[np.ndarray, torch.Tensor]]: input: OD[str, Union[np.ndarray, torch.Tensor]] = OrderedDict() for k in ["input_ids", "attention_mask", "token_type_ids"]: if k in data: v = data[k] if output == "torch": value = torch.tensor(v, dtype=torch.long, device="cuda") elif output == "np": value = np.asarray(v, dtype=np.int32) else: raise Exception(f"unknown output type: {output}") input[k] = value return input def measure_accuracy(infer, int64: bool) -> float: outputs = list() for start_index in range(0, len(encoded_dataset[validation_key]), batch_size): end_index = start_index + batch_size data = encoded_dataset[validation_key][start_index:end_index] inputs: OD[str, np.ndarray] = convert_tensor(data=data, output="np") if int64: for k, v in inputs.items(): inputs[k] = v.astype(np.int64) output = infer(inputs) output = np.argmax(output[0], axis=1).astype(int).tolist() outputs.extend(output) return np.mean(np.array(outputs) == np.array(validation_labels)) def get_trainer(model: PreTrainedModel) -> Trainer: trainer = Trainer( model, args, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset[validation_key], tokenizer=tokenizer, compute_metrics=compute_metrics, ) transformers.logging.set_verbosity_error() return trainer tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) dataset = load_dataset("glue", task) metric = load_metric("glue", task) encoded_dataset = dataset.map(preprocess_function, batched=True) validation_labels = [item["label"] for item in encoded_dataset[validation_key]] nb_step = 1000 strategy = IntervalStrategy.STEPS args = TrainingArguments( f"{model_name}-{task}", evaluation_strategy=strategy, eval_steps=nb_step, logging_steps=nb_step, save_steps=nb_step, save_strategy=strategy, learning_rate=1e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size * 2, num_train_epochs=1, fp16=True, group_by_length=True, weight_decay=0.01, load_best_model_at_end=True, metric_for_best_model="accuracy", report_to=[], ) ###Output _____no_output_____ ###Markdown (Standard) fine-tuning modelNow that our data are ready, we can download/fine tune the pretrained model. ###Code model_fp16: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels) trainer = get_trainer(model_fp16) transformers.logging.set_verbosity_error() trainer.train() print(trainer.evaluate()) model_fp16.save_pretrained("model_trained_fp16") ###Output [INFO|trainer.py:439] 2021-12-27 09:19:51,063 >> Using amp half precision backend ###Markdown Add quantization support to any modelThe idea is to take the source code of a specific model and add automatically `QDQ` nodes. 
QDQ nodes will be placed before and after an operation that we want to quantize, that’s inside these nodes that the information to perform the mapping between high precision and low precision number is stored.If you want to know more, check our documentation on: [https://els-rd.github.io/transformer-deploy/quantization/quantization_ast/](https://els-rd.github.io/transformer-deploy/quantization/quantization_ast/) ###Code for percentile in [99.9, 99.99, 99.999, 99.9999]: with QATCalibrate(method="histogram", percentile=percentile) as qat: model_q: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained( "model_trained_fp16", num_labels=num_labels ) model_q = model_q.cuda() qat.setup_model_qat(model_q) # prepare quantizer to any model with torch.no_grad(): for start_index in range(0, 128, batch_size): end_index = start_index + batch_size data = encoded_dataset["train"][start_index:end_index] input_torch = { k: torch.tensor(v, dtype=torch.long, device="cuda") for k, v in data.items() if k in ["input_ids", "attention_mask", "token_type_ids"] } model_q(**input_torch) trainer = get_trainer(model_q) print(f"percentile: {percentile}") print(trainer.evaluate()) ###Output [INFO|trainer.py:439] 2021-12-27 17:25:51,070 >> Using amp half precision backend ###Markdown As you can see, the chosen percentile value has a high impact on the final accuracy.For the rest of the notebook, we apply the `99.999` percentile. ###Code with QATCalibrate(method="histogram", percentile=99.999) as qat: model_q: PreTrainedModel = AutoModelForSequenceClassification.from_pretrained( "model_trained_fp16", num_labels=num_labels ) model_q = model_q.cuda() qat.setup_model_qat(model_q) # prepare quantizer to any model with torch.no_grad(): for start_index in range(0, 128, batch_size): end_index = start_index + batch_size data = encoded_dataset["train"][start_index:end_index] input_torch = { k: torch.tensor(v, dtype=torch.long, device="cuda") for k, v in data.items() if k in ["input_ids", "attention_mask", "token_type_ids"] } model_q(**input_torch) trainer = get_trainer(model_q) print(trainer.evaluate()) ###Output [INFO|trainer.py:439] 2021-12-28 13:52:09,215 >> Using amp half precision backend ###Markdown Per layer quantization analysisBelow we will run a sensitivity analysis, by enabling quantization of one layer at a time and measuring the accuracy. That way we will be able to detect if the quantization of a specific layer has a larger cost on accuracy than other layers. 
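To make the link between the percentile searched above and the QDQ nodes concrete, here is a tiny self-contained NumPy illustration of what a QuantizeLinear → DequantizeLinear pair computes and the error it introduces (simplified per-tensor symmetric scheme, not the library's exact code):
###Code
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000).astype(np.float32)  # stand-in for an activation tensor

amax = np.quantile(np.abs(x), 0.999)  # calibration: 99.9th percentile of |x|
scale = amax / 127.0

# What a QuantizeLinear -> DequantizeLinear pair does during fake quantization:
q = np.clip(np.round(x / scale), -127, 127)
x_hat = q * scale

print("scale:", scale)
print("max error inside the clipping range:", np.abs(x - x_hat)[np.abs(x) <= amax].max())
print("worst clipping error on outliers:", np.abs(x - x_hat).max())
###Output
_____no_output_____
###Markdown Back to the per layer sensitivity analysis: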
###Code from pytorch_quantization import nn as quant_nn for i in range(12): layer_name = f"layer.{i}" print(layer_name) for name, module in model_q.named_modules(): if isinstance(module, quant_nn.TensorQuantizer): if layer_name in name: module.enable_quant() else: module.disable_quant() trainer.evaluate() print("----") ###Output layer.0 {'eval_loss': 0.35163024067878723, 'eval_accuracy': 0.8663270504330107, 'eval_runtime': 20.695, 'eval_samples_per_second': 474.27, 'eval_steps_per_second': 7.441} ---- layer.1 {'eval_loss': 0.3527306318283081, 'eval_accuracy': 0.8661232806928171, 'eval_runtime': 26.1334, 'eval_samples_per_second': 375.573, 'eval_steps_per_second': 5.893} ---- layer.2 {'eval_loss': 0.3557673394680023, 'eval_accuracy': 0.8629648497198166, 'eval_runtime': 21.1364, 'eval_samples_per_second': 464.366, 'eval_steps_per_second': 7.286} ---- layer.3 {'eval_loss': 0.3551430106163025, 'eval_accuracy': 0.8649006622516556, 'eval_runtime': 20.9252, 'eval_samples_per_second': 469.051, 'eval_steps_per_second': 7.36} ---- layer.4 {'eval_loss': 0.35053929686546326, 'eval_accuracy': 0.8649006622516556, 'eval_runtime': 21.05, 'eval_samples_per_second': 466.271, 'eval_steps_per_second': 7.316} ---- layer.5 {'eval_loss': 0.35701483488082886, 'eval_accuracy': 0.865206316861946, 'eval_runtime': 20.9236, 'eval_samples_per_second': 469.088, 'eval_steps_per_second': 7.36} ---- layer.6 {'eval_loss': 0.35283517837524414, 'eval_accuracy': 0.8649006622516556, 'eval_runtime': 20.8179, 'eval_samples_per_second': 471.469, 'eval_steps_per_second': 7.397} ---- layer.7 {'eval_loss': 0.35288652777671814, 'eval_accuracy': 0.866632705043301, 'eval_runtime': 20.7823, 'eval_samples_per_second': 472.277, 'eval_steps_per_second': 7.41} ---- layer.8 {'eval_loss': 0.35080182552337646, 'eval_accuracy': 0.8672440142638819, 'eval_runtime': 20.737, 'eval_samples_per_second': 473.308, 'eval_steps_per_second': 7.426} ---- layer.9 {'eval_loss': 0.3503498136997223, 'eval_accuracy': 0.8673458991339786, 'eval_runtime': 20.8899, 'eval_samples_per_second': 469.843, 'eval_steps_per_second': 7.372} ---- layer.10 {'eval_loss': 0.3510246276855469, 'eval_accuracy': 0.8658176260825268, 'eval_runtime': 20.8428, 'eval_samples_per_second': 470.905, 'eval_steps_per_second': 7.389} ---- layer.11 {'eval_loss': 0.3509054183959961, 'eval_accuracy': 0.8656138563423331, 'eval_runtime': 20.8451, 'eval_samples_per_second': 470.853, 'eval_steps_per_second': 7.388} ---- ###Markdown It seems that quantization of layers 2 to 6 has the largest accuracy impact. Operator quantization analysisBelow we will run a sensitivity analysis, by enabling quantization of one operator type at a time and measuring the accuracy. That way we will be able to detect if a specific operator has a larger cost on accuracy. On Roberta we only quantize `matmul` and `LayerNorm`, so we test both candidates. 
###Code for op in ["matmul", "layernorm"]: for name, module in model_q.named_modules(): if isinstance(module, quant_nn.TensorQuantizer): if op in name: module.enable_quant() else: module.disable_quant() print(op) trainer.evaluate() print("----") ###Output matmul {'eval_loss': 0.35049352049827576, 'eval_accuracy': 0.8658176260825268, 'eval_runtime': 26.1972, 'eval_samples_per_second': 374.659, 'eval_steps_per_second': 5.878} ---- layernorm {'eval_loss': 0.35847699642181396, 'eval_accuracy': 0.8597045338767193, 'eval_runtime': 24.3004, 'eval_samples_per_second': 403.903, 'eval_steps_per_second': 6.337} ---- ###Markdown It appears that `LayerNorm` quantization has a significant accuracy cost.Our goal is to disable quantization for as few operations as possible while preserving accuracy as much as possible. Therefore we will try to only disable quantization for `LayerNorm` on Layers 2 to 6. ###Code disable_layer_names = ["layer.2", "layer.3", "layer.4", "layer.6"] for name, module in model_q.named_modules(): if isinstance(module, quant_nn.TensorQuantizer): if any([f"{l}.output.layernorm" in name for l in disable_layer_names]): print(f"disable {name}") module.disable_quant() else: module.enable_quant() trainer.evaluate() ###Output disable roberta.encoder.layer.2.output.layernorm_quantizer_0 disable roberta.encoder.layer.2.output.layernorm_quantizer_1 disable roberta.encoder.layer.3.output.layernorm_quantizer_0 disable roberta.encoder.layer.3.output.layernorm_quantizer_1 disable roberta.encoder.layer.4.output.layernorm_quantizer_0 disable roberta.encoder.layer.4.output.layernorm_quantizer_1 disable roberta.encoder.layer.6.output.layernorm_quantizer_0 disable roberta.encoder.layer.6.output.layernorm_quantizer_1 {'eval_loss': 0.3660135269165039, 'eval_accuracy': 0.8618441161487519, 'eval_runtime': 45.9324, 'eval_samples_per_second': 213.684, 'eval_steps_per_second': 3.353} ###Markdown By just disabling quantization for a single operator on a few layers, we keep most of the performance boost (quantization) but retrieve more than 1 point of accuracy. It's also possible to perform an analysis per quantizer to get a smaller granularity but it's a bit slow to run.If we stop here, it's called a Post Training Quantization (PTQ). Below, we will try to retrieve even more accuracy. Quantization Aware Training (QAT) We retrain the model with 1/10 or 1/100 of the original learning rate. Our goal is to retrieve most of the original accuracy. ###Code args.learning_rate = 1e-7 trainer = get_trainer(model_q) trainer.train() print(trainer.evaluate()) model_q.save_pretrained("model-qat") ###Output [INFO|trainer.py:439] 2021-12-28 13:54:41,146 >> Using amp half precision backend ###Markdown Export a `QDQ Pytorch` model to `ONNX`We need to enable fake quantization mode from Pytorch. 
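Once the export cell below has produced `model_qat.onnx`, a quick sanity check is to count the QuantizeLinear/DequantizeLinear nodes that actually ended up in the graph (plain `onnx` API, nothing specific to this library):
###Code
from collections import Counter

import onnx

onnx_model = onnx.load("model_qat.onnx")  # file produced by the export cell below
op_counts = Counter(node.op_type for node in onnx_model.graph.node)
print("QuantizeLinear:", op_counts["QuantizeLinear"])
print("DequantizeLinear:", op_counts["DequantizeLinear"])
###Output
_____no_output_____
###Markdown The export itself: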
###Code data = encoded_dataset["train"][1:3] input_torch = convert_tensor(data, output="torch") convert_to_onnx( model_pytorch=model_q, output_path="model_qat.onnx", inputs_pytorch=input_torch, quantization=True, var_output_seq=False, ) del model_q QATCalibrate.restore() ###Output _____no_output_____ ###Markdown Benchmark Convert `ONNX` graph to `TensorRT` engine ###Code engine = build_engine( runtime=runtime, onnx_file_path="model_qat.onnx", logger=trt_logger, min_shape=(1, max_seq_len), optimal_shape=(batch_size, max_seq_len), max_shape=(batch_size, max_seq_len), workspace_size=10000 * 1024 * 1024, fp16=True, int8=True, ) # same as above, but from the terminal # !/usr/src/tensorrt/bin/trtexec --onnx=model_qat.onnx --shapes=input_ids:32x256,attention_mask:32x256 --best --workspace=10000 --saveEngine="test.plan" ###Output _____no_output_____ ###Markdown Prepare input and output buffer ###Code context: IExecutionContext = engine.create_execution_context() context.set_optimization_profile_async( profile_index=profile_index, stream_handle=torch.cuda.current_stream().cuda_stream ) input_binding_idxs, output_binding_idxs = get_binding_idxs(engine, profile_index) # type: List[int], List[int] data = encoded_dataset["train"][0:batch_size] input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") input_np: OD[str, np.ndarray] = convert_tensor(data=data, output="np") ###Output _____no_output_____ ###Markdown Inference on `TensorRT`We first check that inference is working correctly: ###Code tensorrt_output = infer_tensorrt( context=context, host_inputs=input_np, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) print(tensorrt_output) ###Output [array([[ 0.11111109, 2.9936233 , -2.5243347 ], [ 3.2135723 , -0.4374885 , -2.4485767 ], [ 2.1678474 , -1.1477091 , -0.7798154 ], [ 1.8148003 , -0.2093072 , -1.416711 ], [ 2.3070638 , 0.27601779, -2.2818418 ], [ 4.1799006 , -0.83163625, -2.8492923 ], [-3.695277 , 2.3409832 , 1.4314314 ], [ 4.1796045 , -1.0709951 , -2.6119678 ], [-0.44781622, -1.4288648 , 1.888488 ], [-2.9845483 , -1.5895646 , 4.117529 ], [ 3.9293122 , -0.68528754, -2.9477124 ], [-2.516609 , 0.34680495, 2.2793124 ], [-3.0710464 , 3.3439813 , 0.08079423], [-2.2859852 , 1.9546673 , 0.37908432], [ 0.3999826 , -1.0603418 , 0.5099453 ], [ 2.9247677 , -0.6867883 , -1.7499886 ], [ 4.1125493 , -0.7771612 , -2.986419 ], [-2.58058 , -2.3291597 , 4.553415 ], [-3.215447 , -1.3902456 , 4.2499046 ], [-2.014185 , 4.117433 , -1.634403 ], [ 4.051285 , -0.64716065, -2.9019048 ], [ 3.742484 , -0.07188296, -3.272956 ], [-3.302061 , -1.0159078 , 3.9711204 ], [ 3.9316242 , -0.33764294, -3.209711 ], [ 3.9900765 , -1.5201662 , -2.1166122 ], [-1.2437494 , 1.410141 , -0.10993958], [-3.1267605 , -0.8212991 , 3.6917076 ], [-2.0607114 , 4.1098857 , -1.4996963 ], [-3.5770578 , -0.736545 , 3.9671996 ], [ 3.776105 , -0.60771704, -2.8707912 ], [ 3.5450761 , -0.14414684, -2.9718893 ], [ 3.4713674 , 0.12106885, -3.189211 ]], dtype=float32)] ###Markdown Measure of the accuracy: ###Code infer_trt = lambda inputs: infer_tensorrt( context=context, host_inputs=inputs, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) measure_accuracy(infer=infer_trt, int64=False) ###Output _____no_output_____ ###Markdown Latency measures: ###Code time_buffer = list() for _ in range(100): with track_infer_time(time_buffer): _ = infer_tensorrt( context=context, host_inputs=input_np, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) 
print_timings(name="TensorRT (INT-8)", timings=time_buffer) del engine, context ###Output _____no_output_____ ###Markdown Pytorch baselineTime to get some numbers to compare with. GPU executionWe will measure vanilla Pytorch inference on both FP32 and FP16 precision on GPU, it will be our baseline: ###Code baseline_model = AutoModelForSequenceClassification.from_pretrained("model_trained_fp16", num_labels=num_labels) baseline_model = baseline_model.cuda() baseline_model = baseline_model.eval() data = encoded_dataset["train"][0:batch_size] input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") with torch.inference_mode(): for _ in range(30): _ = baseline_model(**input_torch) torch.cuda.synchronize() time_buffer = list() for _ in range(100): with track_infer_time(time_buffer): _ = baseline_model(**input_torch) torch.cuda.synchronize() print_timings(name="Pytorch (FP32)", timings=time_buffer) with torch.inference_mode(): with torch.cuda.amp.autocast(): for _ in range(30): _ = baseline_model(**input_torch) torch.cuda.synchronize() time_buffer = [] for _ in range(100): with track_infer_time(time_buffer): _ = baseline_model(**input_torch) torch.cuda.synchronize() print_timings(name="Pytorch (FP16)", timings=time_buffer) del baseline_model ###Output [Pytorch (FP16)] mean=56.24ms, sd=0.67ms, min=55.53ms, max=59.61ms, median=56.05ms, 95p=57.80ms, 99p=58.18ms ###Markdown CPU execution ###Code baseline_model = AutoModelForSequenceClassification.from_pretrained("model_trained_fp16", num_labels=num_labels) baseline_model = baseline_model.eval() data = encoded_dataset["train"][0:batch_size] input_torch: OD[str, torch.Tensor] = convert_tensor(data=data, output="torch") input_torch_cpu = {k: v.to("cpu") for k, v in input_torch.items()} torch.set_num_threads(os.cpu_count()) with torch.inference_mode(): for _ in range(3): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() time_buffer = list() for _ in range(10): with track_infer_time(time_buffer): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() print_timings(name="Pytorch (FP32) - CPU", timings=time_buffer) with torch.inference_mode(): with torch.cuda.amp.autocast(): for _ in range(3): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() time_buffer = [] for _ in range(10): with track_infer_time(time_buffer): _ = baseline_model(**input_torch_cpu) torch.cuda.synchronize() print_timings(name="Pytorch (FP16) - CPU", timings=time_buffer) del baseline_model ###Output [Pytorch (FP16) - CPU] mean=4428.94ms, sd=225.39ms, min=4148.26ms, max=4871.84ms, median=4404.70ms, 95p=4781.81ms, 99p=4853.83ms ###Markdown Below, we will perform dynamic quantization on CPU. 
###Code quantized_baseline_model = AutoModelForSequenceClassification.from_pretrained( "model_trained_fp16", num_labels=num_labels ) quantized_baseline_model = quantized_baseline_model.eval() quantized_baseline_model = torch.quantization.quantize_dynamic( quantized_baseline_model, {torch.nn.Linear}, dtype=torch.qint8 ) with torch.inference_mode(): for _ in range(3): _ = quantized_baseline_model(**input_torch_cpu) torch.cuda.synchronize() time_buffer = list() for _ in range(10): with track_infer_time(time_buffer): _ = quantized_baseline_model(**input_torch_cpu) torch.cuda.synchronize() print_timings(name="Pytorch (INT-8) - CPU", timings=time_buffer) ###Output [Pytorch (INT-8) - CPU] mean=3299.66ms, sd=37.76ms, min=3274.33ms, max=3405.91ms, median=3285.20ms, 95p=3366.88ms, 99p=3398.10ms ###Markdown TensorRT baseline Below we export our finetuned model, the purpose is to only check the performance on mixed precision (FP16, no quantization). ###Code baseline_model = AutoModelForSequenceClassification.from_pretrained("model_trained_fp16", num_labels=num_labels) baseline_model = baseline_model.cuda() convert_to_onnx( baseline_model, output_path="baseline.onnx", inputs_pytorch=input_torch, quantization=False, var_output_seq=False ) del baseline_model engine = build_engine( runtime=runtime, onnx_file_path="baseline.onnx", logger=trt_logger, min_shape=(batch_size, max_seq_len), optimal_shape=(batch_size, max_seq_len), max_shape=(batch_size, max_seq_len), workspace_size=10000 * 1024 * 1024, fp16=True, int8=False, ) input_np: OD[str, np.ndarray] = convert_tensor(data=data, output="np") context: IExecutionContext = engine.create_execution_context() context.set_optimization_profile_async( profile_index=profile_index, stream_handle=torch.cuda.current_stream().cuda_stream ) input_binding_idxs, output_binding_idxs = get_binding_idxs(engine, profile_index) # type: List[int], List[int] for _ in range(30): _ = infer_tensorrt( context=context, host_inputs=input_np, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) time_buffer = list() for _ in range(100): with track_infer_time(time_buffer): _ = infer_tensorrt( context=context, host_inputs=input_np, input_binding_idxs=input_binding_idxs, output_binding_idxs=output_binding_idxs, ) print_timings(name="TensorRT (FP16)", timings=time_buffer) del engine, context ###Output [TensorRT (FP16)] mean=29.90ms, sd=0.82ms, min=29.30ms, max=33.41ms, median=29.69ms, 95p=31.85ms, 99p=32.79ms ###Markdown ONNX Runtime baselineONNX Runtime is the go to inference solution from Microsoft.The recent 1.10 version of ONNX Runtime (with TensorRT support) is still a bit buggy on transformer models, that is why we use the 1.9.0 version in the measures below.As before, CPU quantization is dynamic.Function `` will set ONNX Runtime to use all cores available and enable any possible optimizations. 
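Before timing the different providers in the following cells, it can be worth confirming which execution providers this ONNX Runtime build actually exposes (the GPU benchmarks below assume `CUDAExecutionProvider` is available):
###Code
import onnxruntime as ort

print(ort.__version__)                # the text above recommends the 1.9.0 release
print(ort.get_available_providers())  # should include CUDAExecutionProvider for the GPU runs
###Output
_____no_output_____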
###Code optimize_onnx( onnx_path="baseline.onnx", onnx_optim_model_path="baseline-optimized.onnx", fp16=True, use_cuda=True, ) cpu_quantization(input_model_path="baseline-optimized.onnx", output_model_path="baseline-quantized.onnx") labels = [item["label"] for item in encoded_dataset[validation_key]] data = encoded_dataset[validation_key][0:batch_size] inputs_onnx: OD[str, np.ndarray] = convert_tensor(data=data, output="np") for k, v in inputs_onnx.items(): inputs_onnx[k] = v.astype(np.int64) model = create_model_for_provider(path="baseline-optimized.onnx", provider_to_use="CUDAExecutionProvider") output = model.run(None, inputs_onnx) data = encoded_dataset["train"][0:batch_size] inputs_onnx: OD[str, np.ndarray] = convert_tensor(data=data, output="np") for k, v in inputs_onnx.items(): inputs_onnx[k] = v.astype(np.int64) for provider, model_path, benchmark_name, warmup, nb_inference in [ ("CUDAExecutionProvider", "baseline.onnx", "ONNX Runtime GPU (FP32)", 10, 100), ("CUDAExecutionProvider", "baseline-optimized.onnx", "ONNX Runtime GPU (FP16)", 10, 100), ("CPUExecutionProvider", "baseline.onnx", "ONNX Runtime CPU (FP32)", 3, 10), ("CPUExecutionProvider", "baseline-optimized.onnx", "ONNX Runtime CPU (FP16)", 3, 10), ("CPUExecutionProvider", "baseline-quantized.onnx", "ONNX Runtime CPU (INT-8)", 3, 10), ]: model = create_model_for_provider(path=model_path, provider_to_use=provider) for _ in range(warmup): _ = model.run(None, inputs_onnx) time_buffer = [] for _ in range(nb_inference): with track_infer_time(time_buffer): _ = model.run(None, inputs_onnx) print_timings(name=benchmark_name, timings=time_buffer) del model ###Output [ONNX Runtime GPU (FP32)] mean=76.38ms, sd=4.99ms, min=73.10ms, max=91.05ms, median=73.91ms, 95p=88.30ms, 99p=89.42ms [ONNX Runtime GPU (FP16)] mean=34.21ms, sd=1.68ms, min=33.23ms, max=41.80ms, median=33.70ms, 95p=38.87ms, 99p=40.63ms [ONNX Runtime CPU (FP32)] mean=4023.32ms, sd=92.76ms, min=3895.51ms, max=4267.63ms, median=4013.27ms, 95p=4170.44ms, 99p=4248.19ms [ONNX Runtime CPU (FP16)] mean=3956.61ms, sd=167.65ms, min=3709.88ms, max=4188.62ms, median=3914.53ms, 95p=4180.81ms, 99p=4187.06ms [ONNX Runtime CPU (INT-8)] mean=3336.29ms, sd=168.96ms, min=3170.64ms, max=3765.07ms, median=3299.52ms, 95p=3641.01ms, 99p=3740.26ms ###Markdown Measure of the accuracy with ONNX Runtime engine and CUDA provider: ###Code model = create_model_for_provider(path="baseline.onnx", provider_to_use="CUDAExecutionProvider") infer_ort = lambda tokens: model.run(None, tokens) measure_accuracy(infer=infer_ort, int64=True) model = create_model_for_provider(path="baseline-optimized.onnx", provider_to_use="CUDAExecutionProvider") infer_ort = lambda tokens: model.run(None, tokens) measure_accuracy(infer=infer_ort, int64=True) model = create_model_for_provider(path="baseline-quantized.onnx", provider_to_use="CPUExecutionProvider") infer_ort = lambda tokens: model.run(None, tokens) measure_accuracy(infer=infer_ort, int64=True) del model ###Output _____no_output_____
nbs/02b_grid_menu.ipynb
###Markdown Grid MenuThe current notebook develop a grid menu widget that allows clickable widgets to be displayed as grid. The next cell will design the `Grid` class that contain the settings for the `GridMenu` component. ###Code #exporti @attr.define(slots=False) class Grid: width: int height: int n_rows: Optional[int] = 3 n_cols: Optional[int] = 3 disp_number: int = 9 display_label: bool = False @property def num_items(self) -> int: row, col = self.area_adjusted(self.disp_number) return row * col def area_adjusted(self, n_total: int) -> Tuple[int, int]: """Returns the row and col automatic arranged""" if self.n_cols is None: if self.n_rows is None: # automatic arrange label_cols = 3 label_rows = ceil(n_total / label_cols) else: # calc cols to show all labels label_rows = self.n_rows label_cols = ceil(n_total / label_rows) else: if self.n_rows is None: # calc rows to show all labels label_cols = self.n_cols label_rows = ceil(n_total / label_cols) else: # user defined label_cols = self.n_cols label_rows = self.n_rows return label_rows, label_cols @pytest.fixture def grid_fixture() -> Grid: return Grid(width=300, height=300) %%ipytest def test_it_return_num_items(grid_fixture): assert grid_fixture.num_items == 9 %%ipytest def test_it_adjusts_area_missing_args(grid_fixture): grid_fixture.n_rows = None assert grid_fixture.area_adjusted(12) == (4, 3) ###Output _____no_output_____ ###Markdown The `GridMenu` doesn't have a `on_click` event listener, but grid elements itself should implement `on_click(ev)`, `reset_callbacks()` and `update(other: SameWidgetType)` methods to register/reset onclick callback function and update its internal values, respectively. Also grid element shoudl have a field name in order user can destinguish between grid children. ###Code #export class GridMenu(GridBox): debug_output = Output(layout={'border': '1px solid black'}) def __init__( self, grid: Grid, widgets: Optional[Iterable] = None, ): self.callback = None self.gap = 40 if grid.display_label else 15 self.grid = grid n_row, n_col = grid.area_adjusted(grid.disp_number) column = grid.width + self.gap row = grid.height + self.gap centered_settings = { 'grid_template_columns': " ".join([f'{(column)}px' for _ in range(n_col)]), 'grid_template_rows': " ".join([f'{row}px' for _ in range(n_row)]), 'justify_content': 'center', 'align_content': 'space-around' } super().__init__( layout=Layout(**centered_settings) ) if widgets: self.load(widgets) self.widgets = widgets def _fill_widgets(self, widgets: Iterable): if self.widgets is None: self.widgets = widgets self.children = self.widgets if self.callback: self.register_on_click() else: iter_state = iter(widgets) for widget in self.widgets: i_widget = next(iter_state, None) if i_widget: widget.update(i_widget) else: widget.clear() def _filter_widgets(self, widgets: Iterable) -> Iterable: """Limit the number of widgets to be rendered according to the grid's area""" widgets_list = list(widgets) # Iterable don't have len() num_widgets = len(widgets_list) row, col = self.grid.area_adjusted(num_widgets) num_items = row * col if num_widgets > num_items: warnings.warn("!! Not all labels shown. 
Check n_cols, n_rows args !!") return widgets_list[:num_items] return widgets @debug_output.capture(clear_output=False) def load(self, widgets: Iterable, callback: Optional[Callable] = None): widgets_filtered = self._filter_widgets(widgets) self._fill_widgets(widgets_filtered) if callback: self.on_click(callback) @debug_output.capture(clear_output=False) def on_click(self, callback: Callable): setattr(self, 'callback', callback) self.register_on_click() @debug_output.capture(clear_output=False) def register_on_click(self): if self.widgets: for widget in self.widgets: widget.reset_callbacks() widget.on_click( partial( self.callback, value=widget.value ) ) def clear(self): self.widgets = None self.children = tuple() ###Output _____no_output_____ ###Markdown We now can instantiate the grid menu and load widgets on it. For this example we're using the custom widget `ImageButton` to be displayed using the load function. ###Code from ipyannotator.custom_input.buttons import ImageButton, ImageButtonSetting from ipywidgets import HTML from IPython.display import display grid = Grid(width=50, height=75, n_cols=2, n_rows=2) grid_menu = GridMenu(grid) widgets = [] setting = ImageButtonSetting(im_path='../data/projects/capture1/pics/pink25x25.png') for i in range(4): widgets.append(ImageButton(setting)) grid_menu.load(widgets) grid_menu widgets = [] setting = ImageButtonSetting(im_path='../data/projects/capture1/pics/teal50x50_5.png') for i in range(2): widgets.append(ImageButton(setting)) grid_menu.load(widgets) ###Output _____no_output_____ ###Markdown While ipyevents implementation lacks `sender` or `source` in callback args, `functools.partial` used to back element `name` into return value. You can see example of on_click event handler `test_handler` below. name of the button is printed out on click. ###Code # hide h = HTML('Event info') display(h) def test_handler(event, value=None): event.update({'label_name': value}) h.value = event['label_name'] grid_menu.on_click(test_handler) #hide from ipyannotator.custom_input.buttons import ActionButton %%ipytest def test_it_doesnt_load_more_widgets_than_the_grid_area(): with warnings.catch_warnings(record=True) as w: grid = Grid(width=50, height=75, n_cols=1, n_rows=1) grid_menu = GridMenu(grid) widgets = [ActionButton() for _ in range(2)] grid_menu.load(widgets) assert len(grid_menu.widgets) == 1 assert bool(w) is True %%ipytest def test_it_doesnt_throw_warning_if_number_of_widgets_is_small(): with warnings.catch_warnings(record=True) as w: grid = Grid(width=100, height=100, n_rows=2, n_cols=2) grid_menu = GridMenu(grid) grid_menu._filter_widgets([1]) assert bool(w) is False #hide from nbdev.export import notebook2script notebook2script() ###Output _____no_output_____
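###Markdown As an appendix, here is a purely illustrative sketch (not part of the library) of the smallest element that satisfies the contract `GridMenu` relies on above: a `value` attribute plus `on_click`, `reset_callbacks`, `update` and `clear` methods. It reuses a plain `ipywidgets.Button` instead of the `ImageButton`/`ActionButton` widgets shipped with ipyannotator:
###Code
from ipywidgets import Button


class MinimalGridItem(Button):
    """Illustrative grid element implementing the interface GridMenu expects."""

    def __init__(self, value: str = ""):
        super().__init__(description=value)
        self.value = value

    def on_click(self, callback):
        # GridMenu callbacks expect an event dict as first argument, while
        # Button passes the button instance, so wrap with a stand-in dict.
        super().on_click(lambda _btn: callback({}))

    def reset_callbacks(self):
        self._click_handlers.callbacks = []  # drop previously registered handlers

    def update(self, other: "MinimalGridItem"):
        self.value = other.value
        self.description = other.value

    def clear(self):
        self.value = ""
        self.description = ""


# Usage sketch: a 2x2 menu of minimal items.
menu = GridMenu(Grid(width=80, height=40, n_rows=2, n_cols=2))
menu.load([MinimalGridItem(str(i)) for i in range(4)])
###Output
_____no_output_____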
notebooks/work-in-progress/py-js-py-js_demo.ipynb
###Markdown Demo of JS &harr; Python communication ###Code from IPython.display import HTML js=""" alert("Hello Javascript (created in python string") // Lots of pre-written stuff could go here - all generated from Python """ # This is Python, printing out the javascript into the browser window HTML('<script type="text/Javascript">%s</script>' % (js,)) # Nothing will appear be 'output' - but an annoying pop-up will... ###Output _____no_output_____ ###Markdown Create an HTML placeholder ###Code html=""" <input type="text" id="textinput" value="12"/> <input type="submit" id="textsubmit"> """ # This is Python, printing out the javascript into the browser window HTML(html) ###Output _____no_output_____ ###Markdown Create a Python function and Hook up the interactivity ###Code def recalculate_cell_in_python(v): if v % 2 == 0: return v/2 return v*3+1 # Lots more Python could go here # You can also have side-effects, etc # This python import will be 'visible' for the python code executed by the javascript callback # because that happens 'afterwards' as far as the Python kernel is concerned import json js=""" var kernel = IPython.notebook.kernel; $('#textsubmit').off('click').on('click', function(e) { var javascript_cell_value = $('#textinput').val(); var cmd=[ 'python_new_value = recalculate_cell_in_python('+javascript_cell_value+')', 'json.dumps( dict( v=python_new_value ) )' ].join(';'); kernel.execute(cmd, {iopub: {output: handle_python_output}}, {silent:false}); function handle_python_output(msg) { //console.log(msg); if( msg.msg_type == "error" ) { console.log("Javascript received Python error : ", msg.content); } else { // execute_result var res_str = msg.content.data["text/plain"]; // Take off surrounding quotes var res=JSON.parse( res_str.replace(/^['"](.*)['"]$/, "$1") ); $('#textinput').val( res.v ); } } return false; }); """ # Again,this is a Python cell, printing out the javascript into the browser window HTML('<script type="text/Javascript">%s</script>' % (js,)) ###Output _____no_output_____
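###Markdown To see exactly what the Javascript callback receives, the two statements that the click handler sends to the kernel can be evaluated directly in Python; the `text/plain` payload is simply the repr of this JSON string, which is why the Javascript strips the surrounding quotes before `JSON.parse`:
###Code
# Mirrors the `cmd` string built in the Javascript above, for an input value of 12.
python_new_value = recalculate_cell_in_python(12)
payload = json.dumps(dict(v=python_new_value))
print(repr(payload))  # e.g. '{"v": 6.0}' -- what handle_python_output reads from text/plain
###Output
_____no_output_____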
Fraud-Detection.ipynb
###Markdown Research Question ContextFraud monitoring and prevention are the most challenging and costly financial businesses. For example, in 2018, $24.26 Billion was lost due to payment card fraud. Banks and financial houses try to reduce fraud by investing much money in software development. The United States leads as the most credit fraud-prone country, with 38.6% of reported card fraud losses in 2018. (Shiftprocessing, 2022)Analyzing the features of the transactions using machine learning like the logistic regression model could statistically significantly identify the fraudulent transactions, and the study results could be used as proof of concept to develop applications in the future for fraud monitoring and prevention. Newly released Federal Trade Commission data shows that consumers reported losing more than $5.8 billion to fraud in 2021, an increase of more than 70 percent over the previous year.(FTC.gov 2022)Machine learning can be a powerful and influential tool in one of the most challenging and restricted sectors and will drive to increasing the trust for more safe transactions and more financial revenue.Research QuestionThe Financial houses and Banks are still looking for tools to monitor and prevent fraud, and at the same time looking to measure the accuracy, efficiency, and effectiveness that could be summarized in one research question: "To what extent can transactions be identified as a fraud?"JustificationThe research question covers the financial houses’ and banks' actual needs and determines if the transactions could identify as fraud. This research question covers both requirements: the ability and accuracy of the same question.The research results that will answer the research question will provide the details that will help the decision-maker use the research model as a proof of concept.The research question presents the opportunity to the data analytics and data researcher to compare the results, like comparing the predicted data with the test data to define if the model can identify the fraudulent transactions and to what extent.HypothesisThe research hypothesis will be: "Fraudulent transactions can statistically significantly be identified from the provided dataset."The research will evaluate if it can statistically significantly identify fraud transactions. The evidence will be collected to confirm or reject the hypothesis from the logistic regression model as one of the machine learning models. The model evaluation will determine if the thesis can be validated or rejected. Data CollectionData DescriptionFinding a dataset for the historical financial transactions was not easy, including the necessity to answer the research question; any data related to the financial sector is hard to find. The dataset must include enough transactions to train and test the model.Some of the included transactions must be classified as fraud to be healthy data for training the model. The transaction features should identify the fraud transaction characteristics and properties and include a dependent variable that will train the model by labeling or classifying the fraud or non-fraud transactions.The research will use a dataset in a CSV file format named "Fraud Detection Classification" downloaded from Kaggle.com, covering all needed requirements.The dataset is available to the public, does not include any restriction data, includes 101613 transactions(rows) and ten columns. 
The dataset is a sample and a good example for answering the research question and analyzing the transactions. (Kaggle, 2022)The dataset shows that “Cash Out” and “Transfer” are the only transaction types in the fraud scope.Advantage and Disadvantage of the used methodologyThe advantage of searching for and working with public data is that it lets the data scientist find a non-restricted dataset close to what they need for the initial research studies, build models, climb the learning curve, and put together a proof of concept. It helps to answer the research question before moving to a restricted dataset that requires authorization to use.The disadvantage is the lack of control, the limited number of available variables, and the limited number of observations in the dataset. Because the financial sector treats its data as confidential, fewer variables and observations are made public. Furthermore, researchers can use an untrusted public dataset to build initial models, but they cannot entirely rely on it.ChallengesThe challenges relate to finding, studying, and understanding a dataset that covers everything necessary to answer the research question. For example, to answer the research question, the dataset should include:enough variables, with types the model can work with;enough observations;and a dependent variable that can be used as the label to classify the transactions.The variables' names and descriptions were also a challenge for understanding the business behind them. Finally, finding an easy source to collect data from, such as a CSV file, matters and reduces the time needed for the research project. ###Code !pip install -r requirements.txt import pandas as pd import numpy as np pd.set_option('display.max_columns', None) from scipy import stats import statsmodels.api as sm from statsmodels.stats import diagnostic as diag from statsmodels.stats.outliers_influence import variance_inflation_factor import matplotlib.pyplot as plt from IPython.display import Image from IPython.core.display import HTML import seaborn as sns from scipy.stats import weibull_min import matplotlib as mpl from sklearn import linear_model from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error from pandas.plotting import scatter_matrix from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report, confusion_matrix, \ precision_score, accuracy_score, recall_score, f1_score ###Output _____no_output_____ ###Markdown Data Preparation ###Code # load the data set. df = pd.read_csv(r"fraud_dataset_example.csv") df.head() df.describe() df.info() df_after_drop = df.drop(['nameOrig','nameDest','isFlaggedFraud','step'], axis = 1) # filter data df_after_drop = df_after_drop[df_after_drop.type.isin(['TRANSFER','CASH_OUT'])] display(df_after_drop.isnull().any()) df_after_dummies = pd.get_dummies(df_after_drop['type']) df_after_dummies.head() df_after_dummies = df_after_dummies.drop(['TRANSFER'], axis = 1) df_after_dummies = df_after_dummies.astype(float) df = pd.concat([df_after_dummies, df_after_drop], axis=1) df = df.drop(['type'], axis = 1) df.head() df.describe() df.info() #The bivariate visualizations scatter_matrix(df, alpha=0.4, figsize=(20, 20), diagonal='hist'); plt.show() ###Output _____no_output_____ ###Markdown Analysis: Logistic Regression ###Code #Initial Model.
y = df['isFraud'] x = df.drop(['isFraud'], axis = 1) Xc = sm.add_constant(x) logistic_regression = sm.Logit(y,Xc) fitted_model = logistic_regression.fit() fitted_model.summary() print(fitted_model.summary()) # calculate the correlation matrix corr = x.corr() # display the correlation matrix display(corr) # plot the correlation heatmap sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns, cmap='RdBu') # define two data frames one before the drop and one after the drop Xc_before = Xc Xc_after = Xc.drop(['oldbalanceDest'], axis = 1) # the VIF does expect a constant term in the data, so we need to add one using the add_constant method X1 = sm.tools.add_constant(Xc_before) X2 = sm.tools.add_constant(Xc_after) # create the series for both series_before = pd.Series([variance_inflation_factor(X1.values, i) for i in range(X1.shape[1])], index=X1.columns) series_after = pd.Series([variance_inflation_factor(X2.values, i) for i in range(X2.shape[1])], index=X2.columns) # display the series print('VIF before drop') print('-'*100) display(series_before) print('VIF after drop') print('-'*100) display(series_after) # calculate the correlation matrix after the reduction corr = Xc_after.corr() # display the correlation matrix display(corr) # plot the correlation heatmap sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns, cmap='RdBu') # Final Model logistic_regression = sm.Logit(y,Xc_after) fitted_model = logistic_regression.fit() fitted_model.summary() print(fitted_model.summary()) #Cross-validation X_train, X_test, y_train, y_test = train_test_split(Xc_after, y, test_size=0.20, random_state=210) #Evaluation model clf = LogisticRegression() clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print(classification_report(y_pred, y_test)) print('Accuracy Score: ' + str(accuracy_score(y_pred, y_test))) print('Precision Score: ' + str(precision_score(y_pred, y_test))) print('Recall Score: ' + str(recall_score(y_pred, y_test))) print('F1-Score: ' + str(f1_score(y_pred, y_test))) cm = confusion_matrix(y_pred, y_test) fig, ax = plt.subplots(figsize=(5, 5)) ax.imshow(cm) ax.grid(False) ax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted 0s', 'Predicted 1s')) ax.yaxis.set(ticks=(0, 1), ticklabels=('True 0s', 'True 1s')) ax.set_ylim(1.5, -0.5) for i in range(2): for j in range(2): ax.text(j, i, cm[i, j], ha='center', va='center', color='red') plt.show() ###Output _____no_output_____ ###Markdown Extra Machine Learning models outside the research ###Code #Data without reduction: #Cross-validation work based on original selected columns X_train, X_test, y_train, y_test = train_test_split(x, y, \ test_size=0.20, random_state=210) ###Output _____no_output_____ ###Markdown Random Forest ###Code #Import Random Forest Model from sklearn.ensemble import RandomForestClassifier #Create a Random Forest classifier clf=RandomForestClassifier(n_estimators=100) #Train the model using the training sets clf.fit(X_train, y_train) #Predict on the test set y_pred=clf.predict(X_test) acc2 = accuracy_score(y_test, y_pred) prec2 = precision_score(y_test, y_pred) rec2 = recall_score(y_test, y_pred) f12 = f1_score(y_test, y_pred) print(classification_report(y_pred, y_test)) print('Accuracy:%0.4f'%acc2,'\nPrecision:%0.4f'%prec2, \ '\nRecall:%0.4f'%rec2,'\nF1-score:%0.4f'%f12) cm = confusion_matrix(y_pred, y_test) fig, ax = plt.subplots(figsize=(5, 5)) ax.imshow(cm) ax.grid(False) ax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted 0s', 'Predicted 1s')) ax.yaxis.set(ticks=(0, 1), ticklabels=('True 0s', 'True 1s'))
ax.set_ylim(1.5, -0.5) for i in range(2): for j in range(2): ax.text(j, i, cm[i, j], ha='center', va='center', color='red') plt.show() ###Output _____no_output_____ ###Markdown Decision Tree Classifer ###Code # Import Decision Tree Classifier from sklearn.tree import DecisionTreeClassifier dt= DecisionTreeClassifier() # Train Decision Tree Classifer dt = dt.fit(X_train, y_train) #Predict the response for test dataset y_pred_dt = dt.predict(X_test) acc3 = accuracy_score(y_test, y_pred_dt) prec3 = precision_score(y_test, y_pred_dt) rec3 = recall_score(y_test, y_pred_dt) f13 = f1_score(y_test, y_pred_dt) print(classification_report(y_pred_dt, y_test)) print('Accuracy:%0.4f'%acc3,'\nPrecision:%0.4f'%prec3,'\nRecall:%0.4f'%rec3,\ '\nF1-score:%0.4f'%f13) cm = confusion_matrix(y_pred_dt, y_test) fig, ax = plt.subplots(figsize=(5, 5)) ax.imshow(cm) ax.grid(False) ax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted 0s', 'Predicted 1s')) ax.yaxis.set(ticks=(0, 1), ticklabels=('True 0s', 'True 1s')) ax.set_ylim(1.5, -0.5) for i in range(2): for j in range(2): ax.text(j, i, cm[i, j], ha='center', va='center', color='red') plt.show() import cloudpickle as cp cp.dump(dt, open("DecisionTree.pkl", "wb")) cp.dump(clf, open("RandomForestClassifier.pkl", "wb")) exampleTran = { 'CASH_OUT': 0.0, 'amount': 181.0, 'oldbalanceOrg': 181.0, 'newbalanceOrig': 0.0, 'oldbalanceDest': 0.0, 'newbalanceDest': 0.0 } # fraud exampleTran2 = { 'CASH_OUT': 1.0, 'amount': 229133.94, 'oldbalanceOrg': 15325.0, 'newbalanceOrig': 0.0, 'oldbalanceDest': 5083.0, 'newbalanceDest': 51513.44 } # non-fraud def isFraudTran(tran): df = pd.DataFrame(tran, index=[0]) isFraud = dt.predict(df)[0] return { 'isFraud': isFraud } isFraudTran(exampleTran) isFraudTran(exampleTran2) import json from flask import Flask, jsonify, request import pandas as pd import cloudpickle as cp # Load your model. pipeline = cp.load(open('DecisionTree.pkl', 'rb')) def isFraudTran(tran): df = pd.DataFrame(tran, index=[0]) isFraud = pipeline.predict(df)[0] return str({ 'isFraud': isFraud }) tran = { 'CASH_OUT': 0.0, 'amount': 181.0, 'oldbalanceOrg': 181.0, 'newbalanceOrig': 0.0, 'oldbalanceDest': 0.0, 'newbalanceDest': 0.0 } isFraudTran(tran) ###Output _____no_output_____
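###Markdown The cells above load the pickled decision tree and import Flask, but stop short of exposing the model as a web service. The sketch below shows one minimal way the saved DecisionTree.pkl could be served over HTTP; the `/predict` route name and the port are illustrative assumptions rather than part of the original notebook, and the JSON body is expected to carry the same six feature keys used in `exampleTran`. ###Code
import cloudpickle as cp
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the pickled decision tree once at startup (file written by the cell above).
pipeline = cp.load(open('DecisionTree.pkl', 'rb'))

@app.route('/predict', methods=['POST'])  # hypothetical route name
def predict():
    # Expect a JSON payload with the six model features, e.g. the exampleTran dict.
    tran = request.get_json()
    df = pd.DataFrame(tran, index=[0])
    is_fraud = int(pipeline.predict(df)[0])
    return jsonify({'isFraud': is_fraud})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)  # port chosen arbitrarily for this sketch
###Output _____no_output_____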
lectures/timeseries/timeseries-1-live.ipynb
###Markdown Time Series ModelingIn this lecture, we'll do some **basic** work with time series modeling. Time series are surprisingly complicated objects to work with and model, and many people spend their careers considering statistical questions related to effective modeling of timeseries. In this set of lecture notes, we won't be able to go into too much detail, but we will highlight some of the key questions and approaches to addressing them. Note*I had originally intended to approach time series modeling from a deep learning perspective, using TensorFlow. This is possible; see [here](https://www.tensorflow.org/tutorials/structured_data/time_series) for an example. The general idea is actually pretty similar to what we used for text generation. However, a quick check indicated that contemporary best practice is still to use models developed in econometrics and statistics, as these tend to be more accurate and more interpretable.**Parts of these lecture notes are based on [this tutorial](https://towardsdatascience.com/an-end-to-end-project-on-time-series-analysis-and-forecasting-with-python-4835e6bf050b). For an overview of the functionality available in the statsmodels package for timeseries, take a look [here](https://www.statsmodels.org/stable/tsa.html). Here is a [nice overview](https://people.duke.edu/~rnau/411arim.htm) of basic ARIMA models, which can help give some interpretation for the meaning of the `order` parameter that we use below.* ###Code import sqlite3 import pandas as pd import numpy as np from matplotlib import pyplot as plt plt.style.use('seaborn-whitegrid') import statsmodels.api as sm ###Output _____no_output_____ ###Markdown Data: NOAA ClimateFor this lecture, we're actually going to go back to the NOAA climate data that we used early in the quarter. Using the database that we constructed in Week 2, I'm going to grab data for Amundsen-Scott weather station, which is in the deep Antarctic. ###Code with sqlite3.connect("../sql/temps.db") as conn: cmd = \ """ SELECT S.name, T.year, T.month, T.temp FROM temperatures T LEFT JOIN stations S ON T.id = S.id WHERE S.NAME == "AMUNDSEN_SCOTT" AND T.year > 2000 """ df = pd.read_sql_query(cmd, conn) ###Output _____no_output_____ ###Markdown Quick Data PrepThere's a bit of data preparation needed before we can do formal time series modeling. In particular, we need to make a **Date** column, and set it as the index for the timeseries that we care about. ###Code df["Date"] = df["Year"].astype(str) + "-" + df["Month"].astype(str) df["Date"] = pd.to_datetime(df["Date"]) df.head() ###Output _____no_output_____ ###Markdown The next thing we need to do is set the Date as the index for our dataframe. Finally, we are going to want to make predictions and test them, which means that we still perform a train/test split. I'm going to take the most recent 4 years as test data. Finally, let's take a look at our training data. Notice that there is considerable seasonal variation, on the order of 30 degrees Celsius, within each year. This can make it difficult to see trends. For example, would you say that the overall trend in this image is upward, downard, or neutral? It's very difficult to say! Let's now introduce an exploratory tool that can help us think about this kind of question. 
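###Markdown Since this is the live version of the notebook, the index-setting and train/test-split cells are left to be filled in during lecture. One possible sketch is below; it assumes the query results carry a "Date" column as constructed above and a temperature column named "Temp", and it simply holds out the most recent 4 years as test data. ###Code
# Sort by date, use the Date as the index, and keep the temperature series.
df = df.sort_values("Date").set_index("Date")
y = df["Temp"]  # assumes the temperature column is named "Temp"

# Hold out the most recent 4 years as test data.
cutoff = y.index.max() - pd.DateOffset(years=4)
y_train = y[y.index <= cutoff]
y_test = y[y.index > cutoff]
###Output _____no_output_____ ###Markdown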
Time series DecompositionTime series decomposition is technique for exploratory data analysis that allows you to separate a time series into separate components, like this: $$\text{data} = \text{trend} + \text{seasonal} + \text{noise}$$Technically speaking, the above corresponds to an *additive* model. We can also use a multiplicative model: $$\text{data} = \text{trend} \times \text{seasonal} \times \text{noise}$$The choice of which model to use for decomposition can be a tricky one, but additive models are usually a sound place to start. ###Code # specifying period not necessary because we have the frequency defined # so this would also work: # decomposition = sm.tsa.seasonal_decompose(y, model='additive') ###Output _____no_output_____ ###Markdown Time Series ModelingIn this lecture, we'll do some **basic** work with time series modeling. Time series are surprisingly complicated objects to work with and model, and many people spend their careers considering statistical questions related to effective modeling of timeseries. In this set of lecture notes, we won't be able to go into too much detail, but we will highlight some of the key questions and approaches to addressing them. Note*I had originally intended to approach time series modeling from a deep learning perspective, using TensorFlow. This is possible; see [here](https://www.tensorflow.org/tutorials/structured_data/time_series) for an example. The general idea is actually pretty similar to what we used for text generation. However, a quick check indicated that contemporary best practice is still to use models developed in econometrics and statistics, as these tend to be more accurate and more interpretable.**Parts of these lecture notes are based on [this tutorial](https://towardsdatascience.com/an-end-to-end-project-on-time-series-analysis-and-forecasting-with-python-4835e6bf050b). For an overview of the functionality available in the statsmodels package for timeseries, take a look [here](https://www.statsmodels.org/stable/tsa.html). Here is a [nice overview](https://people.duke.edu/~rnau/411arim.htm) of basic ARIMA models, which can help give some interpretation for the meaning of the `order` parameter that we use below.* ###Code import sqlite3 import pandas as pd import numpy as np from matplotlib import pyplot as plt plt.style.use('seaborn-whitegrid') import statsmodels.api as sm ###Output _____no_output_____ ###Markdown Data: NOAA ClimateFor this lecture, we're actually going to go back to the NOAA climate data that we used early in the quarter. Using the database that we constructed in Week 2, I'm going to grab data for Amundsen-Scott weather station, which is in the deep Antarctic. ###Code with sqlite3.connect("../sql/temps.db") as conn: cmd = \ """ SELECT S.name, T.year, T.month, T.temp FROM temperatures T LEFT JOIN stations S ON T.id = S.id WHERE S.NAME == "AMUNDSEN_SCOTT" AND T.year > 2000 """ df = pd.read_sql_query(cmd, conn) ###Output _____no_output_____ ###Markdown Quick Data PrepThere's a bit of data preparation needed before we can do formal time series modeling. In particular, we need to make a **Date** column, and set it as the index for the timeseries that we care about. ###Code df["Date"] = df["Year"].astype(str) + "-" + df["Month"].astype(str) df["Date"] = pd.to_datetime(df["Date"]) df.head() ###Output _____no_output_____ ###Markdown The next thing we need to do is set the Date as the index for our dataframe. 
Finally, we are going to want to make predictions and test them, which means that we still need to perform a train/test split. I'm going to take the most recent 4 years as test data. Now, let's take a look at our training data. Notice that there is considerable seasonal variation, on the order of 30 degrees Celsius, within each year. This can make it difficult to see trends. For example, would you say that the overall trend in this image is upward, downward, or neutral? It's very difficult to say! Let's now introduce an exploratory tool that can help us think about this kind of question. Time series DecompositionTime series decomposition is a technique for exploratory data analysis that allows you to separate a time series into separate components, like this: $$\text{data} = \text{trend} + \text{seasonal} + \text{noise}$$Technically speaking, the above corresponds to an *additive* model. We can also use a multiplicative model: $$\text{data} = \text{trend} \times \text{seasonal} \times \text{noise}$$The choice of which model to use for decomposition can be a tricky one, but additive models are usually a sound place to start. ###Code # specifying period not necessary because we have the frequency defined # so this would also work: # decomposition = sm.tsa.seasonal_decompose(y, model='additive') ###Output _____no_output_____
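###Markdown A sketch of the additive decomposition described above: assuming `y` is the monthly temperature series with a DatetimeIndex (for example, as prepared in the split sketch earlier), `seasonal_decompose` separates it into trend, seasonal, and residual components; passing `period=12` makes the monthly seasonality explicit in case the frequency is not inferred from the index. ###Code
import statsmodels.api as sm
from matplotlib import pyplot as plt

# Additive decomposition: data = trend + seasonal + residual.
decomposition = sm.tsa.seasonal_decompose(y, model='additive', period=12)

# statsmodels provides a built-in four-panel plot of the components.
fig = decomposition.plot()
fig.set_size_inches(10, 7)
plt.show()
###Output _____no_output_____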
docs/docs/colab-notebook/orca/quickstart/pytorch_lenet_mnist.ipynb
###Markdown ![image.png](data:image/png;base64,/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8SEhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEUHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAARCABNAI0DASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD7LrzPT/i1p958WpvAy2O2NZHgjvjPw8yrkptxxyGXOeorrfiDr0XhnwbqmuyEFrW3Zogf4pDwg/FiK+WW8OajpHw10T4lwM51E6w0zuTyU3fu2P1dG/77r2crwNLEQlKr192P+J6/16niZpj6uHnGNLp70v8ACnY+gP2hfiafhR4AXxUNGGr5vYrX7P8AafJ++GO7dtbpt6Y710Hwx8baL8QPBVh4q0Gbfa3aZaNj88Eg+/G47Mp4/IjgivC/25dVt9b/AGY9P1i0IMF5qVnOnPQMkhx+HT8K84+Gt/q/7OmseF9du5bm7+HXjfTrSa6Yjd9humiUs3Hdck/7UeRyUrx3FxbT3R7UZKSTWzPozQvjRp+q/HrVPhPHol3HeafE8j3rTKY3Coj8L1/jH5VN+0Z8XLP4Q+DrbWpdN/tW7vLsW1tZ/aPJ3/KWdi21sBQB26kV4d8Npobj/goV4puYJUlhlsZHjkRgyupt4CGBHUEHOaZ8W7WD42/te2Pw8lkaTQPDVhMLwoxwJSm52/B2hT/gJpDPqD4aeLbLxx4C0bxZp6eXBqdqk3lltxifo6E9yrAr+Fcd8IvjNp/xD8beKfDFpol3Yy+HpWilmlmVlmIlePIA5HKZ5ryv9gjxDeadbeKvhRrRKaj4fv3lijbqEL7JVHsJFB/7aVmfsV/8l7+Lv/X6/wD6VzUAeufDf456d4p+Kmr/AA41Pw/e+H9d00SYS5nR1nMZG4IV/wBkhx6rk1j+Pf2kNM0D4jap4J0TwnqniW90q2ee9ks5kVY/LQvIvIJJUYB/2jt61xX7bXhTUPC+r6J8cvCMyWesaTcRW962B+8BJWJyP4sZMbDurDsK3P2Gvh22j+Crj4i60/2nX/FJM4mc7nS2LEjJ/vO2Xb/gPpQBiN+2ZpiXq2T/AA38RrdPysBlQSN9F25PQ16T4d+OVpq0XhJpPC+p2UniMkJHPIoa3xcND8wIyc7d3HY15N49z/w8U8Jcn/jyj7/9O9xXffG8Z+PfgEHPL24/8ma78uw8K9ZxntZv7k2efmVedCipw3ul97Ol+O/x48I/CdYbTUkuNT1q4TzINNtSN+zON7seEUkEDqT2B5rkPhB+0u/jX4g6d4O1f4fap4duNUEjWc0s+9HCIznIZEOMKeRnnFec/s8WFr4+/bA8f+JvE0aXlzo885sYZhuETLP5MbYP9xFwPQkHrX2HdWFndTW01xbQzS2snmwO6BmifBBZSeVOCRx2JrgPQPn/AMdftQ22m+Pb3wn4K8Cax40n01mW+msnIVChw+wKjlgp4LHAyOM9a9b+EHjyy+I/ge18U2Gm6hp0M7vH5N7HtcMh2tgjhlzkbh6HoQRXyVc6b8Vf2Z/iN4k8T6X4dTxF4S1SUyT3IUsvlb2dd7L80LruIJIKnPfjH1N8C/iXoHxR8FLr+hQyWnlymC7s5cb7eYAMVyOGBDAhh1z2OQADvaKKKAPF/wBpaHxDrseieE9D0u+uI7m4E11PHAzRJztQMwGAASzHPoKgv/2edCXSp0tNb1p7pYW8lZJU8oyAHbkbemcd69b8Wa7Y+GfDOpeIdT837FptrJdT+Um5tiAk4Hc4HSvD/wDhsH4Qf89Nf/8ABd/9lXpU80r0aUKVF8qV/m/M8yplVCtVlUqrmb/BeR5n8T9H8a6v+y5N4RHhXXZr/TtegeCBLCVnaBlkJ2gDJCtuyR03CvoTRPAmneL/ANnXQvBfivT5Ujl0G0hmjkTbNbSrCuGAPKujD9CDxmuH/wCGwfhB/wA9Nf8A/Bd/9nR/w2D8IP8Anpr/AP4Lv/s65MTX9vVlVta+tjrwtD6vRjSve3U8V+BXgrxj8JP2gtefVtF1PUxo+h3rWs9vaySJfKqKYVjIByWAAC9RyO1aXwJ/ZsvfiFpus+MfiVd+J/D+rX2oyFIYlFvLID8zyOJEJwXYgdPu19g+BvEll4v8K2HiTTre9gsr+LzbdbuLypGQk7W25OARyPUEGsP4mfFfwH8OrcP4r8QW1nO67orRMy3Eg9RGuWx7nA96wOg+aLD4Xa98Df2m/C+p+E7HxH4g8NahGIb+7+ztO0SysY5RK0agAKdkgyOg9qwPh5rXxM+E/wAXPH2r6f8ACTxF4gh1jUZ1R1tZ40CrcSMHVhGwYENXo+qftseB4rpo9O8K+IbuJTgSSNFFu9wNzfrW/wCC/wBr74Wa3cpa6sNW8PSM20SXsAeHP+/GWx9SAKAD9omTxP8AED9kz7XH4T1O21vUHtJpNIjgklnhInGVK7Q3AGTwK9I/Zzsb3Tfgb4PsNRtJ7O7g0uJJoJ4ykkbDOQynkH2Ndpo+p6frGmw6lpV9bX1lOu+G4t5RJHIPUMODVugD5a8beGvEU/7efhjxDBoOqS6PDaRrLfpaubdD5E4wZMbRyQOvcV23xi0fV7741+Cb+z0u9uLW3aHz54oGZIsXGTuYDA455r2+vLfih8fvhl8PbmSx1nXRdanGcPYaennzIfRsEKh9mYGunCYl4apzpX0a+9WOXF4VYmnyN21T+53PGvjF8P8A4h/C7403Pxi+FmmPrNlqJZtV02KMyMC+DKCi/MyMQHDLkq3bGM9P8Lfjl8R/iH8RNH0aL4Y3vh7RQZG1W9uEllAxE+xQzIgQF9vqx6cVhy/tteDhORF4N8QPDn77Swq2
P93J/nXe/Dv9qP4UeLrqKxk1S40C9lICRatGIkZs4wJQSn5kVzHUeZ678aPjT4Th8R+D/G3wzudc1O8edNNvbGFzaGOTKqoCo3mxjPHIbHDc816B+xF8N9d+H/wyu5PEts9lqWsXgufsj/fgiVAqBx2Y/MSOwIB5yK94jdJUDowZWAKlTkEHvT6ACiiigDO8T6Jp/iPw7qGg6rG0thqFu9tcIrlC0bghgCORweorxv8A4ZM+Cv8A0L99/wCDSf8A+Kr3WigD548T/sz/AAE8OeHr/XtX0e9t7Cwge4uJDqk/yooJP8XJ7AdyQK+TvgL8OrT4sfG37Dp+mSWHheCdr27h8xnMForfLCXPJZuEz15Y9q9s/wCCgfxSMstr8K9FnJOUutYMZySesMBx+Dkf7nvXtP7JXwvX4Z/C2CO/gEevattvNTLDDRkj5Ifoinn/AGi1AGH+1f8AGyD4TeGrbw74ZW3/AOElvoMWqBQUsIB8olK9M8YRenBJ4GD4P8CP2cPEXxXc+PPiNrGoWmm6g3noWbfe6gCfvlnzsQ9iQSR0AGDXN+HbZvjz+13I+ps02l3OoyTSqScCxt87Y/YMqqv1cmvsXxn8YNF8F+NIvCtzo84tIIo/PuYjgQKy5GyMDLALjp9ADitqGHqV5ONNXaV/kjGviKdCKlUdk3b5kWi/s2/BjS7NbdPBNndEDDS3c0szt7ks3H4AVy3xD/ZJ+F+v2UreH7a58MagV/dy2srSw7u26Jycj/dKmtL4gWvjH4oabKnh+8sbDSbVopEia4Ia5ZwSN8qEqNqFG2DIBcZYkYHf/Dx9V0SztfCfiO7W71C3tVe3uxuxdRjAYZPJeMkA+qlG7kBypQVFVFPVvbqvMmNabrSpuGiW/R+R8PeHNf8AiR+y38Tzouro93otwwkmtUcm2voc482En7kg9eCCMMCK+/8Awj4g0vxT4asPEOi3K3OnX8CzwSDup7EdiDkEdiCK8q/bF8C2fjb4I6tdJCj6locbajZSryRsGZUz6Mgbj1C+leG/sY/FSXw38GfiBYXcnmDw5aNq2no/I+dWBT6eYEP1c1gdBv8A7ZHx/wBS07VZvhn4BupYr7iLVL+3JMqM3/LvERyG5G5hyM7Rg5qh8Dv2P473T4dc+KV3dJNOBIukW0mxkzz++k5O71Vends8Vx/7CHg8eNfi/qnjXXs3v9igXW6UZ8y9mZtrn1IxI312ntX35QB5Xb/s7fBiC1FsngHTGQDG52ld/wDvotn9a8v+LX7HfhHU9PnvPh/dT6DqagtHazytNaSn+6S2XT65Ye1fUlFAHwT+zr8ZPFHwf8cH4afEn7TFosdwLZkujl9Lc9HU94TkEgcYIZe4P3qjK6B0YMpGQQcgivkj/gop4CtZ/DWlfEK0hRL20nWwvWUYMkL7jGW/3WBH0f2FeofsW+L7jxd8BdKe9mM15pMj6ZM7NksIsGMn/tmyD8KAPaKKKKACuP8AjL470/4c/DvVPFeobX+yx7baAnBnnbiOMfU9fQAntXYV8Cftj+PL/wCJ/wAXrH4b+FN15Z6Zdi0ijiORc37nY7fRfuA9vnPQ0AM/Y+8Cah8VPjFf/EbxXuvLLTLv7bPJIPlub5zuRPov3yO2EHQ197ajG8un3EURw7xOq/UggVynwV8Baf8ADf4c6Z4Usdjtbx77qcDH2i4bmST8TwPRQB2rsz0oA/PH9gaRLL9oh7a7+WeXS7uBFbr5gZGI+uEavqP4mfEbRNB+KVvpdz4ITWL23hULdLGrXJMikqkK7SW6469SQPf5U+Omkav8Df2nk8V6TARaTXx1fTichJUdj50BPbBZ0I67WU96+5/h94g8JfEPw/pnjXREtLwSR4jleJTPav8AxRMeqMpOCM+44INdGGq06Um6kbqzW9jmxVKpVilTlZ3T2ueK2ngHVPEngKSztdestLa3uUuJbRpz9luFkDFZncEjzASY+OP3RB3EAjubvwbq91pGk+DLfxhqkl/Y2yzXF4oj2Wi7GRQp2eZmTLIAWzsDk/w5vfGH4TW3jCIXWj3EWmam0gNwW3eTcqM8uinG8E5DYzyQfbr/AIe+GYfCfhe10lZ2up40BuLl87pnwBnkk4AAUDsqgdq3lXthIRU9U27W2879fQ540b4ucnDRxSvffyt09TzTw94M1D4b/Bzxy3iLVbe5il065l8qJmMcarA4J+bHLZGeOw618W/BCxu7r4ffFiW3VikXhdN+P+vuF/8A0GN/1r6R/bz+L1jp3hiX4ZaJdLLquobW1Qxtn7Nbg7hGx7O5A47KDn7wrT/Yt+FCWPwN1e48RWpSTxnEwkjZcMLIoyR/i293Hsy1zYivPEVHUnuzrw9CGHpqnDZHN/8ABNO7tzpPjWxDKLhbi0lI7lCsgH6g/nX2BX5u/CHxHqX7O/7Q15pniSOUWCSNp+phVPzwMQY7hB3AwrjuVJHU1+jGk6hZarp1vqOnXUN3Z3MaywzwuGSRCMhgR1BrE2LVFFITigDwr9u+5gg/Zx1iKYgPc3dpFDnu4mV+P+Ao1cr/AME4baeP4R67cuCIZtccR577YYgT+teT/twfFWHx74vsPAXhSX+0NP0q4PmyW/zi6vW+QKmPvBASoI6szY4AJ+t/2dvArfDv4Q6J4ZnC/bo4jPfEHObiQ7nGe+0kLn0WgD0GiiigDyD9rH4or8M/hdc3FlOE17VN1ppig/MjEfPN9EXn/eKjvX5//Bz4iXPw38ajxZbaLYaxqEcTpAb4uRCz8NINpBLYyMn+8a/Vi5s7W5Km4t4Ziv3TJGGx9M1F/ZWm/wDQPtP+/C/4UAfDn/DbXjv/AKFLw3+c/wD8XR/w2147/wChS8N/nP8A/F19x/2Vpv8A0D7T/vwv+FH9lab/ANA+0/78L/hQB4b4bsdP/ad/Z7tb3xjp9tp17NcT/ZJ7IEm0kjcoHXeSSCB8yk4I9OCPmS+8NfHD9mzxLPqWkm5OlO3zXltEZ7C6QdPNT+Bsf3sMOcHvX6K28EVvH5cMaRoOiooUfkKe6hlKsAQRggjrQB8UaH+3BqkVqE1rwDZ3VwF5ktNRaFSf91kfH51znjP9rX4m+MlOieC9Eh0OS5yi/Yle7vGz2RsYB9wufQivs/VPhl8O9UnNxqPgbw1dzE5Mkulwlj9Tt5rW0Dwz4d0BCmhaFpelKRgiztI4c/8AfIFAHx3+zv8Asta3qutxeMPizFLFb+b9oXSrh99xdyZzuuDztUnkqTubvgdftiKNIkWONVVFACqBgADsKcKKAPF/2mfgNpXxZ0qO9tJotN8T2cZW1vWX5JU5PlS45K56MOVJPUEg/JmgeL/jl+zhqTaNfWNxFpPmEizv4zNYyknlopFOFJ6/Kw68jNfo3UN5a295bvb3UEU8LjDRyIGVh7g8GgD40tv247lbIrcfDqF7oDG6PVyqE/Qxk/rXBeMvjz8ZfjNK/hbwtpk1naXI2SWWixO8sintJMeQvrjaMda+3J/hV8NJ7n7TN8P/AAs82c7zpMOc/wDfNdNpGk6ZpFr9l0rTrOwt/wDnlbQLEn5
KAKAPm39lX9mdfBF5B4x8ceRc+IYxus7KMh4rEn+Mt0eX0xwvYk4I+nqKKACiiigD/9k=) --- Copyright 2018 Analytics Zoo Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ###Output _____no_output_____ ###Markdown **Environment Preparation** **Install Java 8** Run the cell on the **Google Colab** to install jdk 1.8. **Note:** if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up in your computer). ###Code # Install jdk8 !apt-get install openjdk-8-jdk-headless -qq > /dev/null import os # Set environment variable JAVA_HOME. os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" !update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java !java -version ###Output _____no_output_____ ###Markdown **Install Analytics Zoo** [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) is needed to prepare the Python environment for running this example. **Note**: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the [install guide](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) for more details. ###Code # Install Miniconda !wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh !chmod +x Miniconda3-4.5.4-Linux-x86_64.sh !./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local # Update Conda !conda install --channel defaults conda python=3.6 --yes !conda update --channel defaults --all --yes # Append to the sys.path import sys _ = (sys.path .append("/usr/local/lib/python3.6/site-packages")) os.environ['PYTHONHOME']="/usr/local" ###Output _____no_output_____ ###Markdown You can install the latest pre-release version using `pip install --pre analytics-zoo`. ###Code # Install latest pre-release version of Analytics Zoo # Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies. !pip install --pre analytics-zoo # Install python dependencies !pip install torch==1.7.1 torchvision==0.8.2 !pip install six cloudpickle !pip install jep==3.9.0 ###Output _____no_output_____ ###Markdown **Distributed PyTorch using Orca APIs** In this guide we will describe how to scale out PyTorch (v1.5+) programs using Orca in 4 simple steps. ###Code # import necesary libraries and modules from __future__ import print_function import os import argparse from zoo.orca import init_orca_context, stop_orca_context from zoo.orca import OrcaContext ###Output _____no_output_____ ###Markdown **Step 1: Init Orca Context** ###Code # recommended to set it to True when running Analytics Zoo in Jupyter notebook. OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook). 
cluster_mode = "local" if cluster_mode == "local": init_orca_context(cores=1, memory="2g") # run in local mode elif cluster_mode == "k8s": init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4) # run on K8s cluster elif cluster_mode == "yarn": init_orca_context( cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g", driver_memory="10g", driver_cores=1, conf={"spark.rpc.message.maxSize": "1024", "spark.task.maxFailures": "1", "spark.driver.extraJavaOptions": "-Dbigdl.failure.retryTimes=1"}) # run on Hadoop YARN cluster ###Output _____no_output_____ ###Markdown This is the only place where you need to specify local or distributed mode. View [Orca Context](https://analytics-zoo.readthedocs.io/en/latest/doc/Orca/Overview/orca-context.html) for more details. **Note**: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster. **Step 2: Define the Model** You may define your model, loss and optimizer in the same way as in any standard (single node) PyTorch program. ###Code import torch import torch.nn as nn import torch.nn.functional as F class LeNet(nn.Module): def __init__(self): super(LeNet, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5, 1) self.conv2 = nn.Conv2d(20, 50, 5, 1) self.fc1 = nn.Linear(4*4*50, 500) self.fc2 = nn.Linear(500, 10) def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = x.view(-1, 4*4*50) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1) model = LeNet() model.train() criterion = nn.NLLLoss() lr = 0.001 adam = torch.optim.Adam(model.parameters(), lr) ###Output _____no_output_____ ###Markdown **Step 3: Define Train Dataset** You can define the dataset using standard [Pytorch DataLoader](https://pytorch.org/docs/stable/data.html). Orca also supports a data creator function or [Orca SparkXShards](./data). ###Code import torch from torchvision import datasets, transforms torch.manual_seed(0) dir='./dataset' batch_size=320 test_batch_size=320 train_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size= batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=False) ###Output _____no_output_____ ###Markdown **Step 4: Fit with Orca Estimator** First, Create an Estimator. ###Code from zoo.orca.learn.pytorch import Estimator est = Estimator.from_torch(model=model, optimizer=adam, loss=criterion) ###Output _____no_output_____ ###Markdown Next, fit and evaluate using the Estimator. ###Code from zoo.orca.learn.metrics import Accuracy from zoo.orca.learn.trigger import EveryEpoch est.fit(data=train_loader, epochs=1, validation_data=test_loader, validation_metrics=[Accuracy()], checkpoint_trigger=EveryEpoch()) ###Output _____no_output_____ ###Markdown Finally, evaluate using the Estimator. ###Code result = est.evaluate(data=test_loader, validation_metrics=[Accuracy()]) for r in result: print(str(r)) ###Output _____no_output_____ ###Markdown The accuracy of this model has reached 98%. 
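###Markdown Beyond the aggregate metric above, a quick local sanity check with plain PyTorch (outside of the Orca API) can be run on a single test batch to inspect individual predictions. This is an illustrative addition; it assumes the in-memory `model` object reflects the trained weights after `est.fit`, which may depend on the Orca backend in use. ###Code
import torch

# Take one batch from the test loader and compare predictions with the labels.
model.eval()
with torch.no_grad():
    images, labels = next(iter(test_loader))
    preds = model(images).argmax(dim=1)

print("predicted:", preds[:10].tolist())
print("actual:   ", labels[:10].tolist())
print("batch accuracy:", (preds == labels).float().mean().item())
###Output _____no_output_____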
###Code # stop orca context when program finishes stop_orca_context() ###Output _____no_output_____ ###Markdown ![image.png](data:image/png;base64,/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8SEhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEUHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAARCABNAI0DASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD7LrzPT/i1p958WpvAy2O2NZHgjvjPw8yrkptxxyGXOeorrfiDr0XhnwbqmuyEFrW3Zogf4pDwg/FiK+WW8OajpHw10T4lwM51E6w0zuTyU3fu2P1dG/77r2crwNLEQlKr192P+J6/16niZpj6uHnGNLp70v8ACnY+gP2hfiafhR4AXxUNGGr5vYrX7P8AafJ++GO7dtbpt6Y710Hwx8baL8QPBVh4q0Gbfa3aZaNj88Eg+/G47Mp4/IjgivC/25dVt9b/AGY9P1i0IMF5qVnOnPQMkhx+HT8K84+Gt/q/7OmseF9du5bm7+HXjfTrSa6Yjd9humiUs3Hdck/7UeRyUrx3FxbT3R7UZKSTWzPozQvjRp+q/HrVPhPHol3HeafE8j3rTKY3Coj8L1/jH5VN+0Z8XLP4Q+DrbWpdN/tW7vLsW1tZ/aPJ3/KWdi21sBQB26kV4d8Npobj/goV4puYJUlhlsZHjkRgyupt4CGBHUEHOaZ8W7WD42/te2Pw8lkaTQPDVhMLwoxwJSm52/B2hT/gJpDPqD4aeLbLxx4C0bxZp6eXBqdqk3lltxifo6E9yrAr+Fcd8IvjNp/xD8beKfDFpol3Yy+HpWilmlmVlmIlePIA5HKZ5ryv9gjxDeadbeKvhRrRKaj4fv3lijbqEL7JVHsJFB/7aVmfsV/8l7+Lv/X6/wD6VzUAeufDf456d4p+Kmr/AA41Pw/e+H9d00SYS5nR1nMZG4IV/wBkhx6rk1j+Pf2kNM0D4jap4J0TwnqniW90q2ee9ks5kVY/LQvIvIJJUYB/2jt61xX7bXhTUPC+r6J8cvCMyWesaTcRW962B+8BJWJyP4sZMbDurDsK3P2Gvh22j+Crj4i60/2nX/FJM4mc7nS2LEjJ/vO2Xb/gPpQBiN+2ZpiXq2T/AA38RrdPysBlQSN9F25PQ16T4d+OVpq0XhJpPC+p2UniMkJHPIoa3xcND8wIyc7d3HY15N49z/w8U8Jcn/jyj7/9O9xXffG8Z+PfgEHPL24/8ma78uw8K9ZxntZv7k2efmVedCipw3ul97Ol+O/x48I/CdYbTUkuNT1q4TzINNtSN+zON7seEUkEDqT2B5rkPhB+0u/jX4g6d4O1f4fap4duNUEjWc0s+9HCIznIZEOMKeRnnFec/s8WFr4+/bA8f+JvE0aXlzo885sYZhuETLP5MbYP9xFwPQkHrX2HdWFndTW01xbQzS2snmwO6BmifBBZSeVOCRx2JrgPQPn/AMdftQ22m+Pb3wn4K8Cax40n01mW+msnIVChw+wKjlgp4LHAyOM9a9b+EHjyy+I/ge18U2Gm6hp0M7vH5N7HtcMh2tgjhlzkbh6HoQRXyVc6b8Vf2Z/iN4k8T6X4dTxF4S1SUyT3IUsvlb2dd7L80LruIJIKnPfjH1N8C/iXoHxR8FLr+hQyWnlymC7s5cb7eYAMVyOGBDAhh1z2OQADvaKKKAPF/wBpaHxDrseieE9D0u+uI7m4E11PHAzRJztQMwGAASzHPoKgv/2edCXSp0tNb1p7pYW8lZJU8oyAHbkbemcd69b8Wa7Y+GfDOpeIdT837FptrJdT+Um5tiAk4Hc4HSvD/wDhsH4Qf89Nf/8ABd/9lXpU80r0aUKVF8qV/m/M8yplVCtVlUqrmb/BeR5n8T9H8a6v+y5N4RHhXXZr/TtegeCBLCVnaBlkJ2gDJCtuyR03CvoTRPAmneL/ANnXQvBfivT5Ujl0G0hmjkTbNbSrCuGAPKujD9CDxmuH/wCGwfhB/wA9Nf8A/Bd/9nR/w2D8IP8Anpr/AP4Lv/s65MTX9vVlVta+tjrwtD6vRjSve3U8V+BXgrxj8JP2gtefVtF1PUxo+h3rWs9vaySJfKqKYVjIByWAAC9RyO1aXwJ/ZsvfiFpus+MfiVd+J/D+rX2oyFIYlFvLID8zyOJEJwXYgdPu19g+BvEll4v8K2HiTTre9gsr+LzbdbuLypGQk7W25OARyPUEGsP4mfFfwH8OrcP4r8QW1nO67orRMy3Eg9RGuWx7nA96wOg+aLD4Xa98Df2m/C+p+E7HxH4g8NahGIb+7+ztO0SysY5RK0agAKdkgyOg9qwPh5rXxM+E/wAXPH2r6f8ACTxF4gh1jUZ1R1tZ40CrcSMHVhGwYENXo+qftseB4rpo9O8K+IbuJTgSSNFFu9wNzfrW/wCC/wBr74Wa3cpa6sNW8PSM20SXsAeHP+/GWx9SAKAD9omTxP8AED9kz7XH4T1O21vUHtJpNIjgklnhInGVK7Q3AGTwK9I/Zzsb3Tfgb4PsNRtJ7O7g0uJJoJ4ykkbDOQynkH2Ndpo+p6frGmw6lpV9bX1lOu+G4t5RJHIPUMODVugD5a8beGvEU/7efhjxDBoOqS6PDaRrLfpaubdD5E4wZMbRyQOvcV23xi0fV7741+Cb+z0u9uLW3aHz54oGZIsXGTuYDA455r2+vLfih8fvhl8PbmSx1nXRdanGcPYaennzIfRsEKh9mYGunCYl4apzpX0a+9WOXF4VYmnyN21T+53PGvjF8P8A4h/C7403Pxi+FmmPrNlqJZtV02KM
yMC+DKCi/MyMQHDLkq3bGM9P8Lfjl8R/iH8RNH0aL4Y3vh7RQZG1W9uEllAxE+xQzIgQF9vqx6cVhy/tteDhORF4N8QPDn77Swq2P93J/nXe/Dv9qP4UeLrqKxk1S40C9lICRatGIkZs4wJQSn5kVzHUeZ678aPjT4Th8R+D/G3wzudc1O8edNNvbGFzaGOTKqoCo3mxjPHIbHDc816B+xF8N9d+H/wyu5PEts9lqWsXgufsj/fgiVAqBx2Y/MSOwIB5yK94jdJUDowZWAKlTkEHvT6ACiiigDO8T6Jp/iPw7qGg6rG0thqFu9tcIrlC0bghgCORweorxv8A4ZM+Cv8A0L99/wCDSf8A+Kr3WigD548T/sz/AAE8OeHr/XtX0e9t7Cwge4uJDqk/yooJP8XJ7AdyQK+TvgL8OrT4sfG37Dp+mSWHheCdr27h8xnMForfLCXPJZuEz15Y9q9s/wCCgfxSMstr8K9FnJOUutYMZySesMBx+Dkf7nvXtP7JXwvX4Z/C2CO/gEevattvNTLDDRkj5Ifoinn/AGi1AGH+1f8AGyD4TeGrbw74ZW3/AOElvoMWqBQUsIB8olK9M8YRenBJ4GD4P8CP2cPEXxXc+PPiNrGoWmm6g3noWbfe6gCfvlnzsQ9iQSR0AGDXN+HbZvjz+13I+ps02l3OoyTSqScCxt87Y/YMqqv1cmvsXxn8YNF8F+NIvCtzo84tIIo/PuYjgQKy5GyMDLALjp9ADitqGHqV5ONNXaV/kjGviKdCKlUdk3b5kWi/s2/BjS7NbdPBNndEDDS3c0szt7ks3H4AVy3xD/ZJ+F+v2UreH7a58MagV/dy2srSw7u26Jycj/dKmtL4gWvjH4oabKnh+8sbDSbVopEia4Ia5ZwSN8qEqNqFG2DIBcZYkYHf/Dx9V0SztfCfiO7W71C3tVe3uxuxdRjAYZPJeMkA+qlG7kBypQVFVFPVvbqvMmNabrSpuGiW/R+R8PeHNf8AiR+y38Tzouro93otwwkmtUcm2voc482En7kg9eCCMMCK+/8Awj4g0vxT4asPEOi3K3OnX8CzwSDup7EdiDkEdiCK8q/bF8C2fjb4I6tdJCj6locbajZSryRsGZUz6Mgbj1C+leG/sY/FSXw38GfiBYXcnmDw5aNq2no/I+dWBT6eYEP1c1gdBv8A7ZHx/wBS07VZvhn4BupYr7iLVL+3JMqM3/LvERyG5G5hyM7Rg5qh8Dv2P473T4dc+KV3dJNOBIukW0mxkzz++k5O71Vends8Vx/7CHg8eNfi/qnjXXs3v9igXW6UZ8y9mZtrn1IxI312ntX35QB5Xb/s7fBiC1FsngHTGQDG52ld/wDvotn9a8v+LX7HfhHU9PnvPh/dT6DqagtHazytNaSn+6S2XT65Ye1fUlFAHwT+zr8ZPFHwf8cH4afEn7TFosdwLZkujl9Lc9HU94TkEgcYIZe4P3qjK6B0YMpGQQcgivkj/gop4CtZ/DWlfEK0hRL20nWwvWUYMkL7jGW/3WBH0f2FeofsW+L7jxd8BdKe9mM15pMj6ZM7NksIsGMn/tmyD8KAPaKKKKACuP8AjL470/4c/DvVPFeobX+yx7baAnBnnbiOMfU9fQAntXYV8Cftj+PL/wCJ/wAXrH4b+FN15Z6Zdi0ijiORc37nY7fRfuA9vnPQ0AM/Y+8Cah8VPjFf/EbxXuvLLTLv7bPJIPlub5zuRPov3yO2EHQ197ajG8un3EURw7xOq/UggVynwV8Baf8ADf4c6Z4Usdjtbx77qcDH2i4bmST8TwPRQB2rsz0oA/PH9gaRLL9oh7a7+WeXS7uBFbr5gZGI+uEavqP4mfEbRNB+KVvpdz4ITWL23hULdLGrXJMikqkK7SW6469SQPf5U+Omkav8Df2nk8V6TARaTXx1fTichJUdj50BPbBZ0I67WU96+5/h94g8JfEPw/pnjXREtLwSR4jleJTPav8AxRMeqMpOCM+44INdGGq06Um6kbqzW9jmxVKpVilTlZ3T2ueK2ngHVPEngKSztdestLa3uUuJbRpz9luFkDFZncEjzASY+OP3RB3EAjubvwbq91pGk+DLfxhqkl/Y2yzXF4oj2Wi7GRQp2eZmTLIAWzsDk/w5vfGH4TW3jCIXWj3EWmam0gNwW3eTcqM8uinG8E5DYzyQfbr/AIe+GYfCfhe10lZ2up40BuLl87pnwBnkk4AAUDsqgdq3lXthIRU9U27W2879fQ540b4ucnDRxSvffyt09TzTw94M1D4b/Bzxy3iLVbe5il065l8qJmMcarA4J+bHLZGeOw618W/BCxu7r4ffFiW3VikXhdN+P+vuF/8A0GN/1r6R/bz+L1jp3hiX4ZaJdLLquobW1Qxtn7Nbg7hGx7O5A47KDn7wrT/Yt+FCWPwN1e48RWpSTxnEwkjZcMLIoyR/i293Hsy1zYivPEVHUnuzrw9CGHpqnDZHN/8ABNO7tzpPjWxDKLhbi0lI7lCsgH6g/nX2BX5u/CHxHqX7O/7Q15pniSOUWCSNp+phVPzwMQY7hB3AwrjuVJHU1+jGk6hZarp1vqOnXUN3Z3MaywzwuGSRCMhgR1BrE2LVFFITigDwr9u+5gg/Zx1iKYgPc3dpFDnu4mV+P+Ao1cr/AME4baeP4R67cuCIZtccR577YYgT+teT/twfFWHx74vsPAXhSX+0NP0q4PmyW/zi6vW+QKmPvBASoI6szY4AJ+t/2dvArfDv4Q6J4ZnC/bo4jPfEHObiQ7nGe+0kLn0WgD0GiiigDyD9rH4or8M/hdc3FlOE17VN1ppig/MjEfPN9EXn/eKjvX5//Bz4iXPw38ajxZbaLYaxqEcTpAb4uRCz8NINpBLYyMn+8a/Vi5s7W5Km4t4Ziv3TJGGx9M1F/ZWm/wDQPtP+/C/4UAfDn/DbXjv/AKFLw3+c/wD8XR/w2147/wChS8N/nP8A/F19x/2Vpv8A0D7T/vwv+FH9lab/ANA+0/78L/hQB4b4bsdP/ad/Z7tb3xjp9tp17NcT/ZJ7IEm0kjcoHXeSSCB8yk4I9OCPmS+8NfHD9mzxLPqWkm5OlO3zXltEZ7C6QdPNT+Bsf3sMOcHvX6K28EVvH5cMaRoOiooUfkKe6hlKsAQRggjrQB8UaH+3BqkVqE1rwDZ3VwF5ktNRaFSf91kfH51znjP9rX4m+MlOieC9Eh0OS5yi/Yle7vGz2RsYB9wufQivs/VPhl8O9UnNxqPgbw1dzE5Mkulwlj9Tt5rW0Dwz4d0BCmhaFpelKRgiztI4c/8AfIFAHx3+zv8Asta3qutxeMPizFLFb+b9oXSrh99xdyZzuuDztUnkqTubvgdftiKNIkWONVVFACqBgADsKcKKAPF/2mfgNpXxZ0qO9tJotN8T2cZW1vWX5JU5PlS45K56MOVJPUEg/JmgeL/jl+zhqTaNfWNxFpPmEizv4zNYyknlopFOFJ6/Kw68jNfo3UN5a295bvb3UEU8LjDRyIGVh7g8GgD40tv247lbIrcfDqF7oDG6PVyqE/Qxk/rXBeMvjz8ZfjN
K/hbwtpk1naXI2SWWixO8sintJMeQvrjaMda+3J/hV8NJ7n7TN8P/AAs82c7zpMOc/wDfNdNpGk6ZpFr9l0rTrOwt/wDnlbQLEn5KAKAPm39lX9mdfBF5B4x8ceRc+IYxus7KMh4rEn+Mt0eX0xwvYk4I+nqKKACiiigD/9k=)--- Copyright 2018 Analytics Zoo Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ###Output _____no_output_____ ###Markdown **Environment Preparation** **Install Java 8**Run the cell on the **Google Colab** to install jdk 1.8.**Note:** if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up in your computer). ###Code # Install jdk8 !apt-get install openjdk-8-jdk-headless -qq > /dev/null import os # Set environment variable JAVA_HOME. os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" !update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java !java -version ###Output _____no_output_____ ###Markdown **Install Analytics Zoo** [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) is needed to prepare the Python environment for running this example. **Note**: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the [install guide](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) for more details. ###Code import sys # Set current python version python_version = f"3.7.10" # Install Miniconda !wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh !chmod +x Miniconda3-4.5.4-Linux-x86_64.sh !./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local # Update Conda !conda install --channel defaults conda python=$python_version --yes !conda update --channel defaults --all --yes # Append to the sys.path _ = (sys.path .append(f"/usr/local/lib/python3.7/site-packages")) os.environ['PYTHONHOME']="/usr/local" ###Output _____no_output_____ ###Markdown You can install the latest pre-release version using `pip install --pre --upgrade analytics-zoo`. ###Code # Install latest pre-release version of Analytics Zoo # Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies. !pip install --pre --upgrade analytics-zoo # Install python dependencies !pip install torch==1.7.1 torchvision==0.8.2 !pip install six cloudpickle !pip install jep==3.9.0 ###Output _____no_output_____ ###Markdown **Distributed PyTorch using Orca APIs**In this guide we will describe how to scale out PyTorch programs using Orca in 4 simple steps. ###Code # import necesary libraries and modules from __future__ import print_function import os import argparse from zoo.orca import init_orca_context, stop_orca_context from zoo.orca import OrcaContext ###Output _____no_output_____ ###Markdown **Step 1: Init Orca Context** ###Code # recommended to set it to True when running Analytics Zoo in Jupyter notebook. OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook). 
cluster_mode = "local" if cluster_mode == "local": init_orca_context(cores=1, memory="2g") # run in local mode elif cluster_mode == "k8s": init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4) # run on K8s cluster elif cluster_mode == "yarn": init_orca_context( cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g", driver_memory="10g", driver_cores=1, conf={"spark.rpc.message.maxSize": "1024", "spark.task.maxFailures": "1", "spark.driver.extraJavaOptions": "-Dbigdl.failure.retryTimes=1"}) # run on Hadoop YARN cluster ###Output _____no_output_____ ###Markdown This is the only place where you need to specify local or distributed mode. View [Orca Context](https://analytics-zoo.readthedocs.io/en/latest/doc/Orca/Overview/orca-context.html) for more details.**Note**: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster. **Step 2: Define the Model**You may define your model, loss and optimizer in the same way as in any standard (single node) PyTorch program. ###Code import torch import torch.nn as nn import torch.nn.functional as F class LeNet(nn.Module): def __init__(self): super(LeNet, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5, 1) self.conv2 = nn.Conv2d(20, 50, 5, 1) self.fc1 = nn.Linear(4*4*50, 500) self.fc2 = nn.Linear(500, 10) def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = x.view(-1, 4*4*50) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1) model = LeNet() model.train() criterion = nn.NLLLoss() lr = 0.001 adam = torch.optim.Adam(model.parameters(), lr) ###Output _____no_output_____ ###Markdown **Step 3: Define Train Dataset**You can define the dataset using standard [Pytorch DataLoader](https://pytorch.org/docs/stable/data.html). ###Code import torch from torchvision import datasets, transforms torch.manual_seed(0) dir='/tmp/dataset' batch_size=320 test_batch_size=320 train_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size= batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=False) ###Output _____no_output_____ ###Markdown **Step 4: Fit with Orca Estimator** First, Create an Estimator. ###Code from zoo.orca.learn.pytorch import Estimator from zoo.orca.learn.metrics import Accuracy est = Estimator.from_torch(model=model, optimizer=adam, loss=criterion, metrics=[Accuracy()]) ###Output _____no_output_____ ###Markdown Next, fit and evaluate using the Estimator. ###Code from zoo.orca.learn.trigger import EveryEpoch est.fit(data=train_loader, epochs=1, validation_data=test_loader, checkpoint_trigger=EveryEpoch()) ###Output _____no_output_____ ###Markdown Finally, evaluate using the Estimator. ###Code result = est.evaluate(data=test_loader) for r in result: print(r, ":", result[r]) ###Output _____no_output_____ ###Markdown The accuracy of this model has reached 98%. 
###Code # stop orca context when program finishes stop_orca_context() ###Output _____no_output_____ ###Markdown ![image.png](data:image/png;base64,/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8SEhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEUHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAARCABNAI0DASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD7LrzPT/i1p958WpvAy2O2NZHgjvjPw8yrkptxxyGXOeorrfiDr0XhnwbqmuyEFrW3Zogf4pDwg/FiK+WW8OajpHw10T4lwM51E6w0zuTyU3fu2P1dG/77r2crwNLEQlKr192P+J6/16niZpj6uHnGNLp70v8ACnY+gP2hfiafhR4AXxUNGGr5vYrX7P8AafJ++GO7dtbpt6Y710Hwx8baL8QPBVh4q0Gbfa3aZaNj88Eg+/G47Mp4/IjgivC/25dVt9b/AGY9P1i0IMF5qVnOnPQMkhx+HT8K84+Gt/q/7OmseF9du5bm7+HXjfTrSa6Yjd9humiUs3Hdck/7UeRyUrx3FxbT3R7UZKSTWzPozQvjRp+q/HrVPhPHol3HeafE8j3rTKY3Coj8L1/jH5VN+0Z8XLP4Q+DrbWpdN/tW7vLsW1tZ/aPJ3/KWdi21sBQB26kV4d8Npobj/goV4puYJUlhlsZHjkRgyupt4CGBHUEHOaZ8W7WD42/te2Pw8lkaTQPDVhMLwoxwJSm52/B2hT/gJpDPqD4aeLbLxx4C0bxZp6eXBqdqk3lltxifo6E9yrAr+Fcd8IvjNp/xD8beKfDFpol3Yy+HpWilmlmVlmIlePIA5HKZ5ryv9gjxDeadbeKvhRrRKaj4fv3lijbqEL7JVHsJFB/7aVmfsV/8l7+Lv/X6/wD6VzUAeufDf456d4p+Kmr/AA41Pw/e+H9d00SYS5nR1nMZG4IV/wBkhx6rk1j+Pf2kNM0D4jap4J0TwnqniW90q2ee9ks5kVY/LQvIvIJJUYB/2jt61xX7bXhTUPC+r6J8cvCMyWesaTcRW962B+8BJWJyP4sZMbDurDsK3P2Gvh22j+Crj4i60/2nX/FJM4mc7nS2LEjJ/vO2Xb/gPpQBiN+2ZpiXq2T/AA38RrdPysBlQSN9F25PQ16T4d+OVpq0XhJpPC+p2UniMkJHPIoa3xcND8wIyc7d3HY15N49z/w8U8Jcn/jyj7/9O9xXffG8Z+PfgEHPL24/8ma78uw8K9ZxntZv7k2efmVedCipw3ul97Ol+O/x48I/CdYbTUkuNT1q4TzINNtSN+zON7seEUkEDqT2B5rkPhB+0u/jX4g6d4O1f4fap4duNUEjWc0s+9HCIznIZEOMKeRnnFec/s8WFr4+/bA8f+JvE0aXlzo885sYZhuETLP5MbYP9xFwPQkHrX2HdWFndTW01xbQzS2snmwO6BmifBBZSeVOCRx2JrgPQPn/AMdftQ22m+Pb3wn4K8Cax40n01mW+msnIVChw+wKjlgp4LHAyOM9a9b+EHjyy+I/ge18U2Gm6hp0M7vH5N7HtcMh2tgjhlzkbh6HoQRXyVc6b8Vf2Z/iN4k8T6X4dTxF4S1SUyT3IUsvlb2dd7L80LruIJIKnPfjH1N8C/iXoHxR8FLr+hQyWnlymC7s5cb7eYAMVyOGBDAhh1z2OQADvaKKKAPF/wBpaHxDrseieE9D0u+uI7m4E11PHAzRJztQMwGAASzHPoKgv/2edCXSp0tNb1p7pYW8lZJU8oyAHbkbemcd69b8Wa7Y+GfDOpeIdT837FptrJdT+Um5tiAk4Hc4HSvD/wDhsH4Qf89Nf/8ABd/9lXpU80r0aUKVF8qV/m/M8yplVCtVlUqrmb/BeR5n8T9H8a6v+y5N4RHhXXZr/TtegeCBLCVnaBlkJ2gDJCtuyR03CvoTRPAmneL/ANnXQvBfivT5Ujl0G0hmjkTbNbSrCuGAPKujD9CDxmuH/wCGwfhB/wA9Nf8A/Bd/9nR/w2D8IP8Anpr/AP4Lv/s65MTX9vVlVta+tjrwtD6vRjSve3U8V+BXgrxj8JP2gtefVtF1PUxo+h3rWs9vaySJfKqKYVjIByWAAC9RyO1aXwJ/ZsvfiFpus+MfiVd+J/D+rX2oyFIYlFvLID8zyOJEJwXYgdPu19g+BvEll4v8K2HiTTre9gsr+LzbdbuLypGQk7W25OARyPUEGsP4mfFfwH8OrcP4r8QW1nO67orRMy3Eg9RGuWx7nA96wOg+aLD4Xa98Df2m/C+p+E7HxH4g8NahGIb+7+ztO0SysY5RK0agAKdkgyOg9qwPh5rXxM+E/wAXPH2r6f8ACTxF4gh1jUZ1R1tZ40CrcSMHVhGwYENXo+qftseB4rpo9O8K+IbuJTgSSNFFu9wNzfrW/wCC/wBr74Wa3cpa6sNW8PSM20SXsAeHP+/GWx9SAKAD9omTxP8AED9kz7XH4T1O21vUHtJpNIjgklnhInGVK7Q3AGTwK9I/Zzsb3Tfgb4PsNRtJ7O7g0uJJoJ4ykkbDOQynkH2Ndpo+p6frGmw6lpV9bX1lOu+G4t5RJHIPUMODVugD5a8beGvEU/7efhjxDBoOqS6PDaRrLfpaubdD5E4wZMbRyQOvcV23xi0fV7741+Cb+z0u9uLW3aHz54oGZIsXGTuYDA455r2+vLfih8fvhl8PbmSx1nXRdanGcPYaennzIfRsEKh9mYGunCYl4apzpX0a+9WOXF4VYmnyN21T+53PGvjF8P8A4h/C7403Pxi+FmmPrNlqJZtV02KM
yMC+DKCi/MyMQHDLkq3bGM9P8Lfjl8R/iH8RNH0aL4Y3vh7RQZG1W9uEllAxE+xQzIgQF9vqx6cVhy/tteDhORF4N8QPDn77Swq2P93J/nXe/Dv9qP4UeLrqKxk1S40C9lICRatGIkZs4wJQSn5kVzHUeZ678aPjT4Th8R+D/G3wzudc1O8edNNvbGFzaGOTKqoCo3mxjPHIbHDc816B+xF8N9d+H/wyu5PEts9lqWsXgufsj/fgiVAqBx2Y/MSOwIB5yK94jdJUDowZWAKlTkEHvT6ACiiigDO8T6Jp/iPw7qGg6rG0thqFu9tcIrlC0bghgCORweorxv8A4ZM+Cv8A0L99/wCDSf8A+Kr3WigD548T/sz/AAE8OeHr/XtX0e9t7Cwge4uJDqk/yooJP8XJ7AdyQK+TvgL8OrT4sfG37Dp+mSWHheCdr27h8xnMForfLCXPJZuEz15Y9q9s/wCCgfxSMstr8K9FnJOUutYMZySesMBx+Dkf7nvXtP7JXwvX4Z/C2CO/gEevattvNTLDDRkj5Ifoinn/AGi1AGH+1f8AGyD4TeGrbw74ZW3/AOElvoMWqBQUsIB8olK9M8YRenBJ4GD4P8CP2cPEXxXc+PPiNrGoWmm6g3noWbfe6gCfvlnzsQ9iQSR0AGDXN+HbZvjz+13I+ps02l3OoyTSqScCxt87Y/YMqqv1cmvsXxn8YNF8F+NIvCtzo84tIIo/PuYjgQKy5GyMDLALjp9ADitqGHqV5ONNXaV/kjGviKdCKlUdk3b5kWi/s2/BjS7NbdPBNndEDDS3c0szt7ks3H4AVy3xD/ZJ+F+v2UreH7a58MagV/dy2srSw7u26Jycj/dKmtL4gWvjH4oabKnh+8sbDSbVopEia4Ia5ZwSN8qEqNqFG2DIBcZYkYHf/Dx9V0SztfCfiO7W71C3tVe3uxuxdRjAYZPJeMkA+qlG7kBypQVFVFPVvbqvMmNabrSpuGiW/R+R8PeHNf8AiR+y38Tzouro93otwwkmtUcm2voc482En7kg9eCCMMCK+/8Awj4g0vxT4asPEOi3K3OnX8CzwSDup7EdiDkEdiCK8q/bF8C2fjb4I6tdJCj6locbajZSryRsGZUz6Mgbj1C+leG/sY/FSXw38GfiBYXcnmDw5aNq2no/I+dWBT6eYEP1c1gdBv8A7ZHx/wBS07VZvhn4BupYr7iLVL+3JMqM3/LvERyG5G5hyM7Rg5qh8Dv2P473T4dc+KV3dJNOBIukW0mxkzz++k5O71Vends8Vx/7CHg8eNfi/qnjXXs3v9igXW6UZ8y9mZtrn1IxI312ntX35QB5Xb/s7fBiC1FsngHTGQDG52ld/wDvotn9a8v+LX7HfhHU9PnvPh/dT6DqagtHazytNaSn+6S2XT65Ye1fUlFAHwT+zr8ZPFHwf8cH4afEn7TFosdwLZkujl9Lc9HU94TkEgcYIZe4P3qjK6B0YMpGQQcgivkj/gop4CtZ/DWlfEK0hRL20nWwvWUYMkL7jGW/3WBH0f2FeofsW+L7jxd8BdKe9mM15pMj6ZM7NksIsGMn/tmyD8KAPaKKKKACuP8AjL470/4c/DvVPFeobX+yx7baAnBnnbiOMfU9fQAntXYV8Cftj+PL/wCJ/wAXrH4b+FN15Z6Zdi0ijiORc37nY7fRfuA9vnPQ0AM/Y+8Cah8VPjFf/EbxXuvLLTLv7bPJIPlub5zuRPov3yO2EHQ197ajG8un3EURw7xOq/UggVynwV8Baf8ADf4c6Z4Usdjtbx77qcDH2i4bmST8TwPRQB2rsz0oA/PH9gaRLL9oh7a7+WeXS7uBFbr5gZGI+uEavqP4mfEbRNB+KVvpdz4ITWL23hULdLGrXJMikqkK7SW6469SQPf5U+Omkav8Df2nk8V6TARaTXx1fTichJUdj50BPbBZ0I67WU96+5/h94g8JfEPw/pnjXREtLwSR4jleJTPav8AxRMeqMpOCM+44INdGGq06Um6kbqzW9jmxVKpVilTlZ3T2ueK2ngHVPEngKSztdestLa3uUuJbRpz9luFkDFZncEjzASY+OP3RB3EAjubvwbq91pGk+DLfxhqkl/Y2yzXF4oj2Wi7GRQp2eZmTLIAWzsDk/w5vfGH4TW3jCIXWj3EWmam0gNwW3eTcqM8uinG8E5DYzyQfbr/AIe+GYfCfhe10lZ2up40BuLl87pnwBnkk4AAUDsqgdq3lXthIRU9U27W2879fQ540b4ucnDRxSvffyt09TzTw94M1D4b/Bzxy3iLVbe5il065l8qJmMcarA4J+bHLZGeOw618W/BCxu7r4ffFiW3VikXhdN+P+vuF/8A0GN/1r6R/bz+L1jp3hiX4ZaJdLLquobW1Qxtn7Nbg7hGx7O5A47KDn7wrT/Yt+FCWPwN1e48RWpSTxnEwkjZcMLIoyR/i293Hsy1zYivPEVHUnuzrw9CGHpqnDZHN/8ABNO7tzpPjWxDKLhbi0lI7lCsgH6g/nX2BX5u/CHxHqX7O/7Q15pniSOUWCSNp+phVPzwMQY7hB3AwrjuVJHU1+jGk6hZarp1vqOnXUN3Z3MaywzwuGSRCMhgR1BrE2LVFFITigDwr9u+5gg/Zx1iKYgPc3dpFDnu4mV+P+Ao1cr/AME4baeP4R67cuCIZtccR577YYgT+teT/twfFWHx74vsPAXhSX+0NP0q4PmyW/zi6vW+QKmPvBASoI6szY4AJ+t/2dvArfDv4Q6J4ZnC/bo4jPfEHObiQ7nGe+0kLn0WgD0GiiigDyD9rH4or8M/hdc3FlOE17VN1ppig/MjEfPN9EXn/eKjvX5//Bz4iXPw38ajxZbaLYaxqEcTpAb4uRCz8NINpBLYyMn+8a/Vi5s7W5Km4t4Ziv3TJGGx9M1F/ZWm/wDQPtP+/C/4UAfDn/DbXjv/AKFLw3+c/wD8XR/w2147/wChS8N/nP8A/F19x/2Vpv8A0D7T/vwv+FH9lab/ANA+0/78L/hQB4b4bsdP/ad/Z7tb3xjp9tp17NcT/ZJ7IEm0kjcoHXeSSCB8yk4I9OCPmS+8NfHD9mzxLPqWkm5OlO3zXltEZ7C6QdPNT+Bsf3sMOcHvX6K28EVvH5cMaRoOiooUfkKe6hlKsAQRggjrQB8UaH+3BqkVqE1rwDZ3VwF5ktNRaFSf91kfH51znjP9rX4m+MlOieC9Eh0OS5yi/Yle7vGz2RsYB9wufQivs/VPhl8O9UnNxqPgbw1dzE5Mkulwlj9Tt5rW0Dwz4d0BCmhaFpelKRgiztI4c/8AfIFAHx3+zv8Asta3qutxeMPizFLFb+b9oXSrh99xdyZzuuDztUnkqTubvgdftiKNIkWONVVFACqBgADsKcKKAPF/2mfgNpXxZ0qO9tJotN8T2cZW1vWX5JU5PlS45K56MOVJPUEg/JmgeL/jl+zhqTaNfWNxFpPmEizv4zNYyknlopFOFJ6/Kw68jNfo3UN5a295bvb3UEU8LjDRyIGVh7g8GgD40tv247lbIrcfDqF7oDG6PVyqE/Qxk/rXBeMvjz8ZfjN
K/hbwtpk1naXI2SWWixO8sintJMeQvrjaMda+3J/hV8NJ7n7TN8P/AAs82c7zpMOc/wDfNdNpGk6ZpFr9l0rTrOwt/wDnlbQLEn5KAKAPm39lX9mdfBF5B4x8ceRc+IYxus7KMh4rEn+Mt0eX0xwvYk4I+nqKKACiiigD/9k=)--- Copyright 2018 Analytics Zoo Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ###Output _____no_output_____ ###Markdown **Environment Preparation** **Install Java 8**Run the cell on the **Google Colab** to install jdk 1.8.**Note:** if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up in your computer). ###Code # Install jdk8 !apt-get install openjdk-8-jdk-headless -qq > /dev/null import os # Set environment variable JAVA_HOME. os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" !update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java !java -version ###Output _____no_output_____ ###Markdown **Install Analytics Zoo** [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) is needed to prepare the Python environment for running this example. **Note**: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the [install guide](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) for more details. ###Code import sys # Set current python version python_version = f"3.7.10" # Install Miniconda !wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh !chmod +x Miniconda3-4.5.4-Linux-x86_64.sh !./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local # Update Conda !conda install --channel defaults conda python=$python_version --yes !conda update --channel defaults --all --yes # Append to the sys.path _ = (sys.path .append(f"/usr/local/lib/python3.7/site-packages")) os.environ['PYTHONHOME']="/usr/local" ###Output _____no_output_____ ###Markdown You can install the latest pre-release version using `pip install --pre --upgrade analytics-zoo`. ###Code # Install latest pre-release version of Analytics Zoo # Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies. !pip install --pre --upgrade analytics-zoo # Install python dependencies !pip install torch==1.7.1 torchvision==0.8.2 !pip install six cloudpickle !pip install jep==3.9.0 ###Output _____no_output_____ ###Markdown **Distributed PyTorch using Orca APIs**In this guide we will describe how to scale out PyTorch programs using Orca in 4 simple steps. ###Code # import necesary libraries and modules from __future__ import print_function import os import argparse from zoo.orca import init_orca_context, stop_orca_context from zoo.orca import OrcaContext ###Output _____no_output_____ ###Markdown **Step 1: Init Orca Context** ###Code # recommended to set it to True when running Analytics Zoo in Jupyter notebook. OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook). 
cluster_mode = "local" if cluster_mode == "local": init_orca_context(cores=1, memory="2g") # run in local mode elif cluster_mode == "k8s": init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4) # run on K8s cluster elif cluster_mode == "yarn": init_orca_context( cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g", driver_memory="10g", driver_cores=1, conf={"spark.rpc.message.maxSize": "1024", "spark.task.maxFailures": "1", "spark.driver.extraJavaOptions": "-Dbigdl.failure.retryTimes=1"}) # run on Hadoop YARN cluster ###Output _____no_output_____ ###Markdown This is the only place where you need to specify local or distributed mode. View [Orca Context](https://analytics-zoo.readthedocs.io/en/latest/doc/Orca/Overview/orca-context.html) for more details.**Note**: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster. **Step 2: Define the Model**You may define your model, loss and optimizer in the same way as in any standard (single node) PyTorch program. ###Code import torch import torch.nn as nn import torch.nn.functional as F class LeNet(nn.Module): def __init__(self): super(LeNet, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5, 1) self.conv2 = nn.Conv2d(20, 50, 5, 1) self.fc1 = nn.Linear(4*4*50, 500) self.fc2 = nn.Linear(500, 10) def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = x.view(-1, 4*4*50) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1) model = LeNet() model.train() criterion = nn.NLLLoss() lr = 0.001 adam = torch.optim.Adam(model.parameters(), lr) ###Output _____no_output_____ ###Markdown **Step 3: Define Train Dataset**You can define the dataset using standard [Pytorch DataLoader](https://pytorch.org/docs/stable/data.html). ###Code !wget www.di.ens.fr/~lelarge/MNIST.tar.gz !tar -zxvf MNIST.tar.gz import torch from torchvision import datasets, transforms torch.manual_seed(0) dir='./' batch_size=320 test_batch_size=320 train_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size= batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=False) ###Output _____no_output_____ ###Markdown **Step 4: Fit with Orca Estimator** First, Create an Estimator. ###Code from zoo.orca.learn.pytorch import Estimator from zoo.orca.learn.metrics import Accuracy est = Estimator.from_torch(model=model, optimizer=adam, loss=criterion, metrics=[Accuracy()]) ###Output _____no_output_____ ###Markdown Next, fit and evaluate using the Estimator. ###Code from zoo.orca.learn.trigger import EveryEpoch est.fit(data=train_loader, epochs=1, validation_data=test_loader, checkpoint_trigger=EveryEpoch()) ###Output _____no_output_____ ###Markdown Finally, evaluate using the Estimator. ###Code result = est.evaluate(data=test_loader) for r in result: print(r, ":", result[r]) ###Output _____no_output_____ ###Markdown The accuracy of this model has reached 98%. 
###Code # stop orca context when program finishes stop_orca_context() ###Output _____no_output_____ ###Markdown ![image.png](data:image/png;base64,/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8SEhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEUHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAARCABNAI0DASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD7LrzPT/i1p958WpvAy2O2NZHgjvjPw8yrkptxxyGXOeorrfiDr0XhnwbqmuyEFrW3Zogf4pDwg/FiK+WW8OajpHw10T4lwM51E6w0zuTyU3fu2P1dG/77r2crwNLEQlKr192P+J6/16niZpj6uHnGNLp70v8ACnY+gP2hfiafhR4AXxUNGGr5vYrX7P8AafJ++GO7dtbpt6Y710Hwx8baL8QPBVh4q0Gbfa3aZaNj88Eg+/G47Mp4/IjgivC/25dVt9b/AGY9P1i0IMF5qVnOnPQMkhx+HT8K84+Gt/q/7OmseF9du5bm7+HXjfTrSa6Yjd9humiUs3Hdck/7UeRyUrx3FxbT3R7UZKSTWzPozQvjRp+q/HrVPhPHol3HeafE8j3rTKY3Coj8L1/jH5VN+0Z8XLP4Q+DrbWpdN/tW7vLsW1tZ/aPJ3/KWdi21sBQB26kV4d8Npobj/goV4puYJUlhlsZHjkRgyupt4CGBHUEHOaZ8W7WD42/te2Pw8lkaTQPDVhMLwoxwJSm52/B2hT/gJpDPqD4aeLbLxx4C0bxZp6eXBqdqk3lltxifo6E9yrAr+Fcd8IvjNp/xD8beKfDFpol3Yy+HpWilmlmVlmIlePIA5HKZ5ryv9gjxDeadbeKvhRrRKaj4fv3lijbqEL7JVHsJFB/7aVmfsV/8l7+Lv/X6/wD6VzUAeufDf456d4p+Kmr/AA41Pw/e+H9d00SYS5nR1nMZG4IV/wBkhx6rk1j+Pf2kNM0D4jap4J0TwnqniW90q2ee9ks5kVY/LQvIvIJJUYB/2jt61xX7bXhTUPC+r6J8cvCMyWesaTcRW962B+8BJWJyP4sZMbDurDsK3P2Gvh22j+Crj4i60/2nX/FJM4mc7nS2LEjJ/vO2Xb/gPpQBiN+2ZpiXq2T/AA38RrdPysBlQSN9F25PQ16T4d+OVpq0XhJpPC+p2UniMkJHPIoa3xcND8wIyc7d3HY15N49z/w8U8Jcn/jyj7/9O9xXffG8Z+PfgEHPL24/8ma78uw8K9ZxntZv7k2efmVedCipw3ul97Ol+O/x48I/CdYbTUkuNT1q4TzINNtSN+zON7seEUkEDqT2B5rkPhB+0u/jX4g6d4O1f4fap4duNUEjWc0s+9HCIznIZEOMKeRnnFec/s8WFr4+/bA8f+JvE0aXlzo885sYZhuETLP5MbYP9xFwPQkHrX2HdWFndTW01xbQzS2snmwO6BmifBBZSeVOCRx2JrgPQPn/AMdftQ22m+Pb3wn4K8Cax40n01mW+msnIVChw+wKjlgp4LHAyOM9a9b+EHjyy+I/ge18U2Gm6hp0M7vH5N7HtcMh2tgjhlzkbh6HoQRXyVc6b8Vf2Z/iN4k8T6X4dTxF4S1SUyT3IUsvlb2dd7L80LruIJIKnPfjH1N8C/iXoHxR8FLr+hQyWnlymC7s5cb7eYAMVyOGBDAhh1z2OQADvaKKKAPF/wBpaHxDrseieE9D0u+uI7m4E11PHAzRJztQMwGAASzHPoKgv/2edCXSp0tNb1p7pYW8lZJU8oyAHbkbemcd69b8Wa7Y+GfDOpeIdT837FptrJdT+Um5tiAk4Hc4HSvD/wDhsH4Qf89Nf/8ABd/9lXpU80r0aUKVF8qV/m/M8yplVCtVlUqrmb/BeR5n8T9H8a6v+y5N4RHhXXZr/TtegeCBLCVnaBlkJ2gDJCtuyR03CvoTRPAmneL/ANnXQvBfivT5Ujl0G0hmjkTbNbSrCuGAPKujD9CDxmuH/wCGwfhB/wA9Nf8A/Bd/9nR/w2D8IP8Anpr/AP4Lv/s65MTX9vVlVta+tjrwtD6vRjSve3U8V+BXgrxj8JP2gtefVtF1PUxo+h3rWs9vaySJfKqKYVjIByWAAC9RyO1aXwJ/ZsvfiFpus+MfiVd+J/D+rX2oyFIYlFvLID8zyOJEJwXYgdPu19g+BvEll4v8K2HiTTre9gsr+LzbdbuLypGQk7W25OARyPUEGsP4mfFfwH8OrcP4r8QW1nO67orRMy3Eg9RGuWx7nA96wOg+aLD4Xa98Df2m/C+p+E7HxH4g8NahGIb+7+ztO0SysY5RK0agAKdkgyOg9qwPh5rXxM+E/wAXPH2r6f8ACTxF4gh1jUZ1R1tZ40CrcSMHVhGwYENXo+qftseB4rpo9O8K+IbuJTgSSNFFu9wNzfrW/wCC/wBr74Wa3cpa6sNW8PSM20SXsAeHP+/GWx9SAKAD9omTxP8AED9kz7XH4T1O21vUHtJpNIjgklnhInGVK7Q3AGTwK9I/Zzsb3Tfgb4PsNRtJ7O7g0uJJoJ4ykkbDOQynkH2Ndpo+p6frGmw6lpV9bX1lOu+G4t5RJHIPUMODVugD5a8beGvEU/7efhjxDBoOqS6PDaRrLfpaubdD5E4wZMbRyQOvcV23xi0fV7741+Cb+z0u9uLW3aHz54oGZIsXGTuYDA455r2+vLfih8fvhl8PbmSx1nXRdanGcPYaennzIfRsEKh9mYGunCYl4apzpX0a+9WOXF4VYmnyN21T+53PGvjF8P8A4h/C7403Pxi+FmmPrNlqJZtV02KM
K/hbwtpk1naXI2SWWixO8sintJMeQvrjaMda+3J/hV8NJ7n7TN8P/AAs82c7zpMOc/wDfNdNpGk6ZpFr9l0rTrOwt/wDnlbQLEn5KAKAPm39lX9mdfBF5B4x8ceRc+IYxus7KMh4rEn+Mt0eX0xwvYk4I+nqKKACiiigD/9k=) --- Copyright 2018 Analytics Zoo Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ###Output _____no_output_____ ###Markdown **Environment Preparation** **Install Java 8** Run the cell on the **Google Colab** to install jdk 1.8. **Note:** if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up in your computer). ###Code # Install jdk8 !apt-get install openjdk-8-jdk-headless -qq > /dev/null import os # Set environment variable JAVA_HOME. os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" !update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java !java -version ###Output _____no_output_____ ###Markdown **Install Analytics Zoo** [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) is needed to prepare the Python environment for running this example. **Note**: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the [install guide](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) for more details. ###Code # Install Miniconda !wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh !chmod +x Miniconda3-4.5.4-Linux-x86_64.sh !./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local # Update Conda !conda install --channel defaults conda python=3.6 --yes !conda update --channel defaults --all --yes # Append to the sys.path import sys _ = (sys.path .append("/usr/local/lib/python3.6/site-packages")) os.environ['PYTHONHOME']="/usr/local" ###Output _____no_output_____ ###Markdown You can install the latest pre-release version using `pip install --pre analytics-zoo`. ###Code # Install latest pre-release version of Analytics Zoo # Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies. !pip install --pre analytics-zoo # Install python dependencies !pip install torch==1.7.1 torchvision==0.8.2 !pip install six cloudpickle !pip install jep==3.9.0 ###Output _____no_output_____ ###Markdown **Distributed PyTorch using Orca APIs** In this guide we will describe how to scale out PyTorch (v1.5+) programs using Orca in 4 simple steps. ###Code # import necesary libraries and modules from __future__ import print_function import os import argparse from zoo.orca import init_orca_context, stop_orca_context from zoo.orca import OrcaContext ###Output _____no_output_____ ###Markdown **Step 1: Init Orca Context** ###Code # recommended to set it to True when running Analytics Zoo in Jupyter notebook. OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook). 
cluster_mode = "local" if cluster_mode == "local": init_orca_context(cores=1, memory="2g") # run in local mode elif cluster_mode == "k8s": init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4) # run on K8s cluster elif cluster_mode == "yarn": init_orca_context( cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g", driver_memory="10g", driver_cores=1, conf={"spark.rpc.message.maxSize": "1024", "spark.task.maxFailures": "1", "spark.driver.extraJavaOptions": "-Dbigdl.failure.retryTimes=1"}) # run on Hadoop YARN cluster ###Output _____no_output_____ ###Markdown This is the only place where you need to specify local or distributed mode. View [Orca Context](https://analytics-zoo.readthedocs.io/en/latest/doc/Orca/Overview/orca-context.html) for more details. **Note**: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster. **Step 2: Define the Model** You may define your model, loss and optimizer in the same way as in any standard (single node) PyTorch program. ###Code import torch import torch.nn as nn import torch.nn.functional as F class LeNet(nn.Module): def __init__(self): super(LeNet, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5, 1) self.conv2 = nn.Conv2d(20, 50, 5, 1) self.fc1 = nn.Linear(4*4*50, 500) self.fc2 = nn.Linear(500, 10) def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = x.view(-1, 4*4*50) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1) model = LeNet() model.train() criterion = nn.NLLLoss() lr = 0.001 adam = torch.optim.Adam(model.parameters(), lr) ###Output _____no_output_____ ###Markdown **Step 3: Define Train Dataset** You can define the dataset using standard [Pytorch DataLoader](https://pytorch.org/docs/stable/data.html). Orca also supports a data creator function or [Orca SparkXShards](./data). ###Code import torch from torchvision import datasets, transforms torch.manual_seed(0) dir='./dataset' batch_size=320 test_batch_size=320 train_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size= batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=False) ###Output _____no_output_____ ###Markdown **Step 4: Fit with Orca Estimator** First, Create an Estimator. ###Code from zoo.orca.learn.pytorch import Estimator from zoo.orca.learn.metrics import Accuracy est = Estimator.from_torch(model=model, optimizer=adam, loss=criterion, metrics=[Accuracy()]) ###Output _____no_output_____ ###Markdown Next, fit and evaluate using the Estimator. ###Code from zoo.orca.learn.trigger import EveryEpoch est.fit(data=train_loader, epochs=1, validation_data=test_loader, checkpoint_trigger=EveryEpoch()) ###Output _____no_output_____ ###Markdown Finally, evaluate using the Estimator. ###Code result = est.evaluate(data=test_loader) for r in result: print(str(r)) ###Output _____no_output_____ ###Markdown The accuracy of this model has reached 98%. 
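Note that the evaluation loop above prints only the metric names (the keys of the result dictionary). To also see the metric values, index into `result`, as the other variant of this notebook does: ###Code
# Print metric name/value pairs from the evaluation result.
for r in result:
    print(r, ":", result[r])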
###Code # stop orca context when program finishes stop_orca_context() ###Output _____no_output_____ ###Markdown ![image.png](data:image/png;base64,/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAUDBAQEAwUEBAQFBQUGBwwIBwcHBw8LCwkMEQ8SEhEPERETFhwXExQaFRERGCEYGh0dHx8fExciJCIeJBweHx7/2wBDAQUFBQcGBw4ICA4eFBEUHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh7/wAARCABNAI0DASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD7LrzPT/i1p958WpvAy2O2NZHgjvjPw8yrkptxxyGXOeorrfiDr0XhnwbqmuyEFrW3Zogf4pDwg/FiK+WW8OajpHw10T4lwM51E6w0zuTyU3fu2P1dG/77r2crwNLEQlKr192P+J6/16niZpj6uHnGNLp70v8ACnY+gP2hfiafhR4AXxUNGGr5vYrX7P8AafJ++GO7dtbpt6Y710Hwx8baL8QPBVh4q0Gbfa3aZaNj88Eg+/G47Mp4/IjgivC/25dVt9b/AGY9P1i0IMF5qVnOnPQMkhx+HT8K84+Gt/q/7OmseF9du5bm7+HXjfTrSa6Yjd9humiUs3Hdck/7UeRyUrx3FxbT3R7UZKSTWzPozQvjRp+q/HrVPhPHol3HeafE8j3rTKY3Coj8L1/jH5VN+0Z8XLP4Q+DrbWpdN/tW7vLsW1tZ/aPJ3/KWdi21sBQB26kV4d8Npobj/goV4puYJUlhlsZHjkRgyupt4CGBHUEHOaZ8W7WD42/te2Pw8lkaTQPDVhMLwoxwJSm52/B2hT/gJpDPqD4aeLbLxx4C0bxZp6eXBqdqk3lltxifo6E9yrAr+Fcd8IvjNp/xD8beKfDFpol3Yy+HpWilmlmVlmIlePIA5HKZ5ryv9gjxDeadbeKvhRrRKaj4fv3lijbqEL7JVHsJFB/7aVmfsV/8l7+Lv/X6/wD6VzUAeufDf456d4p+Kmr/AA41Pw/e+H9d00SYS5nR1nMZG4IV/wBkhx6rk1j+Pf2kNM0D4jap4J0TwnqniW90q2ee9ks5kVY/LQvIvIJJUYB/2jt61xX7bXhTUPC+r6J8cvCMyWesaTcRW962B+8BJWJyP4sZMbDurDsK3P2Gvh22j+Crj4i60/2nX/FJM4mc7nS2LEjJ/vO2Xb/gPpQBiN+2ZpiXq2T/AA38RrdPysBlQSN9F25PQ16T4d+OVpq0XhJpPC+p2UniMkJHPIoa3xcND8wIyc7d3HY15N49z/w8U8Jcn/jyj7/9O9xXffG8Z+PfgEHPL24/8ma78uw8K9ZxntZv7k2efmVedCipw3ul97Ol+O/x48I/CdYbTUkuNT1q4TzINNtSN+zON7seEUkEDqT2B5rkPhB+0u/jX4g6d4O1f4fap4duNUEjWc0s+9HCIznIZEOMKeRnnFec/s8WFr4+/bA8f+JvE0aXlzo885sYZhuETLP5MbYP9xFwPQkHrX2HdWFndTW01xbQzS2snmwO6BmifBBZSeVOCRx2JrgPQPn/AMdftQ22m+Pb3wn4K8Cax40n01mW+msnIVChw+wKjlgp4LHAyOM9a9b+EHjyy+I/ge18U2Gm6hp0M7vH5N7HtcMh2tgjhlzkbh6HoQRXyVc6b8Vf2Z/iN4k8T6X4dTxF4S1SUyT3IUsvlb2dd7L80LruIJIKnPfjH1N8C/iXoHxR8FLr+hQyWnlymC7s5cb7eYAMVyOGBDAhh1z2OQADvaKKKAPF/wBpaHxDrseieE9D0u+uI7m4E11PHAzRJztQMwGAASzHPoKgv/2edCXSp0tNb1p7pYW8lZJU8oyAHbkbemcd69b8Wa7Y+GfDOpeIdT837FptrJdT+Um5tiAk4Hc4HSvD/wDhsH4Qf89Nf/8ABd/9lXpU80r0aUKVF8qV/m/M8yplVCtVlUqrmb/BeR5n8T9H8a6v+y5N4RHhXXZr/TtegeCBLCVnaBlkJ2gDJCtuyR03CvoTRPAmneL/ANnXQvBfivT5Ujl0G0hmjkTbNbSrCuGAPKujD9CDxmuH/wCGwfhB/wA9Nf8A/Bd/9nR/w2D8IP8Anpr/AP4Lv/s65MTX9vVlVta+tjrwtD6vRjSve3U8V+BXgrxj8JP2gtefVtF1PUxo+h3rWs9vaySJfKqKYVjIByWAAC9RyO1aXwJ/ZsvfiFpus+MfiVd+J/D+rX2oyFIYlFvLID8zyOJEJwXYgdPu19g+BvEll4v8K2HiTTre9gsr+LzbdbuLypGQk7W25OARyPUEGsP4mfFfwH8OrcP4r8QW1nO67orRMy3Eg9RGuWx7nA96wOg+aLD4Xa98Df2m/C+p+E7HxH4g8NahGIb+7+ztO0SysY5RK0agAKdkgyOg9qwPh5rXxM+E/wAXPH2r6f8ACTxF4gh1jUZ1R1tZ40CrcSMHVhGwYENXo+qftseB4rpo9O8K+IbuJTgSSNFFu9wNzfrW/wCC/wBr74Wa3cpa6sNW8PSM20SXsAeHP+/GWx9SAKAD9omTxP8AED9kz7XH4T1O21vUHtJpNIjgklnhInGVK7Q3AGTwK9I/Zzsb3Tfgb4PsNRtJ7O7g0uJJoJ4ykkbDOQynkH2Ndpo+p6frGmw6lpV9bX1lOu+G4t5RJHIPUMODVugD5a8beGvEU/7efhjxDBoOqS6PDaRrLfpaubdD5E4wZMbRyQOvcV23xi0fV7741+Cb+z0u9uLW3aHz54oGZIsXGTuYDA455r2+vLfih8fvhl8PbmSx1nXRdanGcPYaennzIfRsEKh9mYGunCYl4apzpX0a+9WOXF4VYmnyN21T+53PGvjF8P8A4h/C7403Pxi+FmmPrNlqJZtV02KM
K/hbwtpk1naXI2SWWixO8sintJMeQvrjaMda+3J/hV8NJ7n7TN8P/AAs82c7zpMOc/wDfNdNpGk6ZpFr9l0rTrOwt/wDnlbQLEn5KAKAPm39lX9mdfBF5B4x8ceRc+IYxus7KMh4rEn+Mt0eX0xwvYk4I+nqKKACiiigD/9k=)--- Copyright 2018 Analytics Zoo Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ###Output _____no_output_____ ###Markdown **Environment Preparation** **Install Java 8**Run the cell on the **Google Colab** to install jdk 1.8.**Note:** if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up in your computer). ###Code # Install jdk8 !apt-get install openjdk-8-jdk-headless -qq > /dev/null import os # Set environment variable JAVA_HOME. os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" !update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java !java -version ###Output _____no_output_____ ###Markdown **Install Analytics Zoo** [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) is needed to prepare the Python environment for running this example. **Note**: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the [install guide](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) for more details. ###Code import sys # Get current python version version_info = sys.version_info python_version = f"{version_info.major}.{version_info.minor}.{version_info.micro}" # Install Miniconda !wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh !chmod +x Miniconda3-4.5.4-Linux-x86_64.sh !./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local # Update Conda !conda install --channel defaults conda python=$python_version --yes !conda update --channel defaults --all --yes # Append to the sys.path _ = (sys.path .append(f"/usr/local/lib/python{version_info.major}.{version_info.minor}/site-packages")) os.environ['PYTHONHOME']="/usr/local" ###Output _____no_output_____ ###Markdown You can install the latest pre-release version using `pip install --pre --upgrade analytics-zoo`. ###Code # Install latest pre-release version of Analytics Zoo # Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies. !pip install --pre --upgrade analytics-zoo # Install python dependencies !pip install torch==1.7.1 torchvision==0.8.2 !pip install six cloudpickle !pip install jep==3.9.0 ###Output _____no_output_____ ###Markdown **Distributed PyTorch using Orca APIs**In this guide we will describe how to scale out PyTorch programs using Orca in 4 simple steps. ###Code # import necesary libraries and modules from __future__ import print_function import os import argparse from zoo.orca import init_orca_context, stop_orca_context from zoo.orca import OrcaContext ###Output _____no_output_____ ###Markdown **Step 1: Init Orca Context** ###Code # recommended to set it to True when running Analytics Zoo in Jupyter notebook. 
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook). cluster_mode = "local" if cluster_mode == "local": init_orca_context(cores=1, memory="2g") # run in local mode elif cluster_mode == "k8s": init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4) # run on K8s cluster elif cluster_mode == "yarn": init_orca_context( cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g", driver_memory="10g", driver_cores=1, conf={"spark.rpc.message.maxSize": "1024", "spark.task.maxFailures": "1", "spark.driver.extraJavaOptions": "-Dbigdl.failure.retryTimes=1"}) # run on Hadoop YARN cluster ###Output _____no_output_____ ###Markdown This is the only place where you need to specify local or distributed mode. View [Orca Context](https://analytics-zoo.readthedocs.io/en/latest/doc/Orca/Overview/orca-context.html) for more details.**Note**: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster. **Step 2: Define the Model**You may define your model, loss and optimizer in the same way as in any standard (single node) PyTorch program. ###Code import torch import torch.nn as nn import torch.nn.functional as F class LeNet(nn.Module): def __init__(self): super(LeNet, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5, 1) self.conv2 = nn.Conv2d(20, 50, 5, 1) self.fc1 = nn.Linear(4*4*50, 500) self.fc2 = nn.Linear(500, 10) def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = x.view(-1, 4*4*50) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1) model = LeNet() model.train() criterion = nn.NLLLoss() lr = 0.001 adam = torch.optim.Adam(model.parameters(), lr) ###Output _____no_output_____ ###Markdown **Step 3: Define Train Dataset**You can define the dataset using standard [Pytorch DataLoader](https://pytorch.org/docs/stable/data.html). ###Code !wget www.di.ens.fr/~lelarge/MNIST.tar.gz !tar -zxvf MNIST.tar.gz import torch from torchvision import datasets, transforms torch.manual_seed(0) dir='./' batch_size=320 test_batch_size=320 train_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size= batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader( datasets.MNIST(dir, train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=test_batch_size, shuffle=False) ###Output _____no_output_____ ###Markdown **Step 4: Fit with Orca Estimator** First, Create an Estimator. ###Code from zoo.orca.learn.pytorch import Estimator from zoo.orca.learn.metrics import Accuracy est = Estimator.from_torch(model=model, optimizer=adam, loss=criterion, metrics=[Accuracy()]) ###Output _____no_output_____ ###Markdown Next, fit and evaluate using the Estimator. ###Code from zoo.orca.learn.trigger import EveryEpoch est.fit(data=train_loader, epochs=1, validation_data=test_loader, checkpoint_trigger=EveryEpoch()) ###Output _____no_output_____ ###Markdown Finally, evaluate using the Estimator. ###Code result = est.evaluate(data=test_loader) for r in result: print(r, ":", result[r]) ###Output _____no_output_____ ###Markdown The accuracy of this model has reached 98%. ###Code # stop orca context when program finishes stop_orca_context() ###Output _____no_output_____
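When the same workflow runs as a script rather than cell by cell, a common pattern is to guard training with `try/finally` so the Orca context is always stopped, even if fitting fails. This is an added sketch built from the calls shown above, not part of the original notebook: ###Code
# Ensure the Orca context is torn down even when training or evaluation raises.
try:
    est.fit(data=train_loader, epochs=1, validation_data=test_loader,
            checkpoint_trigger=EveryEpoch())
    result = est.evaluate(data=test_loader)
finally:
    stop_orca_context()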
C4/W4/ungraded_labs/C4_W4_Lab_2_Sunspots.ipynb
###Markdown ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown **Note:** This notebook can run using TensorFlow 2.5.0 ###Code #!pip install tensorflow==2.5.0 import tensorflow as tf print(tf.__version__) import numpy as np import matplotlib.pyplot as plt def plot_series(time, series, format="-", start=0, end=None): plt.plot(time[start:end], series[start:end], format) plt.xlabel("Time") plt.ylabel("Value") plt.grid(True) # Sunspots.csv !gdown --id 1bLnqPgwoSh6rHz_DKDdDeQyAyl8_nqT5 import csv time_step = [] sunspots = [] with open('./Sunspots.csv') as csvfile: reader = csv.reader(csvfile, delimiter=',') next(reader) for row in reader: sunspots.append(float(row[2])) time_step.append(int(row[0])) series = np.array(sunspots) time = np.array(time_step) plt.figure(figsize=(10, 6)) plot_series(time, series) series = np.array(sunspots) time = np.array(time_step) plt.figure(figsize=(10, 6)) plot_series(time, series) split_time = 3000 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] window_size = 30 batch_size = 32 shuffle_buffer_size = 1000 def windowed_dataset(series, window_size, batch_size, shuffle_buffer): series = tf.expand_dims(series, axis=-1) ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size + 1)) ds = ds.shuffle(shuffle_buffer) ds = ds.map(lambda w: (w[:-1], w[1:])) return ds.batch(batch_size).prefetch(1) def model_forecast(model, series, window_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size)) ds = ds.batch(32).prefetch(1) forecast = model.predict(ds) return forecast tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) window_size = 64 batch_size = 256 train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) print(train_set) print(x_train.shape) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.Dense(30, activation="relu"), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 400) ]) lr_schedule = tf.keras.callbacks.LearningRateScheduler( lambda epoch: 1e-8 * 10**(epoch / 20)) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set, epochs=100, callbacks=[lr_schedule]) plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-8, 1e-4, 0, 60]) tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) train_set = windowed_dataset(x_train, window_size=60, batch_size=100, 
shuffle_buffer=shuffle_buffer_size) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=60, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(60, return_sequences=True), tf.keras.layers.LSTM(60, return_sequences=True), tf.keras.layers.Dense(30, activation="relu"), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 400) ]) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set,epochs=500) rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size) rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0] plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, rnn_forecast) tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy() import matplotlib.image as mpimg import matplotlib.pyplot as plt #----------------------------------------------------------- # Retrieve a list of list results on training and test data # sets for each training epoch #----------------------------------------------------------- loss=history.history['loss'] epochs=range(len(loss)) # Get number of epochs #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(epochs, loss, 'r') plt.title('Training loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss"]) plt.figure() zoomed_loss = loss[200:] zoomed_epochs = range(200,500) #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(zoomed_epochs, zoomed_loss, 'r') plt.title('Training loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss"]) plt.figure() print(rnn_forecast) ###Output _____no_output_____ ###Markdown ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
###Output _____no_output_____ ###Markdown **Note:** This notebook can run using TensorFlow 2.5.0 ###Code #!pip install tensorflow==2.5.0 import tensorflow as tf print(tf.__version__) import numpy as np import matplotlib.pyplot as plt def plot_series(time, series, format="-", start=0, end=None): plt.plot(time[start:end], series[start:end], format) plt.xlabel("Time") plt.ylabel("Value") plt.grid(True) # Sunspots.csv !gdown --id 1bLnqPgwoSh6rHz_DKDdDeQyAyl8_nqT5 import csv time_step = [] sunspots = [] with open('./Sunspots.csv') as csvfile: reader = csv.reader(csvfile, delimiter=',') next(reader) for row in reader: sunspots.append(float(row[2])) time_step.append(int(row[0])) series = np.array(sunspots) time = np.array(time_step) plt.figure(figsize=(10, 6)) plot_series(time, series) series = np.array(sunspots) time = np.array(time_step) plt.figure(figsize=(10, 6)) plot_series(time, series) split_time = 3000 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] window_size = 30 batch_size = 32 shuffle_buffer_size = 1000 def windowed_dataset(series, window_size, batch_size, shuffle_buffer): series = tf.expand_dims(series, axis=-1) ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size + 1)) ds = ds.shuffle(shuffle_buffer) ds = ds.map(lambda w: (w[:-1], w[1:])) return ds.batch(batch_size).prefetch(1) def model_forecast(model, series, window_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size)) ds = ds.batch(32).prefetch(1) forecast = model.predict(ds) return forecast tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) window_size = 64 batch_size = 256 train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) print(train_set) print(x_train.shape) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.Dense(30, activation="relu"), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 400) ]) lr_schedule = tf.keras.callbacks.LearningRateScheduler( lambda epoch: 1e-8 * 10**(epoch / 20)) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set, epochs=100, callbacks=[lr_schedule]) plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-8, 1e-4, 0, 60]) tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) train_set = windowed_dataset(x_train, window_size=60, batch_size=100, shuffle_buffer=shuffle_buffer_size) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=60, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(60, return_sequences=True), tf.keras.layers.LSTM(60, return_sequences=True), tf.keras.layers.Dense(30, activation="relu"), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 400) ]) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), 
optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set,epochs=500) rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size) rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0] plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, rnn_forecast) tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy() import matplotlib.image as mpimg import matplotlib.pyplot as plt #----------------------------------------------------------- # Retrieve a list of list results on training and test data # sets for each training epoch #----------------------------------------------------------- loss=history.history['loss'] epochs=range(len(loss)) # Get number of epochs #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(epochs, loss, 'r') plt.title('Training loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss"]) plt.figure() zoomed_loss = loss[200:] zoomed_epochs = range(200,500) #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(zoomed_epochs, zoomed_loss, 'r') plt.title('Training loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss"]) plt.figure() print(rnn_forecast) ###Output [132.66046 99.54072 107.6558 128.91125 111.03563 151.47214 192.08429 165.69238 199.69415 161.64348 166.61966 179.33974 167.90027 173.98668 196.63515 176.85118 185.33414 193.11104 180.59848 189.79047 186.12187 180.80724 170.65663 174.82767 150.25401 138.95514 138.00024 158.76915 149.82372 127.79305 181.49178 124.95189 131.50029 205.10687 174.82045 185.96513 184.54234 160.99121 149.51663 156.87787 166.40067 163.81752 149.0958 152.98271 173.51605 152.72585 149.7078 142.00697 137.06386 138.05042 107.834885 108.83899 102.61352 94.028885 109.75278 104.75885 97.08565 84.174255 81.7241 76.98352 71.95897 72.32902 75.22252 84.76587 69.47644 70.38142 70.93608 77.57038 75.44669 53.302944 62.144325 71.634125 45.290775 43.65843 46.398064 36.2901 31.10732 42.0205 39.754414 43.191273 50.96356 35.542763 28.48089 25.564178 47.18436 18.156162 21.4515 14.637497 15.518425 20.519058 19.620472 21.168514 21.303743 24.879295 25.060114 28.978943 24.799255 25.767979 21.155775 17.050577 13.787204 11.825368 14.07182 12.356406 12.451792 11.536841 9.933403 8.538726 11.082749 6.6992626 4.6961994 9.47824 8.854156 6.6682043 8.446341 9.420887 5.5685596 6.057857 7.718268 10.059232 9.14131 7.3585467 6.5061717 4.962775 4.1813583 4.2084246 4.9833593 5.4762545 3.4752698 3.579986 4.78054 5.918063 8.720682 10.680115 20.227076 22.192997 28.245815 31.129383 26.44602 28.62373 39.983116 48.429375 54.515232 60.70169 57.550037 52.68209 52.331573 64.253494 74.3048 75.572 66.50009 76.33621 84.25371 104.17197 111.86852 121.34161 111.374214 92.782745 67.91977 92.46443 104.32341 94.172844 104.00111 94.21547 90.10542 79.9086 76.56769 78.995476 66.222404 91.15841 79.9786 82.82999 102.01068 87.37413 94.52303 82.1846 94.33379 62.187622 107.998085 109.75327 100.91765 106.88801 131.58705 115.29954 94.56478 99.10185 98.42014 97.23749 107.7204 121.352455 107.67292 101.28691 125.467155 98.88211 95.122986 76.82917 84.61932 90.69195 74.824196 74.436516 71.060036 81.3919 69.99475 63.121105 59.40194 58.415096 54.80584 51.866234 48.10602 52.722145 43.720585 49.830036 45.221405 32.978527 24.581633 26.442486 23.264582 21.357033 22.370384 20.69752 22.859715 20.095478 19.015182 18.3956 18.24233 
19.720163 9.578042 10.554679 9.395352 8.893247 10.756187 8.756211 9.8517475 10.250621 11.615843 ] ###Markdown ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown **Note:** This notebook can run using TensorFlow 2.5.0 ###Code #!pip install tensorflow==2.5.0 import tensorflow as tf print(tf.__version__) import numpy as np import matplotlib.pyplot as plt def plot_series(time, series, format="-", start=0, end=None): plt.plot(time[start:end], series[start:end], format) plt.xlabel("Time") plt.ylabel("Value") plt.grid(True) # Sunspots.csv !gdown --id 1bLnqPgwoSh6rHz_DKDdDeQyAyl8_nqT5 import csv time_step = [] sunspots = [] with open('./Sunspots.csv') as csvfile: reader = csv.reader(csvfile, delimiter=',') next(reader) for row in reader: sunspots.append(float(row[2])) time_step.append(int(row[0])) series = np.array(sunspots) time = np.array(time_step) plt.figure(figsize=(10, 6)) plot_series(time, series) series = np.array(sunspots) time = np.array(time_step) plt.figure(figsize=(10, 6)) plot_series(time, series) split_time = 3000 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] window_size = 30 batch_size = 32 shuffle_buffer_size = 1000 def windowed_dataset(series, window_size, batch_size, shuffle_buffer): series = tf.expand_dims(series, axis=-1) ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size + 1)) ds = ds.shuffle(shuffle_buffer) ds = ds.map(lambda w: (w[:-1], w[1:])) return ds.batch(batch_size).prefetch(1) def model_forecast(model, series, window_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size)) ds = ds.batch(32).prefetch(1) forecast = model.predict(ds) return forecast tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) window_size = 64 batch_size = 256 train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) print(train_set) print(x_train.shape) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.Dense(30, activation="relu"), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 400) ]) lr_schedule = tf.keras.callbacks.LearningRateScheduler( lambda epoch: 1e-8 * 10**(epoch / 20)) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set, epochs=100, callbacks=[lr_schedule]) plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-8, 1e-4, 0, 60]) tf.keras.backend.clear_session() tf.random.set_seed(51) 
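# Fixing both the TensorFlow and NumPy seeds keeps the dataset shuffling and the
# weight initialization reproducible across reruns of this cell.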
np.random.seed(51) train_set = windowed_dataset(x_train, window_size=60, batch_size=100, shuffle_buffer=shuffle_buffer_size) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=60, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(60, return_sequences=True), tf.keras.layers.LSTM(60, return_sequences=True), tf.keras.layers.Dense(30, activation="relu"), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 400) ]) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set,epochs=500) rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size) rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0] plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, rnn_forecast) tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy() import matplotlib.image as mpimg import matplotlib.pyplot as plt #----------------------------------------------------------- # Retrieve a list of list results on training and test data # sets for each training epoch #----------------------------------------------------------- loss=history.history['loss'] epochs=range(len(loss)) # Get number of epochs #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(epochs, loss, 'r') plt.title('Training loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss"]) plt.figure() zoomed_loss = loss[200:] zoomed_epochs = range(200,500) #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(zoomed_epochs, zoomed_loss, 'r') plt.title('Training loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss"]) plt.figure() print(rnn_forecast) ###Output _____no_output_____ ###Markdown ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
###Output _____no_output_____ ###Markdown **Note:** This notebook can run using TensorFlow 2.5.0 ###Code #!pip install tensorflow==2.5.0 import tensorflow as tf print(tf.__version__) import numpy as np import matplotlib.pyplot as plt def plot_series(time, series, format="-", start=0, end=None): plt.plot(time[start:end], series[start:end], format) plt.xlabel("Time") plt.ylabel("Value") plt.grid(True) # Sunspots.csv !gdown --id 1bLnqPgwoSh6rHz_DKDdDeQyAyl8_nqT5 import csv time_step = [] sunspots = [] with open('./Sunspots.csv') as csvfile: reader = csv.reader(csvfile, delimiter=',') next(reader) for row in reader: sunspots.append(float(row[2])) time_step.append(int(row[0])) series = np.array(sunspots) time = np.array(time_step) plt.figure(figsize=(10, 6)) plot_series(time, series) series = np.array(sunspots) time = np.array(time_step) plt.figure(figsize=(10, 6)) plot_series(time, series) split_time = 3000 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] window_size = 30 batch_size = 32 shuffle_buffer_size = 1000 def windowed_dataset(series, window_size, batch_size, shuffle_buffer): series = tf.expand_dims(series, axis=-1) ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size + 1)) ds = ds.shuffle(shuffle_buffer) ds = ds.map(lambda w: (w[:-1], w[1:])) return ds.batch(batch_size).prefetch(1) def model_forecast(model, series, window_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size)) ds = ds.batch(32).prefetch(1) forecast = model.predict(ds) return forecast tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) window_size = 64 batch_size = 256 train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) print(train_set) print(x_train.shape) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.Dense(30, activation="relu"), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 400) ]) lr_schedule = tf.keras.callbacks.LearningRateScheduler( lambda epoch: 1e-8 * 10**(epoch / 20)) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set, epochs=100, callbacks=[lr_schedule]) plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-8, 1e-4, 0, 60]) tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) train_set = windowed_dataset(x_train, window_size=60, batch_size=100, shuffle_buffer=shuffle_buffer_size) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=60, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(60, return_sequences=True), tf.keras.layers.LSTM(60, return_sequences=True), tf.keras.layers.Dense(30, activation="relu"), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 400) ]) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), 
optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set,epochs=500) rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size) rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0] plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, rnn_forecast) tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy() import matplotlib.image as mpimg import matplotlib.pyplot as plt #----------------------------------------------------------- # Retrieve a list of list results on training and test data # sets for each training epoch #----------------------------------------------------------- loss=history.history['loss'] epochs=range(len(loss)) # Get number of epochs #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(epochs, loss, 'r') plt.title('Training loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss"]) plt.figure() zoomed_loss = loss[200:] zoomed_epochs = range(200,500) #------------------------------------------------ # Plot training and validation loss per epoch #------------------------------------------------ plt.plot(zoomed_epochs, zoomed_loss, 'r') plt.title('Training loss') plt.xlabel("Epochs") plt.ylabel("Loss") plt.legend(["Loss"]) plt.figure() print(rnn_forecast) ###Output _____no_output_____
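To make the windowing transformation used above concrete, here is a tiny added illustration (with arbitrary numbers) of how `windowed_dataset` turns a series into batches of input windows paired with one-step-shifted targets: ###Code
# Each dataset element is (window of values, the same window shifted by one step).
demo_series = np.arange(10, dtype="float32")
demo_ds = windowed_dataset(demo_series, window_size=4, batch_size=2, shuffle_buffer=10)
for xb, yb in demo_ds.take(1):
    print("inputs:\n", xb.numpy().squeeze(-1))
    print("targets:\n", yb.numpy().squeeze(-1))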
docs/examples/non-ipympl-backends.ipynb
###Markdown non ipympl backendsevery function in this library should work with any interactive matplotlib backend. Although the functions from `mpl_interactions.jupyter` assume that you are in a notebook context in order to display the sliders. ###Code # NBVAL_SKIP %matplotlib qt5 import matplotlib.pyplot as plt import numpy as np from mpl_interactions import interactive_plot ###Output _____no_output_____ ###Markdown The below cell will display the sliders in the notebook but a separate matplotlib window will pop up. An important caveat is that the performance of `interactive_plot` seems to be significantly worse with a non-ipympl backend. ###Code x = np.linspace(0, np.pi, 100) τ = np.linspace(1, 10, 100) β = np.linspace(1, 10, 100) def f(x, τ, β): return np.sin(x * τ) * x**β fig, ax = plt.subplots() controls = interactive_plot(x, f, τ=τ, β=β) ###Output _____no_output_____ ###Markdown generic submodule`mpl_interactions.generic` contains functions that will work in any matplotlib context. So for example the below code will work in the notebook or from a script ###Code from mpl_interactions.generic import heatmap_slicer x = np.linspace(0, np.pi, 100) y = np.linspace(0, 10, 200) X, Y = np.meshgrid(x, y) data1 = np.sin(X) + np.exp(np.cos(Y)) data2 = np.cos(X) + np.exp(np.sin(Y)) fig, axes = heatmap_slicer( x, y, (data1, data2), slices="both", heatmap_names=("dataset 1", "dataset 2"), labels=("Some wild X variable", "Y axis"), interaction_type="move", ) ###Output _____no_output_____ ###Markdown non ipympl backendsevery function in this library should work with any interactive matplotlib backend. Although the functions from `mpl_interactions.jupyter` assume that you are in a notebook context in order to display the sliders. ###Code # NBVAL_SKIP %matplotlib qt5 import matplotlib.pyplot as plt import numpy as np from mpl_interactions import interactive_plot ###Output _____no_output_____ ###Markdown The below cell will display the sliders in the notebook but a separate matplotlib window will pop up. An important caveat is that the performance of `interactive_plot` seems to be significantly worse with a non-ipympl backend. ###Code x = np.linspace(0, np.pi, 100) τ = np.linspace(1, 10, 100) β = np.linspace(1, 10, 100) def f(x, τ, β): return np.sin(x * τ) * x ** β fig, ax = plt.subplots() controls = interactive_plot(x, f, τ=τ, β=β) ###Output _____no_output_____ ###Markdown generic submodule`mpl_interactions.generic` contains functions that will work in any matplotlib context. So for example the below code will work in the notebook or from a script ###Code from mpl_interactions.generic import heatmap_slicer x = np.linspace(0, np.pi, 100) y = np.linspace(0, 10, 200) X, Y = np.meshgrid(x, y) data1 = np.sin(X) + np.exp(np.cos(Y)) data2 = np.cos(X) + np.exp(np.sin(Y)) fig, axes = heatmap_slicer( x, y, (data1, data2), slices="both", heatmap_names=("dataset 1", "dataset 2"), labels=("Some wild X variable", "Y axis"), interaction_type="move", ) ###Output _____no_output_____
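For comparison, the slider wiring that `interactive_plot` automates looks roughly like the following when done by hand with plain Matplotlib. This sketch is an illustrative addition using only `matplotlib.widgets.Slider`, not part of `mpl_interactions`: ###Code
# Manual single-parameter interactive plot with a Matplotlib Slider widget.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

x = np.linspace(0, np.pi, 100)
fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)            # leave room for the slider
(line,) = ax.plot(x, np.sin(x * 1.0))
slider_ax = fig.add_axes([0.2, 0.1, 0.6, 0.03])
tau_slider = Slider(slider_ax, "tau", 1.0, 10.0, valinit=1.0)

def update(val):
    line.set_ydata(np.sin(x * tau_slider.val))
    fig.canvas.draw_idle()

tau_slider.on_changed(update)
plt.show()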
Chapter 2/Exercises/2.34_Ace_of_Clubs.ipynb
###Markdown 2.34 Ace of clubs wins (mean and STD) Consider the following card game with a well-shuffled deck of cards. If you draw a red card, you win nothing. If you get a spade, you win 5 dollars. For any club, you win 10 dollars plus an extra 20 dollars for the ace of clubs. ###Code import math """Works on a list of tuples (x, P(x))""" # Mean function def mean(l): s=0 for i in l: s = s +(i[0] * i[1]) return s # Standard Deviation def std(l): s = 0 mu = mean(l) for i in l: s = s + (((i[0] - mu)**2) * i[1] ) return math.sqrt(s) ###Output _____no_output_____ ###Markdown **Question:** Compute a statistical model and find the mean and the standard deviation. * **Mean:** ###Code """Setting up variables""" # list of tuples x1 = (0,26/52) x2 = (5,13/52) x3 = (10,12/52) x4 = (30,1/52) t = [x1,x2,x3,x4] # computing the mean ex = mean(t) print(round(ex,4)) ###Output 4.1346 ###Markdown * **Standard Deviation** ###Code # computing the standard deviation s = std(t) print(round(s,4)) ###Output 5.435
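The analytical values can be cross-checked with a quick Monte Carlo simulation of the draw. This is an added sketch (standard library only), not part of the original exercise: ###Code
import random

# 52 equally likely cards: 26 red -> $0, 13 spades -> $5, 12 plain clubs -> $10, ace of clubs -> $30.
deck = [0]*26 + [5]*13 + [10]*12 + [30]
random.seed(0)
draws = random.choices(deck, k=200_000)
m = sum(draws) / len(draws)
sd = (sum((d - m)**2 for d in draws) / len(draws)) ** 0.5
print(round(m, 4), round(sd, 4))   # should land close to 4.1346 and 5.435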
homepage/content/research/.ipynb_checkpoints/spectraldns-checkpoint.ipynb
###Markdown SpectralDNS **Rayleigh-Bénard** convection computed with spectral accuracy using a [spectralDNS solver](https://github.com/spectralDNS/spectralDNS/blob/master/demo/RayleighBenard.py). The [spectralDNS](https://github.com/spectralDNS) project revolves around implementing high-performance flow solvers in [Python](https://www.python.org), which is a modern and very high-level programming language. The project is supported through several grants from [King Abdullah University of Science and Technology](https://www.hpc.kaust.edu.sa), granting access to some of the world's largest supercomputers. The work has been presented at several conferences and as invited talks: * [11th International Conference on Scientific Computing and Applications](http://tianyuan.xmu.edu.cn/activities/19-20/ICSCA2019/index.html) * [International Conference on Computational Science and Engineering](https://cseconf2017.simula.no) * [I3MS Seminar Series of the Institute for Modeling and Simulation, 6 Nov 2017, RWTH Aachen University](https://www.aices.rwth-aachen.de/en/media-and-seminars/events/mortensen-seminar) * [Predictive Complex Computational Fluid Dynamics, KAUST, May 2017](https://pccfd.kaust.edu.sa/speaker?si=4) * [MekIT'17 National Conference on Computational Mechanics, Trondheim, May 2017](http://arxiv.org/abs/1708.03188) * [EuroScipy, Cambridge, August 2015](https://www.euroscipy.org/2015/schedule/presentation/6/) The *spectralDNS* project on GitHub contains several repositories, each representing a smaller part of the overall project. The most important are presented below. spectralDNS **Strong scaling** of the triply periodic Navier-Stokes solver on the Shaheen II supercomputer at KAUST. The [spectralDNS](https://github.com/spectralDNS/spectralDNS) repository is home to several different pseudo-spectral Navier-Stokes and MagnetoHydroDynamics solvers. Most solvers are for triply periodic domains. The simplest possible Navier-Stokes solver is described by {% cite Mortensen2016 %}, who show that a highly efficient solver can be created using no more than 100 lines of code, using nothing more than standard tools like *Numpy* and *MPI for Python*. The DNS solver has been tested for a transitional Taylor-Green vortex using a computational box of size $2048^3$. Accuracy is, well, spectral, and in benchmark tests on the Shaheen II supercomputer at KAUST it has been found to scale well up to 64,000 cores. A state-of-the-art spectral channel flow solver that makes extensive use of *shenfun* has been described by {% cite mortensen2017spectral %}. Turbulent flow at $Re_{\tau}=2000$ is shown in the movie below. With colleagues at the Extreme Computing Research Center (ECRC), King Abdullah University of Science and Technology (KAUST), we have been using [spectralDNS](https://github.com/spectralDNS) to investigate time integration of Fourier pseudospectral Direct Numerical Simulations {% cite ketcheson2020 %}. We investigate the use of higher-order Runge-Kutta pairs and automatic step size control based on local error estimation. We find that the fifth-order accurate Runge-Kutta pair of Bogacki and Shampine gives much greater accuracy at a significantly reduced computational cost. Shenfun With the [shenfun](https://github.com/spectralDNS/shenfun) Python module {% cite shenfun %} an effort is made towards automating the implementation of the spectral Galerkin method for simple tensor product domains, consisting of non-periodic and periodic directions. 
The user interface to *shenfun* is intentionally made very similar to [FEniCS](https://fenicsproject.org). Partial Differential Equations are represented through weak variational forms and solved using efficient direct solvers where available. MPI decomposition is achieved through the [mpi4py-fft](https://bitbucket.org/mpi4py/mpi4py-fft) module, and all developed solvers may, with no additional effort, be run on supercomputers using thousands of processors. An introduction to *shenfun* is given in {% cite shenfun %}, on [readthedocs](https://shenfun.readthedocs.io) and in the recent paper {% cite mortensen_joss %}. An introduction to *mpi4py-fft* is given [here](https://mpi4py-fft.readthedocs.io/en/latest/) and in {% cite mpi4py-fft_joss jpdc_fft %}. Further documentation is found at [![Documentation](https://readthedocs.org/projects/shenfun/badge/?version=latest)](https://shenfun.readthedocs.io/en/latest/) Try shenfun in the computational cell below. Modify it to your own liking and run it interactively. ###Code from sympy import symbols, cos, sin, pi from shenfun import * import matplotlib.pyplot as plt %matplotlib inline x, y = symbols("x,y") C = Basis(20, 'Chebyshev') F = Basis(24, 'Fourier', dtype='d') T = TensorProductSpace(comm, (C, F)) u = project(sin(2*x)*cos(pi*y), T) X = T.local_mesh(True) plt.contourf(X[0], X[1], u.backward()) plt.colorbar() plt.show() ###Output _____no_output_____
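###Markdown As a small follow-up sketch (assuming the cell above has been run, so `u`, `X`, `x`, `y` and the sympy imports are still in scope), the spectral accuracy of the projection can be checked by comparing `u.backward()` with the exact function evaluated on the same mesh: ###Code
import numpy as np
from sympy import lambdify

# exact function that was projected onto the Chebyshev x Fourier basis above
ue = lambdify((x, y), sin(2*x)*cos(pi*y), 'numpy')

# pointwise error on the local mesh; for a smooth function this should be
# close to machine precision already at this modest resolution
error = np.max(np.abs(u.backward() - ue(X[0], X[1])))
print('max pointwise projection error:', error)
###Output _____no_output_____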
4_pretrained.ipynb
###Markdown ###Code from google.colab import drive drive.mount('/content/drive') %matplotlib inline !ls -l !cp ./drive/MyDrive/training_data.zip . !unzip training_data.zip import glob import numpy as np import matplotlib.pyplot as plt from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img IMG_DIM = (150, 150) train_files = glob.glob('training_data/*') train_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in train_files] train_imgs = np.array(train_imgs) train_labels = [fn.split('/')[1].split('.')[0].strip() for fn in train_files] validation_files = glob.glob('validation_data/*') validation_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in validation_files] validation_imgs = np.array(validation_imgs) validation_labels = [fn.split('/')[1].split('.')[0].strip() for fn in validation_files] print('Train dataset shape:', train_imgs.shape, '\tValidation dataset shape:', validation_imgs.shape) train_imgs_scaled = train_imgs.astype('float32') validation_imgs_scaled = validation_imgs.astype('float32') train_imgs_scaled /= 255 validation_imgs_scaled /= 255 batch_size = 50 num_classes = 2 epochs = 150 input_shape = (150, 150, 3) from sklearn.preprocessing import LabelEncoder le = LabelEncoder() le.fit(train_labels) # encode wine type labels train_labels_enc = le.transform(train_labels) validation_labels_enc = le.transform(validation_labels) print(train_labels[0:5], train_labels_enc[0:5]) from tensorflow.keras.applications import vgg16 from tensorflow.keras.models import Model import tensorflow.keras vgg = vgg16.VGG16(include_top=False, weights='imagenet', input_shape=input_shape) output = vgg.layers[-1].output output = tensorflow.keras.layers.Flatten()(output) vgg_model = Model(vgg.input, output) vgg_model.trainable = False for layer in vgg_model.layers: layer.trainable = False vgg_model.summary() import pandas as pd pd.set_option('max_colwidth', -1) layers = [(layer, layer.name, layer.trainable) for layer in vgg_model.layers] pd.DataFrame(layers, columns=['Layer Type', 'Layer Name', 'Layer Trainable']) print("Trainable layers:", vgg_model.trainable_weights) bottleneck_feature_example = vgg.predict(train_imgs_scaled[0:1]) print(bottleneck_feature_example.shape) plt.imshow(bottleneck_feature_example[0][:,:,0]) def get_bottleneck_features(model, input_imgs): features = model.predict(input_imgs, verbose=0) return features train_features_vgg = get_bottleneck_features(vgg_model, train_imgs_scaled) validation_features_vgg = get_bottleneck_features(vgg_model, validation_imgs_scaled) print('Train Bottleneck Features:', train_features_vgg.shape, '\tValidation Bottleneck Features:', validation_features_vgg.shape) from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, InputLayer from tensorflow.keras.models import Sequential from tensorflow.keras import optimizers input_shape = vgg_model.output_shape[1] model = Sequential() model.add(InputLayer(input_shape=(input_shape,))) model.add(Dense(512, activation='relu', input_dim=input_shape)) model.add(Dropout(0.3)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.3)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['accuracy']) model.summary() history = model.fit(x=train_features_vgg, y=train_labels_enc, validation_data=(validation_features_vgg, validation_labels_enc), batch_size=batch_size, epochs=epochs, verbose=1) f, (ax1, ax2) = plt.subplots(1, 2, 
figsize=(12, 4)) t = f.suptitle('Pre-trained CNN (Transfer Learning) Performance', fontsize=12) f.subplots_adjust(top=0.85, wspace=0.3) epoch_list = list(range(1,151)) ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy') ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy') ax1.set_xticks(np.arange(0, 151, 5)) ax1.set_ylabel('Accuracy Value') ax1.set_xlabel('Epoch') ax1.set_title('Accuracy') l1 = ax1.legend(loc="best") ax2.plot(epoch_list, history.history['loss'], label='Train Loss') ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss') ax2.set_xticks(np.arange(0, 151, 5)) ax2.set_ylabel('Loss Value') ax2.set_xlabel('Epoch') ax2.set_title('Loss') l2 = ax2.legend(loc="best") model.save('4-pretrained_cnn.h5') ###Output _____no_output_____ ###Markdown ###Code %matplotlib inline !unzip training_data.zip !unzip validation_data.zip import glob import numpy as np import matplotlib.pyplot as plt from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img IMG_DIM = (150, 150) train_files = glob.glob('training_data/*') train_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in train_files] train_imgs = np.array(train_imgs) train_labels = [fn.split('/')[1].split('.')[0].strip() for fn in train_files] validation_files = glob.glob('validation_data/*') validation_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in validation_files] validation_imgs = np.array(validation_imgs) validation_labels = [fn.split('/')[1].split('.')[0].strip() for fn in validation_files] print('Train dataset shape:', train_imgs.shape, '\tValidation dataset shape:', validation_imgs.shape) train_imgs_scaled = train_imgs.astype('float32') validation_imgs_scaled = validation_imgs.astype('float32') train_imgs_scaled /= 255 validation_imgs_scaled /= 255 batch_size = 50 num_classes = 2 epochs = 150 input_shape = (150, 150, 3) from sklearn.preprocessing import LabelEncoder le = LabelEncoder() le.fit(train_labels) # encode wine type labels train_labels_enc = le.transform(train_labels) validation_labels_enc = le.transform(validation_labels) print(train_labels[0:5], train_labels_enc[0:5]) from tensorflow.keras.applications import vgg16 from tensorflow.keras.models import Model import tensorflow.keras vgg = vgg16.VGG16(include_top=False, weights='imagenet', input_shape=input_shape) output = vgg.layers[-1].output output = tensorflow.keras.layers.Flatten()(output) vgg_model = Model(vgg.input, output) vgg_model.trainable = False for layer in vgg_model.layers: layer.trainable = False vgg_model.summary() import pandas as pd pd.set_option('max_colwidth', -1) layers = [(layer, layer.name, layer.trainable) for layer in vgg_model.layers] pd.DataFrame(layers, columns=['Layer Type', 'Layer Name', 'Layer Trainable']) print("Trainable layers:", vgg_model.trainable_weights) bottleneck_feature_example = vgg.predict(train_imgs_scaled[0:1]) print(bottleneck_feature_example.shape) plt.imshow(bottleneck_feature_example[0][:,:,0]) def get_bottleneck_features(model, input_imgs): features = model.predict(input_imgs, verbose=0) return features train_features_vgg = get_bottleneck_features(vgg_model, train_imgs_scaled) validation_features_vgg = get_bottleneck_features(vgg_model, validation_imgs_scaled) print('Train Bottleneck Features:', train_features_vgg.shape, '\tValidation Bottleneck Features:', validation_features_vgg.shape) from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, 
InputLayer from tensorflow.keras.models import Sequential from tensorflow.keras import optimizers input_shape = vgg_model.output_shape[1] model = Sequential() model.add(InputLayer(input_shape=(input_shape,))) model.add(Dense(512, activation='relu', input_dim=input_shape)) model.add(Dropout(0.3)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.3)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['accuracy']) model.summary() history = model.fit(x=train_features_vgg, y=train_labels_enc, validation_data=(validation_features_vgg, validation_labels_enc), batch_size=batch_size, epochs=epochs, verbose=1) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4)) t = f.suptitle('Pre-trained CNN (Transfer Learning) Performance', fontsize=12) f.subplots_adjust(top=0.85, wspace=0.3) epoch_list = list(range(1,151)) ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy') ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy') ax1.set_xticks(np.arange(0, 151, 5)) ax1.set_ylabel('Accuracy Value') ax1.set_xlabel('Epoch') ax1.set_title('Accuracy') l1 = ax1.legend(loc="best") ax2.plot(epoch_list, history.history['loss'], label='Train Loss') ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss') ax2.set_xticks(np.arange(0, 151, 5)) ax2.set_ylabel('Loss Value') ax2.set_xlabel('Epoch') ax2.set_title('Loss') l2 = ax2.legend(loc="best") model.save('4-pretrained_cnn.h5') ###Output _____no_output_____ ###Markdown ###Code from google.colab import drive drive.mount('/content/drive') %matplotlib inline !ls -l !cp ./drive/MyDrive/training_data.zip . !unzip training_data.zip import glob import numpy as np import matplotlib.pyplot as plt from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img IMG_DIM = (150, 150) train_files = glob.glob('training_data/*') train_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in train_files] train_imgs = np.array(train_imgs) train_labels = [fn.split('/')[1].split('.')[0].strip() for fn in train_files] validation_files = glob.glob('validation_data/*') validation_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in validation_files] validation_imgs = np.array(validation_imgs) validation_labels = [fn.split('/')[1].split('.')[0].strip() for fn in validation_files] print('Train dataset shape:', train_imgs.shape, '\tValidation dataset shape:', validation_imgs.shape) train_imgs_scaled = train_imgs.astype('float32') validation_imgs_scaled = validation_imgs.astype('float32') train_imgs_scaled /= 255 validation_imgs_scaled /= 255 batch_size = 50 num_classes = 2 epochs = 150 input_shape = (150, 150, 3) from sklearn.preprocessing import LabelEncoder le = LabelEncoder() le.fit(train_labels) # encode wine type labels train_labels_enc = le.transform(train_labels) validation_labels_enc = le.transform(validation_labels) print(train_labels[0:5], train_labels_enc[0:5]) from tensorflow.keras.applications import vgg16 from tensorflow.keras.models import Model import tensorflow.keras vgg = vgg16.VGG16(include_top=False, weights='imagenet', input_shape=input_shape) output = vgg.layers[-1].output output = tensorflow.keras.layers.Flatten()(output) vgg_model = Model(vgg.input, output) vgg_model.trainable = False for layer in vgg_model.layers: layer.trainable = False vgg_model.summary() import pandas as pd pd.set_option('max_colwidth', -1) layers = [(layer, layer.name, 
layer.trainable) for layer in vgg_model.layers] pd.DataFrame(layers, columns=['Layer Type', 'Layer Name', 'Layer Trainable']) print("Trainable layers:", vgg_model.trainable_weights) bottleneck_feature_example = vgg.predict(train_imgs_scaled[0:1]) print(bottleneck_feature_example.shape) plt.imshow(bottleneck_feature_example[0][:,:,0]) def get_bottleneck_features(model, input_imgs): features = model.predict(input_imgs, verbose=0) return features train_features_vgg = get_bottleneck_features(vgg_model, train_imgs_scaled) validation_features_vgg = get_bottleneck_features(vgg_model, validation_imgs_scaled) print('Train Bottleneck Features:', train_features_vgg.shape, '\tValidation Bottleneck Features:', validation_features_vgg.shape) from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, InputLayer from tensorflow.keras.models import Sequential from tensorflow.keras import optimizers input_shape = vgg_model.output_shape[1] model = Sequential() model.add(InputLayer(input_shape=(input_shape,))) model.add(Dense(512, activation='relu', input_dim=input_shape)) model.add(Dropout(0.3)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.3)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['accuracy']) model.summary() history = model.fit(x=train_features_vgg, y=train_labels_enc, validation_data=(validation_features_vgg, validation_labels_enc), batch_size=batch_size, epochs=epochs, verbose=1) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4)) t = f.suptitle('Pre-trained CNN (Transfer Learning) Performance', fontsize=12) f.subplots_adjust(top=0.85, wspace=0.3) epoch_list = list(range(1,151)) ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy') ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy') ax1.set_xticks(np.arange(0, 151, 5)) ax1.set_ylabel('Accuracy Value') ax1.set_xlabel('Epoch') ax1.set_title('Accuracy') l1 = ax1.legend(loc="best") ax2.plot(epoch_list, history.history['loss'], label='Train Loss') ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss') ax2.set_xticks(np.arange(0, 151, 5)) ax2.set_ylabel('Loss Value') ax2.set_xlabel('Epoch') ax2.set_title('Loss') l2 = ax2.legend(loc="best") model.save('4-pretrained_cnn.h5') ###Output _____no_output_____
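###Markdown As a possible next step, a minimal inference sketch (assuming the cells above have been run, so `vgg_model` and `le` exist and `4-pretrained_cnn.h5` has been saved; `some_image.jpg` is a hypothetical file name, not part of the dataset): a new image goes through the same scaling and VGG-16 bottleneck extraction before the saved classifier head is applied. ###Code
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img, img_to_array

clf = load_model('4-pretrained_cnn.h5')  # the dense head trained above

# preprocess one new image exactly like the training images
img = img_to_array(load_img('some_image.jpg', target_size=(150, 150)))
img = np.expand_dims(img, axis=0).astype('float32') / 255

features = vgg_model.predict(img, verbose=0)    # frozen VGG-16 bottleneck features
prob = clf.predict(features, verbose=0)[0][0]   # sigmoid probability of class 1
print(le.inverse_transform([int(prob > 0.5)])[0], prob)
###Output _____no_output_____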
2021-07-22-precision-recall.ipynb
###Markdown Precision vs monotonic precision ###Code from matplotlib.patches import Rectangle # plt.rcParams['figure.figsize'] = (14, 14) # fig = plt.figure() fig, axes = plt.subplots(1, 3, figsize=(9, 3)) axes = iter(axes.ravel()) ax2 = next(axes) ax2.plot(PPV, label="PPV(k)"); ax2.plot(PPVi, label="PPV'(k)"); ax2.set_title('A') ax2.set_ylabel("precision") ax2.set_xlabel("k (confidence rank)") # ax2.legend(); ax2.add_patch(Rectangle((0, PPV[:1500].min()), 1500, 1-PPV[:1500].min(), fill=False)) ax2 = next(axes) ax2.plot(PPV[:1500], label="PPV(k)"); ax2.plot(PPVi[:1500], label="PPV'(k)"); ax2.set_title('B') ax2.set_ylabel("precision") ax2.set_xlabel("k (confidence rank)") # ax2.legend(); ax3 = next(axes) ax3.axis('off') ax3.legend(*ax2.get_legend_handles_labels(), loc='lower right', fontsize='x-large', borderpad=2) fig.tight_layout() fig.savefig("PPV_vs_PPVi.pdf") ###Output _____no_output_____ ###Markdown PR-curve and AveP / AP ###Code fig, axes = plt.subplots(1, 4, figsize=(12, 3)) # fig.suptitle('Precision-recall curves') axes = iter(axes.ravel()) ax2 = next(axes) ax2.plot(TPR, label="TPR(k)"); ax2.plot(PPVi, label="PPV'(k)"); ax2.set_title('A') ax2.set_ylabel("precision / recall") ax2.set_xlabel("k (rank)") ax2.legend(); ax2 = next(axes) ax2.plot(TPR, PPVi, label="PPV'(TPR)"); ax2.set_title('B') ax2.set_ylabel("precision") ax2.set_xlabel("recall") ax2.legend() ax2 = next(axes) ax2.plot(TPR, PPVi, label="PPV'(TPR)"); ax2.set_title('C') ax2.set_ylabel("precision") ax2.set_xlabel("recall") ax2.set_xlim([0, 1]) ax2.set_ylim([0, 1]) ax2.fill_between(TPR, PPVi, hatch='//', alpha=0.3, label="AveP = AuC(PPV')") ax2.legend() ax2 = next(axes) ax2.scatter(recThrs, q, label="PPV"); ax2.axhline(np.mean(q), label="AveP = mean(PPV'_i))") ax2.set_title('D') ax2.set_ylabel("precision") ax2.set_xlabel("recall") ax2.set_xlim([-0.01, 1.01]) ax2.set_ylim([-0.01, 1.01]) ax2.legend(loc='lower left') fig.tight_layout() fig.savefig("PR_AP_examples.pdf") plt.figure(figsize=(6, 6)) plt.step(recThrs, q); ###Output _____no_output_____
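###Markdown For reference, a minimal sketch of how quantities like those plotted above can be computed from scratch (an assumption about the earlier, unshown cells: `PPV`, `PPVi` and `TPR` are the running precision, its monotone "interpolated" envelope, and recall, indexed by confidence rank `k`; the data below is synthetic, not the data behind the figures): ###Code
import numpy as np

# binary ground-truth labels sorted by decreasing detector confidence
rng = np.random.default_rng(0)
y_sorted = (rng.random(2000) < np.linspace(0.9, 0.1, 2000)).astype(int)

k = np.arange(1, len(y_sorted) + 1)
tp = np.cumsum(y_sorted)

PPV_demo = tp / k                 # precision after the top-k predictions
TPR_demo = tp / y_sorted.sum()    # recall after the top-k predictions
# monotone precision: running maximum taken from the right
PPVi_demo = np.maximum.accumulate(PPV_demo[::-1])[::-1]

AveP_demo = np.trapz(PPVi_demo, TPR_demo)   # area under the interpolated PR curve
print(AveP_demo)
###Output _____no_output_____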
Test_frag_reattach_after_merge_files.ipynb
###Markdown Simple example ###Code moles_dict = {} spe_label = ['ArCC(C)R', 'RCC', 'RCCCCR', 'RCC__C', 'ArC__CR', 'RCC__CCR'] values = [6, 2, 3, 6, 4, 3] i=0 for spe_name in spe_label: moles_dict[spe_name] = values[i] i += 1 moles_dict r_moles, _, rr_list = afm.simulator.categorize_fragments_1_label(moles_dict) r_moles rr_list # test reattchment grind_size = 1 shuffle_seed = 0 grinded_r_moles = afm.utils.grind(r_moles, grind_size) grinded_r_moles r_moles_shuffle = afm.utils.shuffle(grinded_r_moles, shuffle_seed) r_moles_shuffle half_length = len(r_moles_shuffle)/2 half_r_moles_shuffle_1 = r_moles_shuffle[0:half_length] half_r_moles_shuffle_2 = r_moles_shuffle[half_length:len(r_moles_shuffle)] len(half_r_moles_shuffle_1) == len(half_r_moles_shuffle_2) matches0 = afm.utils.match_concentrations_with_same_sums(half_r_moles_shuffle_1, half_r_moles_shuffle_2, diff_tol=1e-3) matches0 matches1, new_r_l_moles = afm.utils.matches_resolve_1_label(matches0, rr_list) matches1 r_r_moles=[] r_r_moles.extend(new_r_l_moles) matches = afm.utils.match_concentrations_with_different_sums(matches1, r_r_moles) matches moles_dict_after_match = {} for match in matches: combo, val = match frags = afm.utils.flatten(combo) for frag in frags: if frag not in moles_dict_after_match: moles_dict_after_match[frag] = val else: moles_dict_after_match[frag] += val for frag, val in moles_dict.iteritems(): val_after_match = moles_dict_after_match[frag] diff_pct = abs(val - val_after_match)/moles_dict[frag] val moles_dict[frag] val_after_match ###Output _____no_output_____
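###Markdown A small sanity-check sketch (using only the dictionaries built above): if the grind/shuffle/match bookkeeping is consistent, the total moles per fragment label should be conserved, so the largest relative difference should be (near) zero. ###Code
# maximum relative difference between totals before and after reattachment
max_diff = max(abs(val - moles_dict_after_match.get(frag, 0.0)) / float(val)
               for frag, val in moles_dict.items() if val)
max_diff
###Output _____no_output_____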
FraudAnalysis_Inpatient.ipynb
###Markdown Fraud Analysis Data Introduction ###Code import pandas as pd import numpy as np import tkinter as tk import matplotlib.pyplot as plt import matplotlib as mpl from tkinter import filedialog from pandas import DataFrame import seaborn as sns # Upload Data train_benFile= "C:/Users/Nicole/Desktop/fraud_detection/Train_Beneficiarydata-1542865627584.csv" train_ben= pd.read_csv(train_benFile) train_ben train_inpatientFile= "C:/Users/Nicole/Desktop/fraud_detection/Train_Inpatientdata-1542865627584.csv" train_inpatient= pd.read_csv(train_inpatientFile) train_inpatient train_outpatientFile= "C:/Users/Nicole/Desktop/fraud_detection/Train_Outpatientdata-1542865627584.csv" train_outpatient= pd.read_csv(train_outpatientFile) train_outpatient ###Output _____no_output_____ ###Markdown Test Sets ###Code test_benFile= "C:/Users/Nicole/Desktop/fraud_detection/Test_Beneficiarydata-1542969243754.csv" test_ben= pd.read_csv(test_benFile) test_ben test_inpatientFile= "C:/Users/Nicole/Desktop/fraud_detection/Test_Inpatientdata-1542969243754.csv" test_inpatient= pd.read_csv(test_inpatientFile) test_inpatient test_outpatientFile= "C:/Users/Nicole/Desktop/fraud_detection/Test_Outpatientdata-1542969243754.csv" test_outpatient= pd.read_csv(test_outpatientFile) test_outpatient ###Output _____no_output_____ ###Markdown Merged Datasets for Train and Test Since we will most likely clean both the train and test sets, it makes sense to remerge the inpatient and outpatient test and train set ###Code frames_outpatient= [train_outpatient, test_outpatient] merged_outpatient= pd.concat(frames_outpatient) merged_outpatient frames_inpatient= [train_inpatient, test_inpatient] merged_inpatient= pd.concat(frames_inpatient) merged_inpatient frames_ben= [train_ben,test_ben] merged_ben= pd.concat(frames_ben) merged_ben ###Output _____no_output_____ ###Markdown Number of Unique Beneficiaries, Providers and Claims in each Dataset Number of Unique Beneficiaries in each Set ###Code merged_ben['BeneID'].nunique() merged_inpatient['BeneID'].nunique() merged_outpatient['BeneID'].nunique() ###Output _____no_output_____ ###Markdown Number of Unique Claim Ids in each Set ###Code merged_outpatient['ClaimID'].nunique() merged_inpatient['ClaimID'].nunique() ###Output _____no_output_____ ###Markdown Number of Unique Providers in each Set ###Code merged_outpatient['Provider'].nunique() merged_inpatient['Provider'].nunique() ###Output _____no_output_____ ###Markdown Should we merge the inpatient and outpatient information given that there are some patients and/or providers that are both in the inpatient and outpatient datasets? Based on background knowledge, inpatient and outpatient services are different and so the providers would be different hence there should be no overlap. However, we will confirm nonetheless. We do this by checking if there are patients in the inpatient file that are also in the outpatient file. ###Code ben_list_inpatient= merged_inpatient['BeneID'].unique() ben_in= pd.DataFrame(ben_list_inpatient) ben_in # create a list with all unique beneficiary IDs in the outpatient dataset. 
ben_list_outpatient= merged_outpatient['BeneID'].unique() # convert list to a df ben_out= pd.DataFrame(ben_list_outpatient) ben_out # create another list which contains entries that are both in outpatient and inpatient file overlap=ben_out.isin(ben_in) # to access the .unique() function, we convert overlap to a series overlap= overlap.squeeze() type(overlap) # Since there are no "True"values, then we do not have any overlap overlap.unique() # we do the same for providers prov_list_inpatient= merged_inpatient['Provider'].unique() prov_in= pd.DataFrame(prov_list_inpatient) prov_in prov_list_outpatient= merged_outpatient['Provider'].unique() prov_out= pd.DataFrame(prov_list_outpatient) prov_out overlap_prov=prov_out.isin(prov_in) overlap_prov= overlap_prov.squeeze() type(overlap_prov) overlap_prov.unique() ###Output _____no_output_____ ###Markdown There is no overlap and so there is no value in merging the inpatient and outpatient files. This simplifies our workflow. Now we need to merge the beneficiary information to both the inpatient and outpatient datasets. However, it makes sense to 'clean' the beneficiary dataset before merging it with the inpatient and outpatient datasets. ###Code tmp_ben1= merged_ben #tmp_ben1['RenalDiseaseIndicator'].unique() tmp_ben1['RenalDiseaseIndicator']= tmp_ben1['RenalDiseaseIndicator'].astype(str) def renal_cleaner (df): len_renal= len(df) df2= [] for i in range(len_renal): val= df.iloc[i]['RenalDiseaseIndicator'] if val == 'Y': val= 1 df2.append(val) else: df2.append(val) return df2 j=renal_cleaner(tmp_ben1) j tmp_ben1['RenalDiseaseIndicator']= j ## Only col with severl null values is the DOD col tmp_ben1.isnull().sum() ###Output _____no_output_____ ###Markdown Merge Changes We do the same check and for the inpatient dataset, we get a df with 76827 rows × 54 columns which is correct. ###Code tmp_in= merged_inpatient tmp_in['ClaimID'].nunique() tmp_ben1['BeneID'].nunique() inner_join_in=pd.merge(tmp_in,tmp_ben1, on='BeneID', how='inner') inner_join_in['ClaimID'].nunique() inner_join_in['BeneID'].nunique() inner_join_in['ClaimID'].nunique() meh=inner_join_in.drop_duplicates(subset=['ClaimID'], keep= False) type(meh) meh['ClaimID'].nunique() ###Output _____no_output_____ ###Markdown Fix Zone ###Code ben_Inp= inner_join_in['BeneID'].unique().tolist() ben_Inp= pd.DataFrame(ben_Inp, columns = ['BeneID']) ben_Inp.dtypes ben_Master= merged_ben['BeneID'].unique().tolist() ben_Master= pd.DataFrame(ben_Master, columns = ['BeneID']).astype('str') met=pd.concat([ben_Inp,ben_Master], axis=1) def misisngClaims (df1, df2): len1= len(df1) #len2= len(df2) df3=[] for i in range(len1): if str(df2.BenMaster).isin(df1.BenInp) == True: df3.append('Here') else: df3.append(df1.BenInp[i]) return df3 ret= pd.merge(ben_Inp,ben_Master, on='BeneID', how='inner') ret['BeneID'].nunique ###Output _____no_output_____ ###Markdown Unhelpful NaNs For Inpatient dataset, it is worth keeping the column ClmDiagnosisCode_n up to n=10. 
As for .ClmProcedureCode_n, we can definately drop 'ClmProcedureCode_6' and keep the rest ###Code inner_join_in.ClmDiagnosisCode_10.unique() inner_join_in.ClmProcedureCode_6.unique() inner_join_in.ClmProcedureCode_5.unique() inner_join_in.ClmProcedureCode_4.unique() drop_inp= ['ClmProcedureCode_6'] inpatient= inner_join_in inpatient['DOD'].unique() ###Output _____no_output_____ ###Markdown EDA Insights ###Code import matplotlib.pyplot as plt import seaborn as sns ###Output _____no_output_____ ###Markdown Inpatient ###Code ## Claim per provider claim_per_provider = inpatient.groupby(by=['Provider'])['ClaimID'].agg(['count']).rank(ascending=False) ## still get M, M, M claim_per_provider=claim_per_provider.reset_index() claim_per_provider sns.histplot(data=claim_per_provider,y='count',bins=25) claim_per_provider.mean() claim_per_provider.min() claim_per_provider.max() bene_per_claim = inpatient.groupby(by=['BeneID'])['ClaimID'].agg(['count']).rank(ascending=False) bene_per_claim= bene_per_claim.reset_index() bene_per_claim sns.histplot(data=bene_per_claim,x='count',bins=30) bene_per_claim.mean() bene_per_claim.max() bene_per_claim.min() inpatient['InscClaimAmtReimbursed'].mean() inpatient['InscClaimAmtReimbursed'].max() inpatient['InscClaimAmtReimbursed'].min() dates= inpatient['ClaimStartDt'] date1=sorted(dates) date1[76826] ###Output _____no_output_____ ###Markdown Value Replacements ###Code from datetime import date import numpy as np ###Output _____no_output_____ ###Markdown Inpatient Dataset Calculating Number of Days patient was admitted then drop 'AdmissionDt' and 'DischargeDt'. We will do the same to the dates associated with claim dates Inpatient: Delta to Derive Number of Admit Days ###Code tmp_date= inpatient tmp_date['AdmissionDt']= pd.to_datetime(tmp_date['AdmissionDt']) tmp_date['DischargeDt']= pd.to_datetime(tmp_date['DischargeDt']) tmp_date['AdmitDays']=tmp_date['DischargeDt'] - tmp_date['AdmissionDt'] tmp_date['AdmitDays']= tmp_date['AdmitDays'].apply(lambda x: x.days) tmp_date.AdmitDays=tmp_date.AdmitDays.astype('int64') type(tmp_date.iloc[0]['AdmitDays']) ###Output _____no_output_____ ###Markdown Inpatient: Delta to Derive Number of Claim Days ###Code tmp_date['ClaimStartDt']= pd.to_datetime(tmp_date['ClaimStartDt']) tmp_date['ClaimEndDt']= pd.to_datetime(tmp_date['ClaimEndDt']) tmp_date['ClaimDays']=tmp_date['ClaimEndDt'] - tmp_date['ClaimStartDt'] #tmp_date.ClaimDays=tmp_date.ClaimDays.astype('int64') tmp_date['ClaimDays']= tmp_date['ClaimDays'].apply(lambda x: x.days) tmp_date.ClaimDays=tmp_date.ClaimDays.astype('int64') drop_inp1= ['ClaimEndDt', 'ClaimStartDt','AdmissionDt','DischargeDt' ] ###Output _____no_output_____ ###Markdown Recoding Providers ###Code tmp_date.dtypes tmp_date.Provider=tmp_date.Provider.astype('str') tmp_date.iloc[1]['Provider'] def remove_prefix(text, prefix): if text.startswith(prefix): return text[len(prefix):] return text def provider_cleaner(df, col_name, prefix): df2= [] b=len(df) for i in range (b): text= df.iloc[i][col_name] df2.append(remove_prefix(text,prefix)) return df2 tmp_=provider_cleaner(tmp_date,'Provider',"PRV") tmp_date['Provider_C']=tmp_ tmp_date['Provider_C']= tmp_date['Provider_C'].astype('int64') tmp_date.dtypes tmp_date=tmp_date.fillna(0) tmp_date['DOD'].dtypes ###Output _____no_output_____ ###Markdown Repeat for 'AttendingPhysician', 'OperatingPhysician', 'OtherPhysician' ###Code tmp_date.AttendingPhysician=tmp_date.AttendingPhysician.astype('str') tmp_date.OperatingPhysician=tmp_date.OperatingPhysician.astype('str') 
tmp_date.OtherPhysician=tmp_date.OtherPhysician.astype('str') tmp_Attending=provider_cleaner(tmp_date,'AttendingPhysician',"PHY") tmp_Operating=provider_cleaner(tmp_date,'OperatingPhysician',"PHY") tmp_Other=provider_cleaner(tmp_date,'OtherPhysician',"PHY") tmp_date['Attending_P']=tmp_Attending tmp_date['Attending_P']= tmp_date['Attending_P'].astype('float64') tmp_date['Attending_P']= tmp_date['Attending_P'].fillna(0) tmp_date['Operating_P']=tmp_Operating tmp_date['Operating_P']= tmp_date['Operating_P'].astype('float64') tmp_date['Operating_P']= tmp_date['Operating_P'].fillna(0) tmp_date['Other_P']=tmp_Other tmp_date['Other_P']= tmp_date['Other_P'].astype('float64') tmp_date['Other_P']= tmp_date['Other_P'].fillna(0) # New cols added tmp_date.fillna(0) tmp_date.isnull().sum() ###Output _____no_output_____ ###Markdown Dealing with Claims ###Code base= tmp_date # Inpatient tmp_date.ClmAdmitDiagnosisCode=tmp_date.ClmAdmitDiagnosisCode.astype('str') tmp_date.ClmDiagnosisCode_1=tmp_date.ClmDiagnosisCode_1.astype('str') tmp_date.ClmDiagnosisCode_2=tmp_date.ClmDiagnosisCode_2.astype('str') tmp_date.ClmDiagnosisCode_3=tmp_date.ClmDiagnosisCode_3.astype('str') tmp_date.ClmDiagnosisCode_4=tmp_date.ClmDiagnosisCode_4.astype('str') tmp_date.ClmDiagnosisCode_5=tmp_date.ClmDiagnosisCode_5.astype('str') tmp_date.ClmDiagnosisCode_6=tmp_date.ClmDiagnosisCode_6.astype('str') tmp_date.ClmDiagnosisCode_7=tmp_date.ClmDiagnosisCode_7.astype('str') tmp_date.ClmDiagnosisCode_8=tmp_date.ClmDiagnosisCode_8.astype('str') tmp_date.ClmDiagnosisCode_9=tmp_date.ClmDiagnosisCode_9.astype('str') tmp_date.ClmDiagnosisCode_10=tmp_date.ClmDiagnosisCode_10.astype('str') list_= ['ClmAdmitDiagnosisCode', 'ClmDiagnosisCode_1' , 'ClmDiagnosisCode_2' , 'ClmDiagnosisCode_3' , 'ClmDiagnosisCode_4', 'ClmDiagnosisCode_5', 'ClmDiagnosisCode_6', 'ClmDiagnosisCode_7', 'ClmDiagnosisCode_8', 'ClmDiagnosisCode_9', 'ClmDiagnosisCode_10' ] def claim_cleaner(df,col_name): b=len(df) df2=[] for n in range (b): text= df.iloc[n][col_name] if text[:1].isdigit(): df2.append(text) elif text == 'nan': df2.append(text) else: text = text.replace(text[0], '99') df2.append(text) return df2 Admit_Code= claim_cleaner(tmp_date,'ClmAdmitDiagnosisCode') x_1= claim_cleaner(tmp_date,'ClmDiagnosisCode_1') #x_1 x_2= claim_cleaner(tmp_date,'ClmDiagnosisCode_2') #x_2 x_3= claim_cleaner(tmp_date,'ClmDiagnosisCode_3') #x_3 x_4= claim_cleaner(tmp_date,'ClmDiagnosisCode_4') #x_4 x_5= claim_cleaner(tmp_date,'ClmDiagnosisCode_5') #x_5 x_6= claim_cleaner(tmp_date,'ClmDiagnosisCode_6') #x_6 x_7= claim_cleaner(tmp_date,'ClmDiagnosisCode_7') #x_7 x_8= claim_cleaner(tmp_date,'ClmDiagnosisCode_8') #x_8 x_9= claim_cleaner(tmp_date,'ClmDiagnosisCode_9') #x_9 x_10= claim_cleaner(tmp_date,'ClmDiagnosisCode_10') #x_10 ## conca clean_claims= pd.DataFrame( { 'AdmitClmCode':Admit_Code, 'ClmDCode_1': x_1, 'ClmDCode_2': x_2, 'ClmDCode_3': x_3, 'ClmDCode_4': x_4, 'ClmDCode_5': x_5, 'ClmDCode_6': x_6, 'ClmDCode_7': x_7, 'ClmDCode_8': x_8, 'ClmDCode_9': x_9, 'ClmDCode_10': x_10, }) clean_claims.columns clean_claims[['AdmitClmCode', 'ClmDCode_1', 'ClmDCode_2', 'ClmDCode_3', 'ClmDCode_4', 'ClmDCode_5', 'ClmDCode_6', 'ClmDCode_7', 'ClmDCode_8', 'ClmDCode_9', 'ClmDCode_10']]=clean_claims[['AdmitClmCode', 'ClmDCode_1', 'ClmDCode_2', 'ClmDCode_3', 'ClmDCode_4', 'ClmDCode_5', 'ClmDCode_6', 'ClmDCode_7', 'ClmDCode_8', 'ClmDCode_9', 'ClmDCode_10']].apply(pd.to_numeric, errors='coerce') clean_claims.dtypes #clean_claims inp_cln= pd.concat([tmp_date, clean_claims], axis=1) inp_cln.shape base2= inp_cln 
###Output _____no_output_____ ###Markdown Derive Age ###Code inp_cln.DOD=inp_cln.DOD.astype('str') inp_cln['DOD'].unique() def dod_nan (df): df2=[] len_df=len(df) for i in range (len_df): val= df.iloc[i]['DOD'] if val == '0': val= df.iloc[i]['AdmissionDt'] df2.append(val) else: df2.append(val) return df2 #Should have dates even with entries with original 0 values dod_clean= dod_nan(inp_cln) dod_clean # Dont have to worry about the modifying orig DOD Col since base2=intp_cln will give us the orig state of DOD col inp_cln['DOD']=dod_clean inp_cln['DOB']= pd.to_datetime(inp_cln['DOB']) inp_cln['DOD']= pd.to_datetime(inp_cln['DOD']) inp_cln['Age']= inp_cln['DOD'] - inp_cln['DOB'] inp_cln['Age']= inp_cln['Age'].apply(lambda x: x.days) #.astype('int64') inp_cln['Age']= inp_cln['Age'].astype('int64') inp_cln['Age']= inp_cln['Age'].apply(lambda x: x/365) # Resulting value makes sense inp_cln.iloc[0]['Age'] ###Output _____no_output_____ ###Markdown DOD ###Code base2.shape base2['DOD']= base['DOD'].fillna(0) #.astype('int64') _dod= base['DOD'] _dod.unique() h= _dod[0] h x=_dod[20] #x.isna() type(x) def is_dead (val): if val == 0: val= 0 else: val= 1 return val #y= _dod.isnull() y= _dod.apply(lambda x:is_dead(x)) y # should have 0 and 1 y.unique() # Confirm row numbers match inp_cln.shape base2.shape inp_cln['DOD_Code']= y list(inp_cln) merged_cleaned_inp= inp_cln cols_drop= [ 'ClaimStartDt', 'ClaimEndDt', 'Provider', 'AttendingPhysician', 'OperatingPhysician', 'OtherPhysician', 'AdmissionDt', 'ClmAdmitDiagnosisCode', 'ClmDiagnosisCode_1', 'ClmDiagnosisCode_2', 'ClmDiagnosisCode_3', 'ClmDiagnosisCode_4', 'ClmDiagnosisCode_5', 'ClmDiagnosisCode_6', 'ClmDiagnosisCode_7', 'ClmDiagnosisCode_8', 'ClmDiagnosisCode_9', 'ClmDiagnosisCode_10', 'DOB', 'DischargeDt', 'DOD' ] merged_cleaned_inp['RenalDiseaseIndicator']= merged_cleaned_inp['RenalDiseaseIndicator'].astype('int64') def non_numeric(df, col_name): len_df= len(df) df2= [] for i in range(len_df): if df.iloc[i][col_name].isdigit(): df2.append(df.iloc[i][col_name]) else: df2.append(1000) return df2 # should all be numeric tmp_n= non_numeric(merged_cleaned_inp, 'DiagnosisGroupCode') tmp_n merged_cleaned_inp['DiagnosisGroupCode']=tmp_n merged_cleaned_inp['DiagnosisGroupCode']= merged_cleaned_inp['DiagnosisGroupCode'].astype('float64') # dropcols merged_cleaned_inp=merged_cleaned_inp.drop(cols_drop,axis=1 ) merged_cleaned_inp.shape # expectation is that all dtypes is numeric except the 2 labels list(merged_cleaned_inp.dtypes) m=merged_cleaned_inp.drop_duplicates(subset=['ClaimID'], keep= False) m.shape ###Output _____no_output_____ ###Markdown Modeling ###Code from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.model_selection import cross_validate from sklearn.cluster import KMeans from scipy.cluster.hierarchy import linkage,dendrogram,ward from sklearn.cluster import AgglomerativeClustering from sklearn.manifold import TSNE from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression ###Output _____no_output_____ ###Markdown PCA Implementation ###Code x= m.drop(['BeneID','ClaimID',], axis=1) x x.shape #y= merged_cleaned_inp[['BeneID','ClaimID']] y= m['ClaimID'] y #Smaller Datasets x1= x y1= y y1 x1.shape scaler = StandardScaler().fit(x) x_inp_scaled=scaler.transform(x1) x1_scaled=scaler.transform(x1) x_inp_scaled.shape ### PCA when C=30 using full dataset pca=PCA(26).fit(x_inp_scaled) print(pca.explained_variance_ratio_,":","sum:",pca.explained_variance_ratio_.sum()) 
## Explained VARIANCE plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); ###Output _____no_output_____ ###Markdown As seen above, c=30 yields a resonable cummulative explained variance score. However, this function took approximately 10 minutes to run. Hence, it is woth exploring how a smaller dataset compares. ###Code pcax1=PCA(50).fit(x1_scaled) print(pcax1.explained_variance_ratio_,":","sum:",pcax1.explained_variance_ratio_.sum()) ## Explained VARIANCE plt.plot(np.cumsum(pcax1.explained_variance_ratio_)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); ###Output _____no_output_____ ###Markdown As seen seen above, the smaller dataset yielded similar results when c=28 Analyzing trends with Individual Components ###Code for a,b in zip(x.columns,pca.components_[20]): print(a,":",b.round(3)) for a,b in zip(x.columns,pcax1.components_[0]): print(a,":",b.round(3)) for a,b in zip(x.columns,pca.components_[29]): print(a,":",b.round(3)) for a,b in zip(x.columns,pcax1.components_[27]): print(a,":",b.round(3)) ###Output _____no_output_____ ###Markdown Graph the Components Model with full dataset ###Code pc_coords=pca.transform(x_inp_scaled) # the whole dataset when c=30 fig, ax = plt.subplots(figsize=(100,100)) xx=pc_coords[0:,0] yy=pc_coords[0:,1] #zz= pc_coords[0:1000,2] ax.scatter(xx,yy) for i,txt in enumerate(y): ax.annotate(txt, (xx[i], yy[i])) ###Output _____no_output_____ ###Markdown It is aparent that there is a main focal point and the surrounding area with sparse number of claims. The following graphs offer a closer look at this clear seperation ###Code fig, ax = plt.subplots(figsize=(10,10)) xx=pc_coords[0:10000,0] yy=pc_coords[0:10000,1] ax.scatter(xx,yy) for i,txt in enumerate(y[0:10000]): ax.annotate(txt, (xx[i], yy[i])) fig, ax = plt.subplots(figsize=(10,10)) xx=pc_coords[0:1000,0] yy=pc_coords[0:1000,1] ax.scatter(xx,yy) for i,txt in enumerate(y[0:1000]): ax.annotate(txt, (xx[i], yy[i])) ###Output _____no_output_____ ###Markdown Let us see how the model with less data but similar cummulative varaince scor compare with graphs above. ###Code pc30_coords=pca30.transform(x1_scaled) pc30_coords # Smaller dataset when c= 28 fig, ax = plt.subplots(figsize=(100,100)) xx=pc30_coords[:,0] yy=pc30_coords[:,1] ax.scatter(xx,yy) for i,txt in enumerate(y1): ax.annotate(txt, (xx[i], yy[i])) fig, ax = plt.subplots(figsize=(10,10)) xx=pc30_coords[:2000,0] yy=pc30_coords[:2000,1] ax.scatter(xx,yy) for i,txt in enumerate(y1[0:2000]): ax.annotate(txt, (xx[i], yy[i])) plt.scatter(pc_coords[:,0],pc_coords[:,1],c= 'orange',cmap='rainbow',) #### The graphical pattern from the full dataset seems to be present in the small one as well. 
###Output _____no_output_____ ###Markdown plt.plot(np.cumsum(pca.explained_variance_ratio_))plt.xlabel('number of components')plt.ylabel('cumulative explained variance'); K Means ###Code x_s= x y_s= y xs_scaled=scaler.transform(x_s) x_inp_kmeans = KMeans(n_clusters=35,random_state=0).fit(xs_scaled) y_kmeans = x_inp_kmeans.predict(xs_scaled) kmeans20 = KMeans(n_clusters=20,random_state=0).fit(xs_scaled) def km_mse(inputs,k): mse=[] for i in range(1,k): errors=[] kmeans = KMeans(n_clusters=i, n_init=50,random_state=0).fit(inputs) for pt,lab in zip(inputs,kmeans.labels_): errors.append(np.linalg.norm(pt-kmeans.cluster_centers_[lab])**2) mse.append(np.mean(errors)) return mse inp_mse=km_mse(xs_scaled,35) inp_mse mse20=km_mse(xs_scaled,20) mse20 plt.plot(list(range(1,35)),inp_mse) plt.plot(list(range(1,20)),mse20) ###Output _____no_output_____ ###Markdown While we are not expecting a balanced distribution among the clusters per se, it seems that with larger values of k, each cluster tend to be either very high or low which indicates a natural grouping that is occuring that perhaps we can better see if k is small. ###Code ## See how balanced the distribution among clusters for i in range(4): print(i+1,":",(x_inp_kmeans.labels_==i).sum()) for i in range(20): print(i+1,":",(kmeans20.labels_==i).sum()) cx=kmeans20.cluster_centers_[:,0] cy=kmeans20.cluster_centers_[:,1] plt.scatter(xs_scaled[:,0],xs_scaled[:,1],c=kmeans20.labels_,cmap='rainbow') #plt.scatter(cx,cy,marker="^",c='r',s=150,edgecolor='w') label = kmeans20.fit_predict(xs_scaled) #filter rows of original data filtered_label2 = xs_scaled[label == 0] filtered_label8 = xs_scaled[label == 1] filtered_label1 = xs_scaled[label == 2] #Plotting the results plt.scatter(filtered_label2[:,0] , filtered_label2[:,1] , color = 'red') plt.scatter(filtered_label8[:,0] , filtered_label8[:,1] , color = 'black') plt.scatter(filtered_label1[:,0] , filtered_label1[:,1] , color = 'blue') plt.show() fig = plt.figure(figsize = (15,15)) ax = fig.add_subplot(111, projection='3d') ax.scatter(xs_scaled[label == 0,0],xs_scaled[label == 0,1],xs_scaled[label == 0,2], s = 40 , color = 'blue', label = "cluster 0") ax.scatter(xs_scaled[label == 1,0],xs_scaled[label == 1,1],xs_scaled[label == 1,2], s = 40 , color = 'orange', label = "cluster 1") ax.scatter(xs_scaled[label == 2,0],xs_scaled[label == 2,1],xs_scaled[label == 2,2], s = 40 , color = 'green', label = "cluster 2") ax.scatter(xs_scaled[label == 3,0],xs_scaled[label == 3,1],xs_scaled[label == 3,2], s = 40 , color = '#D12B60', label = "cluster 3") #ax.scatter(x[y_clusters == 4,0],x[y_clusters == 4,1],x[y_clusters == 4,2], s = 40 , color = 'purple', label = "cluster 4") #ax.set_xlabel('Age of a customer-->') #ax.set_ylabel('Anual Income-->') #ax.set_zlabel('Spending Score-->') ax.legend() plt.show() fig = plt.figure(figsize = (15,15)) ax = fig.add_subplot(111, projection='3d') ax.scatter(xs_scaled[label == 0,0],xs_scaled[label == 0,1],xs_scaled[label == 0,2], s = 40 , color = 'blue', label = "cluster 0") ax.scatter(xs_scaled[label == 1,0],xs_scaled[label == 1,1],xs_scaled[label == 1,2], s = 40 , color = 'orange', label = "cluster 1") ax.scatter(xs_scaled[label == 2,0],xs_scaled[label == 2,1],xs_scaled[label == 2,2], s = 40 , color = 'green', label = "cluster 2") ax.scatter(xs_scaled[label == 3,0],xs_scaled[label == 3,1],xs_scaled[label == 3,2], s = 40 , color = '#D12B60', label = "cluster 3") ax.scatter(xs_scaled[label == 4,0],xs_scaled[label == 4,1],xs_scaled[label == 4,2], s = 40 , color = 'purple', label = 
"cluster 4") ax.scatter(xs_scaled[label == 5,0],xs_scaled[label == 5,1],xs_scaled[label == 5,2], s = 40 , color = 'pink', label = "cluster 5") ax.scatter(xs_scaled[label == 6,0],xs_scaled[label == 6,1],xs_scaled[label == 6,2], s = 40 , color = 'lightcoral', label = "cluster 6") ax.scatter(xs_scaled[label == 7,0],xs_scaled[label == 7,1],xs_scaled[label == 7,2], s = 40 , color = 'grey', label = "cluster 7") ax.scatter(xs_scaled[label == 8,0],xs_scaled[label == 8,1],xs_scaled[label == 8,2], s = 40 , color = 'olive', label = "cluster 8") ax.scatter(xs_scaled[label == 9,0],xs_scaled[label == 9,1],xs_scaled[label == 9,2], s = 40 , color = 'indigo', label = "cluster 9") ax.scatter(xs_scaled[label == 10,0],xs_scaled[label == 10,1],xs_scaled[label == 10,2], s = 40 , color = 'lime', label = "cluster 10") ax.scatter(xs_scaled[label == 11,0],xs_scaled[label == 11,1],xs_scaled[label == 11,2], s = 40 , color = 'bisque', label = "cluster 11") ax.scatter(xs_scaled[label == 12,0],xs_scaled[label == 12,1],xs_scaled[label == 12,2], s = 40 , color = 'peru', label = "cluster 12") ax.scatter(xs_scaled[label == 13,0],xs_scaled[label == 13,1],xs_scaled[label == 13,2], s = 40 , color = 'orchid', label = "cluster 13") ax.scatter(xs_scaled[label == 14,0],xs_scaled[label == 14,1],xs_scaled[label == 14,2], s = 40 , color = 'gold', label = "cluster 14") ax.scatter(xs_scaled[label == 15,0],xs_scaled[label == 15,1],xs_scaled[label == 15,2], s = 40 , color = 'darkcyan', label = "cluster 15") ax.scatter(xs_scaled[label == 16,0],xs_scaled[label == 16,1],xs_scaled[label == 16,2], s = 40 , color = 'aqua', label = "cluster 16") ax.scatter(xs_scaled[label == 17,0],xs_scaled[label == 17,1],xs_scaled[label == 17,2], s = 40 , color = 'navy', label = "cluster 17") ax.scatter(xs_scaled[label == 18,0],xs_scaled[label == 18,1],xs_scaled[label == 18,2], s = 40 , color = 'royalblue', label = "cluster 18") ax.scatter(xs_scaled[label == 19,0],xs_scaled[label == 19,1],xs_scaled[label == 19,2], s = 40 , color = 'deepskyblue', label = "cluster 19") #ax.scatter(x[y_clusters == 4,0],x[y_clusters == 4,1],x[y_clusters == 4,2], s = 40 , color = 'purple', label = "cluster 4") #ax.set_xlabel('Age of a customer-->') #ax.set_ylabel('Anual Income-->') #ax.set_zlabel('Spending Score-->') ax.legend() plt.show() ###Output _____no_output_____
udemy/python-regular-expressions/my_progress/Project 1 - Robocopy Log File Parsing with Regex.ipynb
###Markdown Open one of the files above ###Code def open_file(file): with open(file, 'r', encoding='utf-8') as f: lines = f.readlines() return lines ###Output _____no_output_____ ###Markdown Pattern to Identify Header ###Code PATTERN_SOURCE_DESTN = re.compile(r'\s+(?P<type>Source|Dest) : (?P<dir>C.*\b)') def identify_header(lines, pattern): header_dict_regex = {'type': [], 'dir': []} for line in lines: source_destn_data_from_regex = re.finditer(pattern, line) for match in source_destn_data_from_regex: header_dict_regex['type'].append(match.group('type')) header_dict_regex['dir'].append(match.group('dir')) return header_dict_regex headers = identify_header(open_file(file1), PATTERN_SOURCE_DESTN) for key, val in headers.items(): print(key, val) ###Output type ['Source', 'Dest'] dir ['C:\\RegularExpressionsWithDotNet\\robocopytest\\source\\தமிழ்\\हिन्दी\\English', 'C:\\RegularExpressionsWithDotNet\\robocopytest\\destn'] ###Markdown Pattern to Capture Error Message ###Code PATTERN_ERROR_MSG = re.compile(r'(?P<ts>\b\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) ERROR '\ r'(?P<error>.*\b)') def capture_error_msg(lines, pattern): error_dict_regex = {'ts': [], 'error': []} for line in lines: error_msg_data_from_regex = re.finditer(pattern, line) for match in error_msg_data_from_regex: error_dict_regex['ts'].append(match.group('ts')) error_dict_regex['error'].append(match.group('error')) return error_dict_regex errors = capture_error_msg(open_file(file2), PATTERN_ERROR_MSG) for key, val in errors.items(): print(key, val) ###Output ts ['2016/05/28 19:24:30'] error ['2 (0x00000002) Accessing Source Directory C:\\RegularExpressionsWithDotNet\\robocopytest\\source2'] ###Markdown Pattern to Capture Metrics Table ###Code PATTERN_METRICS_TBL = re.compile(r'\s+(?P<type>Dirs|Files|Bytes) :\s+'\ r'(?P<total>\d+)\s+'\ r'(?P<copied>\d+)\s+'\ r'(?P<skipped>\d+)\s+'\ r'(?P<mismatch>\d+)\s+'\ r'(?P<failed>\d+)\s+'\ r'(?P<extras>\d+)') def capture_metrics_tbl(lines, pattern): metrics_dict_regex = {'type': [], 'total': [], 'copied': [], 'skipped': [], \ 'mismatch': [], 'failed': [], 'extras': []} for line in lines: metrics_data_from_regex = re.finditer(pattern, line) for match in metrics_data_from_regex: metrics_dict_regex['type'].append(match.group('type')) metrics_dict_regex['total'].append(match.group('total')) metrics_dict_regex['copied'].append(match.group('copied')) metrics_dict_regex['skipped'].append(match.group('skipped')) metrics_dict_regex['mismatch'].append(match.group('mismatch')) metrics_dict_regex['failed'].append(match.group('failed')) metrics_dict_regex['extras'].append(match.group('extras')) return metrics_dict_regex metrics = capture_metrics_tbl(open_file(file1), PATTERN_METRICS_TBL) for key, val in metrics.items(): print(key, val) ###Output type ['Dirs', 'Files', 'Bytes'] total ['7', '29', '133567'] copied ['6', '29', '133567'] skipped ['1', '0', '0'] mismatch ['0', '0', '0'] failed ['0', '0', '0'] extras ['0', '0', '0'] ###Markdown Convert dictionaries to json format ###Code print(json.dumps(headers)) print(json.dumps(errors)) print(json.dumps(metrics)) with open('headers.json','w', encoding='utf-8') as wr: json.dump(headers, wr, ensure_ascii=False, indent=True) with open('errors.json','w', encoding='utf-8') as wr: json.dump(errors, wr, ensure_ascii=False, indent=True) with open('metrics.json','w', encoding='utf-8') as wr: json.dump(metrics, wr, ensure_ascii=False, indent=True) ###Output _____no_output_____
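###Markdown As a quick self-contained check, the metrics pattern can be exercised without touching any log file (the sample line below is made up in the expected robocopy summary format, not taken from the files above): ###Code
# a fabricated robocopy summary line
sample = '    Files :        29        29         0         0         0         0'

demo_metrics = capture_metrics_tbl([sample], PATTERN_METRICS_TBL)
print(demo_metrics['type'], demo_metrics['copied'])
###Output _____no_output_____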
notebooks/Examples_cf.ipynb
###Markdown Load PAT module and packages ###Code %load_ext autoreload %autoreload 2 from pat import data, utils import json, os, glob import pandas as pd, numpy as np %matplotlib inline import matplotlib.pyplot as plt, seaborn as sns sns.set_style('white') from tqdm import tqdm ###Output _____no_output_____ ###Markdown Load data directly as pose2D format ###Code # Extract keypoints.json files into one composite csv Frame x 75 Keypoints from pat.data import pose2d_cols from joblib import Parallel, delayed os.chdir('C:/Users/Catarina/Desktop/MIND2019/pat/notebooks/') fnames = np.sort(glob.glob('output/json/*_keypoints.json')) new_df_fname = 'output/Sherlock_full_par.csv' if not os.path.exists(new_df_fname): result = Parallel(n_jobs=3)(delayed(utils.load_keypoints_2d)(fname, os.path.split(fname)[1].split('_')[1], new_df_fname) for fname in fnames) else: # Load the data back with multiindex and column names. df = pd.read_csv(new_df_fname, header=None, index_col=[0, 1], names=pose2d_cols) df.index.names=['frame','personID'] df = df.sort_values(['frame','personID']) print(df.pat._type) ###Output Pose2D ###Markdown Set some functions ###Code def center_of_mass(pose_df): '''Calculate centre of mass of one person Args: pose_df: pose_2d dataframe Return: Dataframe with center of mass (calculated as average x,y of pose keypoints) for each frame ''' pose_x = [] pose_y = [] x_index = pose_df.columns.str.startswith('x_') y_index = pose_df.columns.str.startswith('y_') for frame_ix in range(len(pose_df)): x_coords = pose_df.iloc[frame_ix,x_index] y_coords = pose_df.iloc[frame_ix,y_index] av_x = x_coords[x_coords != 0].mean() av_y = y_coords[y_coords != 0].mean() pose_x.append(av_x) pose_y.append(av_y) com = np.transpose(np.array([pose_x, pose_y])) return pd.DataFrame(com, index = pose_df.index.get_level_values('frame'), columns = ['mean_x','mean_y']) def pose_diff (p0, p1, keypoints): ''' Args: p0: pose_2d dataframe from person 1 p1: pose_2d dataframe from person 2 keypoints: on what keypoints to compute difference options 'all' or keypoint number (between 0 and 24 or any combination of those) NOTE: 75 keypoints in (x1,y1,c1, x2,y2,c2,...) 
where x is x coord, y is y coord, and c is confidence function assumes Pose Output Format (BODY_25) from OpenPose Return: Data frames with differences in x coordinate and then in y coordinate ''' x_diff = []; y_diff = [] idx = p0.index.get_level_values('frame') if len(keypoints) == 1 and keypoints == ['all']: x_index = p0.columns.str.startswith('x_') y_index = p0.columns.str.startswith('y_') col_x = p0.columns[p0.columns.str.startswith('x_')] col_y = p0.columns[p0.columns.str.startswith('y_')] elif len(keypoints) == 1 and keypoints != ['all']: x_index = p0.columns.str.startswith('x_'+keypoints[0]) y_index = p0.columns.str.startswith('y_'+keypoints[0]) col_x = p0.columns[p0.columns.str.startswith('x_'+keypoints[0])] col_y = p0.columns[p0.columns.str.startswith('y_'+keypoints[0])] for frame_ix in range(len(p0)): x = p0.iloc[frame_ix,x_index]-p1.iloc[frame_ix,x_index] y = p0.iloc[frame_ix,y_index]-p1.iloc[frame_ix,y_index] x_diff.append(x) y_diff.append(y) x = pd.DataFrame(np.array(x_diff), index = idx, columns = col_x) y = pd.DataFrame(np.array(y_diff), index = idx, columns = col_y) return pd.concat([x,y],axis=1, sort=False) #get center of mass for two different people, filtered by confidence p0 = df.loc[df.index.get_level_values('personID') == -1] p1 = df.loc[df.index.get_level_values('personID') == -0] p0_filt = p0.pat.filter_pose_confidence() p1_filt = p1.pat.filter_pose_confidence() p0_com = center_of_mass (p0_filt) p1_com = center_of_mass (p1_filt) #quick and dirty plot just to check if com actually seems to be in the center of the pose (one frame, one person) x_coords = p0_filt.iloc[2,0:72:3].values x_coords = x_coords[x_coords != 0] y_coords = p0_filt.iloc[2,1:73:3].values y_coords = y_coords[y_coords != 0] x_com = p0_com['mean_x'].iloc[2] y_com = p0_com['mean_y'].iloc[2] fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.clear() ax.scatter(x_coords,y_coords, c = 'k') ax.scatter(x_com,y_com, c = 'r') ax.set(xlim=[50,300],ylim=[150,300]) #get the distance between two people for all keypoints in each frame diff_all = pose_diff(p0_filt, p1_filt, ['all']) #get the distance between two people for one of the keypoints in each frame diff_nose = pose_diff(p0_filt, p1_filt, ['Nose']) #note that some keypoints will have information for left AND right. In those cases, you might want to specify what side diff_rWrist = pose_diff(p0_filt, p1_filt, ['RWrist']) #STILL WORKING ON THIS ONE (just figuring out the best way to store the information) #get the distance between two people for a few different keypoints in each frame #diff_nose = pose_diff(p0_filt, p1_filt, ['Nose', 'Neck', 'RWrist']) ###Output _____no_output_____
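###Markdown A possible follow-up sketch (building only on the dataframes produced above): the per-frame x/y differences returned by `pose_diff` can be collapsed into a single Euclidean inter-person distance per keypoint, e.g. for the nose: ###Code
import numpy as np

# diff_nose holds one x-column and one y-column (in that order) for the chosen keypoint
dx, dy = diff_nose.iloc[:, 0], diff_nose.iloc[:, 1]
nose_distance = np.sqrt(dx**2 + dy**2)
nose_distance.plot(title='Inter-person nose distance per frame');
###Output _____no_output_____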
HW1/Climate Model - CMIP5/CMIP5-Dylan_edits_Lance.ipynb
###Markdown My edits to this notebook mainly focus on making the code for the plots more efficient by creating some functions that will hopefully be useful to other notebooks as well. I also shortened opening the dataset by making a list of the urls so that everything is contained in one dataset, which will be more efficient, especially if you deal with big data.A global coupled ocean-atmosphere general circulation model - [CMIP5 (Coupled Model Intercomparison Project Phase 5)](https://esgf-node.llnl.gov/projects/esgf-llnl/)This particular output that I chose starts in 1985 and ends in 1995, with monthly frequency and has atmosphereic variables. There are 42 files associated with this output, one file for each variable. Other versions of CMIP5 have sea ice and ocean variables. The search filter for different products is found at [ESGF@DOE/LLNL](https://esgf-node.llnl.gov/projects/esgf-llnl/)(The Earth System Grid Federation @ the Department of Energy/ Lawrence Livermore National Laboratory).Only certain data products are unrestricted and can be downloaded via Globus, HTTP, and OpenDAP download. The typical format is NetCDF. This notebook reads in the 42 files as a multi-file dataset. *** Direct Link [CMIP5 Data](https://esgf-node.llnl.gov/search/cmip5/) ###Code # Some variables urls = ['http://aims3.llnl.gov/thredds/dodsC/cmip5_css02_data/cmip5/output1/CMCC/CMCC-CM/decadal1985/mon/atmos/Amon/r1i2p1/pr/1/pr_Amon_CMCC-CM_decadal1985_r1i2p1_198511-199512.nc', 'http://aims3.llnl.gov/thredds/dodsC/cmip5_css02_data/cmip5/output1/CMCC/CMCC-CM/decadal1985/mon/atmos/Amon/r1i2p1/ts/1/ts_Amon_CMCC-CM_decadal1985_r1i2p1_198511-199512.nc', 'http://aims3.llnl.gov/thredds/dodsC/cmip5_css02_data/cmip5/output1/CMCC/CMCC-CM/decadal1985/mon/atmos/Amon/r1i2p1/zg/1/zg_Amon_CMCC-CM_decadal1985_r1i2p1_198511-199512.nc', 'http://aims3.llnl.gov/thredds/dodsC/cmip5_css02_data/cmip5/output1/CMCC/CMCC-CM/decadal1985/mon/atmos/Amon/r1i2p1/ps/1/ps_Amon_CMCC-CM_decadal1985_r1i2p1_198511-199512.nc' ] ds = xr.open_mfdataset(urls) ###Output _____no_output_____ ###Markdown I think this code can be cleaned up and generalized a bit. I don't think using meshgrid to extract the coordinates is the most efficient way to do this, you could use xarray style syntax and just use .load() or .compute to pull out the values. Also, this colorbar for this code only matches the default image size and will not scale linearly with the figure size if changed. Here's an example: ###Code lat = ds.lat.load() lon = ds.lon.load() fig = plt.figure() ax = fig.add_axes([0.06, 0.01, 0.93, 0.95], projection=ccrs.Mercator()) mappable = ax.pcolormesh(lon, lat, ds.pr[1,:,:], cmap = cmo.rain, transform = ccrs.PlateCarree()) cbar = fig.colorbar(mappable) cbar.set_label('Precipitation [kg m-2 s-1]') ax.add_feature(cfeature.COASTLINE) ax.add_feature(cfeature.BORDERS, linestyle='-', edgecolor='0.2') ###Output _____no_output_____ ###Markdown If we know we're making the same plot type for different data, write a function to help reduce the code length. Obviously you can make this much more complicated so than you can change the outputs after the fact, or change the inputs to include vmin & vmax, additional labels, etc. ###Code plt.rcParams.update({'font.size': 16}) def make_plots(lon, lat, var, cmap, label): ''' This function creates generalized code for plotting output from the CMIP5 dataset. Note that I don't return anything because the figure is already generated. This format is extremely flexible to include different plot attributes. 
Inputs: ------ lon: loaded grid cell longitude (xarray dataarray) lat: loaded grid cell latitude (xarray dataarray) var: variable of interest cmap: colomap for pcolormesh label: colorbar label Outputs: ------ specific figure ''' fig = plt.figure(figsize = (8,8)) ax = fig.add_axes([0.06, 0.01, 0.93, 0.95], projection=ccrs.Mercator()) mappable = ax.pcolormesh(lon, lat, var, cmap = cmap, transform = ccrs.PlateCarree()) cbar = fig.colorbar(mappable, shrink = 0.67,pad=0.025) #This is an approximate way to solve the problem with colorbar matching figsize #But doing it properly would be mutch more complicated cbar.set_label(label) ax.add_feature(cfeature.COASTLINE) ax.add_feature(cfeature.BORDERS, linestyle='-', edgecolor='0.2') make_plots(lon, lat, ds.pr[1,:,:], cmo.rain, 'Precipitation [kg m-2 s-1]') make_plots(lon, lat, ds.ts[1,:,:], cmo.thermal, 'Surface Temperature [K]') make_plots(lon, lat, ds.ps[1,:,:], cmo.diff, 'Surface Pressure [Pa]') ###Output _____no_output_____
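###Markdown Following the suggestion above about extra inputs, a sketch of how the plotting helper could grow optional colour limits without breaking the existing calls (written as a new function `make_plots_v2` so nothing above is redefined; the vmin/vmax values in the example call are arbitrary): ###Code
def make_plots_v2(lon, lat, var, cmap, label, vmin=None, vmax=None):
    '''Same as make_plots above, but with optional colour limits.'''
    fig = plt.figure(figsize=(8, 8))
    ax = fig.add_axes([0.06, 0.01, 0.93, 0.95], projection=ccrs.Mercator())
    mappable = ax.pcolormesh(lon, lat, var, cmap=cmap, vmin=vmin, vmax=vmax,
                             transform=ccrs.PlateCarree())
    cbar = fig.colorbar(mappable, shrink=0.67, pad=0.025)
    cbar.set_label(label)
    ax.add_feature(cfeature.COASTLINE)
    ax.add_feature(cfeature.BORDERS, linestyle='-', edgecolor='0.2')

# e.g. pin the temperature colour range so different months are directly comparable
make_plots_v2(lon, lat, ds.ts[1, :, :], cmo.thermal, 'Surface Temperature [K]',
              vmin=220, vmax=310)
###Output _____no_output_____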
Chapter 2.ipynb
###Markdown Section Two - Data WranglingYou get some real world data from your team, and it looks nothing like the toy datasets you see in tutorials. We tour you through the process of cleaning data. ###Code import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Types of messy data and how to clean them ###Code s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat']) s s.str.lower() s.str.upper() s.str.len() # Common usecase is to clean up column names df = pd.DataFrame( np.random.randn(5, 2), columns=[' First Name ', 'Last Name'], index=range(5)) # df.columns is an index type(df.columns) df.columns.str.strip().str.lower().str.replace(" ", "_") df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_") df # create a series of lists pd.Series(['a_b_c', 'c_d_e', 'f_g_h']).str.split('_') # access individual elements pd.Series(['a_b_c', 'c_d_e', 'f_g_h']).str.split('_').str.get(1) # turn split lists into columns pd.Series(['a_b_c', 'c_d_e', 'f_g_h']).str.split('_', expand=True) # regex replacements pd.Series(['a_b_c', 'c_a_e', 'f_a_a']).str.replace('^a', 'xxxxxx', case=False) # literal replace pd.Series(['a_b_c', 'c_a_e', 'f_a_a']).str.replace('^a', 'xxxxxx', case=False, regex=False) ###Output _____no_output_____ ###Markdown Parsing timestamps and splitting columns ###Code pd.to_datetime(['1/1/2019', np.datetime64('2019-01-01')]) pd.date_range('2019-01-01', periods=3, freq='H') pd.date_range('2019-01-01', periods=3, freq='H').tz_localize('UTC') pd.date_range('2019-01-01', periods=3, freq='H').tz_localize('UTC').tz_convert('US/Pacific') # resampling ts = pd.Series(range(5), index=pd.date_range('2019-01-01', periods=5, freq='H')) ts ts.resample("2H").mean() # common usecase: create day of week names to feed into ML model pd.Timestamp('2019-01-04').day_name() pd.Timestamp('2019-01-04') + pd.Timedelta('1 day') pd.Timestamp("2019-01-11") + pd.offsets.BDay() ts['1/1/2019 01:00'] import datetime ts[datetime.datetime(2019, 1, 1, 1)] ###Output _____no_output_____ ###Markdown Loading data from Excel, CSVs, and SQL ###Code # import a simple excel file df = pd.read_excel("chapter1.xlsx") df df = pd.read_excel(open("chapter1.xlsx", "rb")) df # import a specific sheet with the name of sheet df = pd.read_excel("chapter1.xlsx", sheet_name="Sheet2") df # import a specific sheet by the sheet ordering df = pd.read_excel("chapter1.xlsx", sheet_name=1) df # import a few sheets with a list df = pd.read_excel("chapter1.xlsx", sheet_name=[0, 1, "Sheet3"]) df # import all sheets df = pd.read_excel("chapter1.xlsx", sheet_name=None) df # creating an SQLAlchemy engine from sqlalchemy import create_engine engine = create_engine('sqlite:///:memory:') # writing data from pandas to SQL df[0].to_sql('data', engine) # reading data from SQL to pandas df2 = pd.read_sql_table('data', engine) df2 # a csv with a header df = pd.read_csv("chapter1.csv", header=0) df # a csv with no headers, self define headers df = pd.read_csv("chapter1.csv", header=None, names=["A", "B", "C", "D", "E", "F"]) df # a csv with all useful columns df = pd.read_csv("chapter1.csv") df ###Output _____no_output_____ ###Markdown **1. Create a variable ** `phrase` ** containing a list of words. 
Review the operations described in the previous chapter, including addition, multiplication, indexing, slicing, and sorting.** ###Code phrase = ['hello', 'natural', 'language', 'processing'] phrase.append('python') print(phrase + phrase) print(phrase * 3) print(phrase[0]) print(phrase[1:]) print(sorted(phrase)) ###Output ['hello', 'natural', 'language', 'processing', 'python', 'hello', 'natural', 'language', 'processing', 'python'] ['hello', 'natural', 'language', 'processing', 'python', 'hello', 'natural', 'language', 'processing', 'python', 'hello', 'natural', 'language', 'processing', 'python'] hello ['natural', 'language', 'processing', 'python'] ['hello', 'language', 'natural', 'processing', 'python'] ###Markdown **2. Use the corpus module to explore ** `austen-persuasion.txt`**. How many word tokens does this book have? How many word types?** ###Code persuasion = gutenberg.words('austen-persuasion.txt') print(len([word for word in persuasion if word.isalpha()])) print(len(set(word.lower() for word in persuasion if word.isalpha()))) ###Output 84121 5739 ###Markdown **3. Use the Brown corpus reader ** `nltk.corpus.brown.words()` ** or the Web text corpus reader ** `nltk.corpus.webtext.words()` ** to access some sample text in two different genres.** ###Code print(brown.words(categories='reviews')) print(brown.words(categories='humor')) print(webtext.words(fileids='firefox.txt')) ###Output ['It', 'is', 'not', 'news', 'that', 'Nathan', ...] ['It', 'was', 'among', 'these', 'that', 'Hinkle', ...] ['Cookie', 'Manager', ':', '"', 'Don', "'", 't', ...] ###Markdown **4. Read in the texts of the ** *State of the Union* ** addresses, using the ** `state_union` ** corpus reader. Count occurrences of ** `men`**,** `women`**, and** `people` ** in each document. What has happened to the usage of these words over time?** ###Code cfd = nltk.ConditionalFreqDist( (target, fileid[:4]) for fileid in state_union.fileids() for w in state_union.words(fileid) for target in ['men', 'women', 'people'] if w.lower().startswith(target)) cfd.plot() ###Output _____no_output_____ ###Markdown **5. Investigate the holonym-meronym relations for some nouns. Remember that there are three kinds of holonym-meronym relation, so you need to use: ** `member_meronyms()`, `part_meronyms()`, `substance_meronyms()`, `member_holonyms()`, `part_holonyms()`**, and ** `substance_holonyms()` **.** ###Code computer = wn.synset('computer.n.01') print(computer.part_meronyms()) print(computer.part_holonyms()) print() people = wn.synset('people.n.01') print(people.member_meronyms()) print(people.member_holonyms()) print() paper = wn.synset('paper.n.01') print(paper.substance_meronyms()) ###Output [Synset('busbar.n.01'), Synset('cathode-ray_tube.n.01'), Synset('central_processing_unit.n.01'), Synset('chip.n.07'), Synset('computer_accessory.n.01'), Synset('computer_circuit.n.01'), Synset('data_converter.n.01'), Synset('disk_cache.n.01'), Synset('diskette.n.01'), Synset('hardware.n.03'), Synset('keyboard.n.01'), Synset('memory.n.04'), Synset('monitor.n.04'), Synset('peripheral.n.01')] [Synset('platform.n.03')] [Synset('person.n.01')] [Synset('world.n.08')] [Synset('cellulose.n.01')] ###Markdown **6. In the discussion of comparative wordlists, we created an object called ** `translate` ** which you could look up using words in both German and Spanish in order to get corresponding words in English. What problem might arise with this approach? 
Can you suggest a way to avoid this problem?** If there're some words both appear in German and Spanish, then the dictionary would have ambiguity.Add 'de'/'es' before the words and use that as keys in the dictionary. **7. According to Strunk and White's ** *Elements of Style* **, the word ** *however* **, used at the start of a sentence, means "in whatever way" or "to whatever extent", and not "nevertheless". They give this example of correct usage: ** *However you advise him, he will probably do as he thinks best.* **(http://www.bartleby.com/141/strunk3.html) Use the concordance tool to study actual usage of this word in the various texts we have been considering. See also the ** *LanguageLog* ** posting "Fossilized prejudices about 'however'" at http://itre.cis.upenn.edu/~myl/languagelog/archives/001913.html** ###Code nltk.Text(persuasion).concordance('However') ###Output Displaying 25 of 89 matches: onceited , silly father . She had , however , one very intimate friend , a sens early custom . But these measures , however good in themselves , were insuffici ellynch Hall was to be let . This , however , was a profound secret , not to be t immediate neighbourhood , which , however , had not suited him ; that acciden e dues of a tenant . It succeeded , however ; and though Sir Walter must ever l h , the former curate of Monkford , however suspicious appearances may be , but good character and appearance ; and however Lady Russell might have asked yet f siness no evil . She was assisted , however , by that perfect indifference and h the others . Something occurred , however , to give her a different duty . Ma , but can never alter plain ones . However , at any rate , as I have a great d l what is due to you as my sister . However , we may as well go and sit with th o means of her going . She wished , however to see the Crofts , and was glad to ithout any approach to coarseness , however , or any want of good humour . Anne ll be in question . She could not , however , reach such a degree of certainty al to Anne ' s nerves . She found , however , that it was one to which she must re gone , she hoped , to be happy , however oddly constructed such happiness mi once more in the same room . Soon , however , she began to reason with herself ! It would be but a new creation , however , and I never think much of your ne rove of Uppercross ." Her husband , however , would not agree with her here ; f re presently ." Captain Wentworth , however , came from his window , apparently d Walter stir . In another moment , however , she found herself in the state of at once . After a short struggle , however , Charles Hayter seemed to quit the rything being to be done together , however undesired and inconvenient . She tr , nobody answered her . Winthrop , however , or its environs -- for young men fore they were beyond her hearing , however , Louisa spoke again . " Mary is go ###Markdown **8. Define a conditional frequency distribution over the Names corpus that allows you to see which initial letters are more frequent for males vs. females (cf. 4.4).** ###Code names = nltk.corpus.names cfd = nltk.ConditionalFreqDist( (fileid, name[0]) for fileid in names.fileids() for name in names.words(fileid)) cfd.plot() ###Output _____no_output_____ ###Markdown From the figure, we can find that 'w' are more frequent for males. **9. Pick a pair of texts and study the differences between them, in terms of vocabulary, vocabulary richness, genre, etc. 
Can you find pairs of words which have quite different meanings across the two texts, such as ** *monstrous* ** in ** *Moby Dick* ** and in ** *Sense and Sensibility* **?** ###Code news_text = brown.words(categories='news') romance_text = brown.words(categories='romance') print("Vocabulary of news: ", len(set(news_text))) print("Vocabulary of romance:", len(set(romance_text))) print("---------------------------") print("Vocabulary richness of news:\t", len(set(news_text)) / len(news_text)) print("Vocabulary richness of romance:\t", len(set(romance_text)) / len(romance_text)) print("----------------------------------------------------") print("'Address' in news:") nltk.Text(news_text).similar('address') print() print("'Address' in romance:") nltk.Text(romance_text).similar('address') ###Output Vocabulary of news: 14394 Vocabulary of romance: 8452 --------------------------- Vocabulary richness of news: 0.14314696580941583 Vocabulary richness of romance: 0.12070492131044529 ---------------------------------------------------- 'Address' in news: administration legislature date state welfare administrators wife daughter back he texas speaker face first battle congress just down bill lawyer 'Address' in romance: mass boredom sounds door smell bay dogs heat dead wife bill front back first place thought head smoothness taste passion ###Markdown **10. Read the BBC News article: ** *UK's Vicky Pollards 'left behind'* ** http://news.bbc.co.uk/1/hi/education/6173441.stm. The article gives the following statistic about teen language: "the top 20 words used, including yeah, no, but and like, account for around a third of all words." How many word types account for a third of all word tokens, for a variety of text sources? What do you conclude about this statistic? Read more about this on ** *LanguageLog* **, at http://itre.cis.upenn.edu/~myl/languagelog/archives/003993.html.** Word types: 20 Word tokens: `len(text)` The proportion is close to zero. The teen language is simplified nowadays. **11. Investigate the table of modal distributions and look for other patterns. Try to explain them in terms of your own impressionistic understanding of the different genres. Can you find other closed classes of words that exhibit significant differences across different genres?** ###Code # the code is a bit different from that in the book # because I convert the words to lower case modals = ['can', 'could', 'may', 'might', 'must', 'will'] cfd = nltk.ConditionalFreqDist( (genre, word.lower()) # to count the Uppercased words as well, or the statistics would be inconsistent for genre in brown.categories() for word in brown.words(categories=genre)) genres = ['news', 'religion', 'hobbies', 'science_fiction', 'romance', 'humor'] # genres = brown.categories() cfd.tabulate(conditions=genres, samples=modals) ###Output can could may might must will news 94 87 93 38 53 389 religion 84 59 79 12 54 72 hobbies 276 59 143 22 84 269 science_fiction 16 49 4 12 8 17 romance 79 195 11 51 46 49 humor 17 33 8 8 9 13 ###Markdown In news, many events would lead to predicable consequences, so the use of 'will' is most frequent. In religion, the use of modals are of little difference. In hobbies, we can often benefit from our hobbies and get something. Thus, the use of 'can', 'may' and 'will' is high. In science_fiction and humor, the use of modals is rare. There contexts generally don't need modals. In romance, the use of 'could' is frequent maybe because it's tactful. Well, these are just my personal idea. **12. 
The CMU Pronouncing Dictionary contains multiple pronunciations for certain words. How many distinct words does it contain? What fraction of words in this dictionary have more than one possible pronunciation?** ###Code # entries = nltk.corpus.cmudict.entries() to count distinct, use dict() rather than entries() prondict = nltk.corpus.cmudict.dict() print('Distinct words:', len(prondict)) # count words in the dictionary that have more than one possible pronunciation # iterate over the dict and find those whose values' length is greater than 1 wordPron = 0 for key in prondict: if len(prondict[key]) > 1: wordPron += 1 print('Fractions of words with more than one possible pronunciation:', wordPron / len(prondict)) ###Output Distinct words: 123455 Fractions of words with more than one possible pronunciation: 0.07485318537118789 ###Markdown **13. What percentage of noun synsets have no hyponyms? You can get all noun synsets using** `wn.all_synsets('n')`. ###Code noun_synsets = len(list(wn.all_synsets('n'))) # the number of noun synsets cnt = 0 # counter for noun synsets with no hyponyms for synset in wn.all_synsets('n'): if (synset.hyponyms() == []): cnt += 1 print(cnt / noun_synsets) ###Output 0.7967119283931072 ###Markdown **14. Define a function ** `supergloss(s)` ** that takes a synset ** `s` ** as its argument and returns a string consisting of the concatenation of the definition of ** `s`**, and the definitions of all the hypernyms and hyponyms of ** `s`**.** ###Code def supergloss(s): defis = '' defis = defis + s.name() + ': ' + s.definition() + '\n' for synset in s.hypernyms(): defis = defis + synset.name() + ': ' + synset.definition() + '\n' for synset in s.hyponyms(): defis = defis + synset.name() + ': ' + synset.definition() + '\n' return defis print(supergloss(computer)) ###Output computer.n.01: a machine for performing calculations automatically machine.n.01: any mechanical or electrical device that transmits or modifies energy to perform or assist in the performance of human tasks analog_computer.n.01: a computer that represents information by variable quantities (e.g., positions or voltages) digital_computer.n.01: a computer that represents information by numerical (binary) digits home_computer.n.01: a computer intended for use in the home node.n.08: (computer science) any computer that is hooked up to a computer network number_cruncher.n.02: a computer capable of performing a large number of mathematical operations per second pari-mutuel_machine.n.01: computer that registers bets and divides the total amount bet among those who won predictor.n.03: a computer for controlling antiaircraft fire that computes the position of an aircraft at the instant of a shell's arrival server.n.03: (computer science) a computer that provides client stations with access to files and printers as shared resources to a computer network turing_machine.n.01: a hypothetical computer with an infinitely long memory tape web_site.n.01: a computer connected to the internet that maintains a series of web pages on the World Wide Web ###Markdown **15. 
Write a program to find all words that occur at least three times in the Brown Corpus.** ###Code wordSet = [] # create an empty list # frequency distribution for words in Brown Corpus fdist = FreqDist(w.lower() for w in brown.words() if w.isalpha()) # iterate over the samples for sample in fdist: if fdist[sample] >=3: wordSet.append(sample) # add to the list the the frequency is greater than or equal to 3 # print(wordSet) ###Output _____no_output_____ ###Markdown **16. Write a program to generate a table of lexical diversity scores (i.e. token/type ratios), as we saw in 1.1. Include the full set of Brown Corpus genres (** `nltk.corpus.brown.categories()` **). Which genre has the lowest diversity (greatest number of tokens per type)? Is this what you would have expected?** ###Code for category in brown.categories(): tokens = len(brown.words(categories=category)) types = len(set(brown.words(categories=category))) diversity = types / tokens # the computation of diversity in the second version of the book seems to be different to that in the first version # I just use the second version's calculation formula print(category, diversity) ###Output adventure 0.1279743878169075 belles_lettres 0.10642071451679992 editorial 0.16054152327770924 fiction 0.1358194136199042 government 0.11667641228232811 hobbies 0.14493897625842492 humor 0.23125144042406084 learned 0.09268890745953554 lore 0.13148804612915801 mystery 0.12212912592488936 news 0.14314696580941583 religion 0.1617553745018909 reviews 0.21192020440251572 romance 0.12070492131044529 science_fiction 0.22342778161713892 ###Markdown The genre `learned`(Mosteller: *Probability with Statistical Applications*) has the lowest diversity. **17. Write a function that finds the 50 most frequently occurring words of a text that are not stopwords.** ###Code def find_50_most_frequent_words(text): fdist = FreqDist(w.lower() for w in text if w.isalpha() and w.lower() not in stopwords.words('english')) return fdist.most_common(50) find_50_most_frequent_words(text1) ###Output _____no_output_____ ###Markdown **18. Write a program to print the 50 most frequent bigrams (pairs of adjacent words) of a text, omitting bigrams that contain stopwords.** ###Code def find_50_most_frequent_bigrams(text): bigram = list(nltk.bigrams(text)) fdist = FreqDist(b for b in bigram if b[0].isalpha() and b[1].isalpha() and b[0] not in stopwords.words('english') and b[1] not in stopwords.words('english')) return fdist.most_common(50) find_50_most_frequent_bigrams(text1) ###Output _____no_output_____ ###Markdown **19. Write a program to create a table of word frequencies by genre, like the one given in 1 for modals. Choose your own words and try to find words whose presence (or absence) is typical of a genre. Discuss your findings.** ###Code cfd = nltk.ConditionalFreqDist( (genre, word) for genre in brown.categories() for word in brown.words(categories=genre)) genres = brown.categories() my_words = ['love', 'like', 'peace', 'hate', 'war', 'fight', 'battle'] cfd.tabulate(conditions=genres, samples=my_words) ###Output love like peace hate war fight battle adventure 9 136 5 8 18 10 3 belles_lettres 68 169 29 4 84 13 22 editorial 13 49 30 0 54 10 3 fiction 16 147 3 9 24 7 6 government 1 21 11 0 7 0 0 hobbies 6 66 3 0 12 1 2 humor 4 34 1 0 1 0 2 learned 13 83 8 2 16 7 4 lore 19 86 11 2 23 15 13 mystery 7 136 0 2 2 4 1 news 3 46 4 1 20 14 15 religion 13 18 19 3 14 3 1 reviews 7 36 2 2 17 5 3 romance 32 185 7 9 11 7 3 science_fiction 3 25 0 0 2 1 0 ###Markdown **20. 
Write a function ** `word_freq()` ** that takes a word and the name of a section of the Brown Corpus as arguments, and computes the frequency of the word in that section of the corpus.** ###Code def word_freq(section): fdist = FreqDist(w.lower() for w in brown.words(categories=section)) return fdist # word_freq('news') ###Output _____no_output_____ ###Markdown **21. Write a program to guess the number of syllables contained in a text, making use of the CMU Pronouncing Dictionary.** ###Code def number_of_syllables(text): prondict = nltk.corpus.cmudict.dict() # syllables = [] # an empty list for other functions number = 0 for w in text: if w.lower() in prondict.keys(): # to avoid KeyError number += len(prondict[w.lower()][0]) # though a word may have multiple prouns, we chose the first return number # len(number_of_syllables(testText)) number_of_syllables(text1) ###Output _____no_output_____ ###Markdown **22. Define a function ** `hedge(text)` ** which processes a text and produces a new version with the word ** 'like' ** between every third word.** ###Code def hedge(text): new_version = list(text) # convert the type from nltk.Text to list # to take advantage of insert() method for i in range(2, len(text) + len(text) // 3, 3): # loop over every third word # remember to add the length new_version.insert(i, 'like') # and this is a simple version that # regards punctuations as words return nltk.Text(new_version) # restore to nltk.Text hedge(text1) ###Output _____no_output_____ ###Markdown **23. Zipf's Law: Let ** *f(w)* ** be the frequency of a word ** *w* ** in free text. Suppose that all the words of a text are ranked according to their frequency, with the most frequent word first. Zipf's law states that the frequency of a word type is inversely proportional to its rank (i.e. ** *f × r = k* **, for some constant ** *k* **). For example, the 50th most common word type should occur three times as frequently as the 150th most common word type.** a. **Write a function to process a large text and plot word frequency against word rank using pylab.plot. Do you confirm Zipf's law? (Hint: it helps to use a logarithmic scale). What is going on at the extreme ends of the plotted line?** b. **Generate random text, e.g., using ** `random.choice("abcdefg ")` **, taking care to include the space character. You will need to ** `import random` ** first. Use the string concatenation operator to accumulate characters into a (very) long string. Then tokenize this string, and generate the Zipf plot as before, and compare the two plots. What do you make of Zipf's Law in the light of this?** ###Code def zipf_law(text): fdist = FreqDist([w.lower() for w in text if w.isalpha()]) fdist = fdist.most_common() # sort the frequency distribution # note that it converts the type from dict to list rank = [] freq = [] n = 1 # the variable records the rank for i in range(len(fdist)): freq.append(fdist[i][1]) # fdist[i][0] is the word # and fdist[i][1] is the corresponding frequency rank.append(n) n += 1 # I use matplotlib.pyplot istead, since it seems that pylab is discouraged nowadays plt.plot(rank, freq, 'bs') plt.xscale('log') # set the x axis to log scale # the above two statements are equivalent to: plt.semilogx(rank, freq, 'bs') plt.title("Zipf's law") plt.xlabel('word rank') plt.ylabel('word frequency') plt.show() zipf_law(brown.words()) ###Output _____no_output_____ ###Markdown The frequency of 1st ranked word is approximately 2 times of the frequency of 2nd ranked word and 7 times of the frequency of 7st ranked word. 
(Well, the frequency of 3rd to 6th words is a bit high) Generally the Zipf's law applies. ###Code randomText = '' for i in range(10000000): randomText = randomText + random.choice("abcdefg ") randomWord = randomText.split() zipf_law(randomWord) ###Output _____no_output_____ ###Markdown Since the text is generated randomly, the Zipf's law does not apply. Zipf's law is a empirical law based on human language. **24. Modify the text generation program in 2.2 further, to do the following tasks:** a. **Store the n most likely words in a list ** `words` ** then randomly choose a word from the list using** `random.choice()`**. (You will need to ** `import random` ** first.)** b. **Select a particular genre, such as a section of the Brown Corpus, or a genesis translation, one of the Gutenberg texts, or one of the Web texts. Train the model on this corpus and get it to generate random text. You may have to experiment with different start words. How intelligible is the text? Discuss the strengths and weaknesses of this method of generating random text.** c. **Now train your system using two distinct genres and experiment with generating text in the hybrid genre. Discuss your observations.** ###Code def generate_random_text_on_n_most_likely_words(text, n): fdist = FreqDist(text) fdist = fdist.most_common(n) for i in range(n): print(random.choice(fdist)[0], end=' ') generate_random_text_on_n_most_likely_words(text1, 200) generate_random_text_on_n_most_likely_words(brown.words(categories='news'), 200) generate_random_text_on_n_most_likely_words(brown.words(categories=['news','romance']), 200) ###Output . me state day And long not She about should might with at before : a say they next should her came two looked through his `` felt but with can They may than our -- well three out without then left most some told -- by here -- didn't there these she told even You back didn't since three Mr. for but such home got right Mrs. think way at here his In against under asked other but put state asked four know he no go know own : looked told can his New will which President might know under was asked came who while under who come school day four against own don't knew their -- and too American when young I me get my don't under at take been This that Mr. came could while work could : three first didn't next it only left here just work think I felt an But first who In state see didn't where young year Mrs. could into off off but say come There what New still did first American ! get how they week little being say people I don't take how and `` all before many many Mrs. ###Markdown **25. Define a function ** `find_language()` ** that takes a string as its argument, and returns a list of languages that have that string as a word. Use the ** `udhr` ** corpus and limit your searches to files in the Latin-1 encoding.** ###Code def find_language(s): langs = [] for lang in udhr.fileids(): if lang.endswith('Latin1') and s in udhr.words(lang): langs.append(lang) return langs find_language('world') ###Output _____no_output_____ ###Markdown **26. What is the branching factor of the noun hypernym hierarchy? I.e. for every noun synset that has hyponyms — or children in the hypernym hierarchy — how many do they have on average? 
You can get all noun synsets using ** `wn.all_synsets('n')` **.** ###Code # noun_synsets = len(list(wn.all_synsets('n'))) # the number of noun synsets cnt = 0 hypos = 0 for synset in wn.all_synsets('n'): if synset.hyponyms() != []: hypos += len(synset.hyponyms()) cnt += 1 print(hypos / cnt) ###Output 4.543820763194153 ###Markdown **27. The polysemy of a word is the number of senses it has. Using WordNet, we can determine that the noun ** *dog* ** has 7 senses with: ** `len(wn.synsets('dog', 'n'))` **. Compute the average polysemy of nouns, verbs, adjectives and adverbs according to WordNet.** ###Code # Well, I tried to store the number of senses in dict() # but after many trials I still failed... or say, stuck in a nested for loop. ###Output _____no_output_____ ###Markdown **28. Use one of the predefined similarity measures to score the similarity of each of the following pairs of words. Rank the pairs in order of decreasing similarity. How close is your ranking to the order given here, an order that was established experimentally by (Miller & Charles, 1998): car-automobile, gem-jewel, journey-voyage, boy-lad, coast-shore, asylum-madhouse, magician-wizard, midday-noon, furnace-stove, food-fruit, bird-cock, bird-crane, tool-implement, brother-monk, lad-brother, crane-implement, journey-car, monk-oracle, cemetery-woodland, food-rooster, coast-hill, forest-graveyard, shore-woodland, monk-slave, coast-forest, lad-wizard, chord-smile, glass-magician, rooster-voyage, noon-string.** ###Code def similarities(w1, w2): print('Path similarity:', w1.path_similarity(w2)) print('Leacock-Chodorow similarity:', w1.lch_similarity(w2)) print('Wu-Palmer similarity:', w1.wup_similarity(w2)) car = wn.synset('car.n.01') automobile = wn.synset('automobile.n.01') gem = wn.synset('gem.n.01') jewel = wn.synset('jewel.n.01') journey = wn.synset('journey.n.01') voyage = wn.synset('voyage.n.01') boy = wn.synset('boy.n.01') lad = wn.synset('lad.n.01') coast = wn.synset('coast.n.01') shore = wn.synset('shore.n.01') asylum = wn.synset('asylum.n.01') madhouse = wn.synset('madhouse.n.01') magician = wn.synset('magician.n.01') wizard = wn.synset('wizard.n.01') midday = wn.synset('midday.n.01') noon = wn.synset('noon.n.01') furnace = wn.synset('furnace.n.01') stove = wn.synset('stove.n.01') food = wn.synset('food.n.01') fruit = wn.synset('fruit.n.01') bird = wn.synset('bird.n.01') cock = wn.synset('cock.n.01') crane = wn.synset('crane.n.01') tool = wn.synset('tool.n.01') implement = wn.synset('implement.n.01') brother = wn.synset('brother.n.01') monk = wn.synset('monk.n.01') oracle = wn.synset('oracle.n.01') cemetery = wn.synset('cemetery.n.01') woodland = wn.synset('woodland.n.01') rooster = wn.synset('rooster.n.01') hill = wn.synset('hill.n.01') forest = wn.synset('forest.n.01') graveyard = wn.synset('graveyard.n.01') slave = wn.synset('slave.n.01') chord = wn.synset('chord.n.01') smile = wn.synset('smile.n.01') glass = wn.synset('glass.n.01') string = wn.synset('string.n.01') similarities(boy, lad) ###Output Path similarity: 0.3333333333333333 Leacock-Chodorow similarity: 2.538973871058276 Wu-Palmer similarity: 0.6666666666666666 ###Markdown Binomial Distribution ###Code import numpy as np import seaborn from scipy.stats import binom import matplotlib.pyplot as plt data = binom.rvs(n=17, p=0.7, loc=0, size=1010) ax = seaborn.distplot(data, kde=True, color= 'blue', hist_kws= {'linewidth':22, 'alpha':0.77}) ax.set(xlabel='Binomial', ylabel='Frequency') #Initializing values n = 6 x = 4 p = 0.25 q = 0.75 binomial = 
binom.pmf(x, n, p) print("The probability of 4 patients recover: ", np.round(binomial, 6)) #I ask python to return 10000 binomial random variables binom_sim = binom.rvs(n=6, p=0.25, size=10000) print("Mean: ", np.mean(binom_sim)) print("SD: ", np.std(binom_sim, ddof=1)) #plotting the histogram plt.hist(binom_sim, bins = 6, density=True) plt.xlabel("x") plt.ylabel("density") plt.show() patients = {"recovery" : list(range(0, n+1))} patients['recovery'] #importing pandas to create data frame import pandas as pd binom_table = pd.DataFrame(patients) binom_table prob = lambda x: binom.pmf(x, n, p) binom_table['probability'] = binom_table['recovery'].apply(prob) binom_table binom_table.head() binom_table['probability'].plot.bar(x="recovery", y="Probability") plt.title("Probability distribution of patients recovery") plt.xlabel("No of patients") plt.ylabel("Probability") # Initializing the values p = 0.8 n = 10 x = 7 binomial = binom.pmf(x, n, p) print("The probability of having 7 successes in 10 attempts: ", np.round(binomial, 4)) ###Output The probability of having 7 successes in 10 attempts: 0.2013 ###Markdown Poisson Distribution ###Code #initializing values p = 0.1 n = 9 x = 9 q = 0.9 binomial = binom.pmf(x, n, q) print("Probability that all the nine wells fail: ", np.round(binomial, 4)) # importing poisson function from scipy.stats module from scipy.stats import poisson prob = poisson.pmf(4, 3) print("Probability:", np.round(prob, 4)) # average birds observed singing in a minute lambd = 3 # generating a range of k values k_values = np.arange(0, 25) # create a distribution variable distribution = np.zeros(k_values.shape[0]) #writing a for loop for i in range(k_values.shape[0]): distribution[i] = poisson.pmf(i, lambd) #plotting a bar plot plt.bar(k_values, distribution) # initializing the k and λ values λ = 3/20 k = 1 prob = poisson.pmf(k, λ) print("Probability: ", np.round(prob, 4)) ###Output Probability: 0.1291 ###Markdown chapter2 pandas 시작하기판다스는 데이터프레임과 시르즈라는 자료형과 데이터 분석을 위한 다양한 기능을 제공하는 파이썬 라이브러리 입니다. 또 판다스는 파이썬 언어만 사용할 줄 알아도 데이터 분석을 바로 시작할 수 있을뿐만 아니라 반복되는 데이터 분석 작업을 프로그램으로 만들어 쉽게 해결 할 수 있다는 장점이 있습니다. 이번 장에서는 판다스의 **기초 개념**을 정리하고 몇 가지 간단한 실습을 통해 판다스가 어떻게 동작하는지 알아보겠습니다. 목차는 다음과 같습니다. - 2-1 데이터 집합 불러오기 - 2-2 데이터 추출하기 - 2-3 기초적인 통계 계산하기 - 2-4 그래프 그리기 2-1 데이터 집합 불러오기 데이터 분석의 시작은 데이터 불러오기부터데이터 분석을 위해 가장 먼저 해야 할 일은 무엇일까요? 갭마인더 데이터 집합 불러오기 1. 판다스의 여러 기능을 사용하려면 판다스 라이브러리를 불러와야 합니다. 다음과 같이 입력하여 판다스 라이브러리를 불러오세요. ###Code import pandas ###Output _____no_output_____ ###Markdown 2.갭마인더 데이터 집합을 불러오려면 read_csv 메서드를 사용해야 합니다. read_csv메서드는 기본적으로 쉼표(,)로 열이 구분되어 있는 데이터를 불러옵니다. 하지만 갭마인더는 열이 탭(tap)으로 구분되어 있기 때문에 read_csv 메서드를 호출할 때 열이 탭으로 구분되어 있다고 미리 알려주어야 합니다.sep 속성값으로 \t를 지정하세요. ###Code df = pandas.read_csv('./data/gapminder.tsv', sep = '\t') df ###Output _____no_output_____ ###Markdown 3.판다스에 있는 메서드를 호출하려면 pandas와 점(,) 연산자를 사용해야 합니다. 그런데 매번 pandas라고 입력하려면 번거롭겠죠. 그래서 이를 해결하기 위해 관습적으로 pandas를 pd로 줄여 사용합니다. 다음과 같이 입력하면 pandas를 pd로 줄여 사용할 수 있습니다. 앞으로는 이 방법을 사용하겠습니다. ###Code import pandas as pd df = pd.read_csv('./data/gapminder.tsv', sep = '\t') df ###Output _____no_output_____ ###Markdown 시리즈와 데이터프레임갭마인더 데이터 집합을 잘 불러왔나요? 이번에는 판다스에서 사용되는 자료형을 알아볼 차례입니다. 판다스는 데이터를 효율적으로 다루기 위해 시리즈(Series)와 데이터프레임(DataFrame)이라는 자료형을 사용합니다. 데이터프레임은 엑셀에서 볼 수 있는 시트(Sheet)와 동일한 개념이며 시리즈는 시트의 열 1개를 의미합니다. 파이썬으로 비유하여 설명하면 데이터프레임은 시리즈들이 각 요소가 되는 딕셔너리(Dictionary)라고 생각하면 됩니다. 1.이번에는 df에 저장된 값이 정말 데이터프레임이라는 자료형인지 확인해 보겠습니다. 실행 결과를 보면 판다스의 데이터프레임이라는 것을 알 수 있습니다. type 메서드는 자료형을 출력해 줍니다. 앞으로 자주 사용할 메서드이므로 꼭 기억해 두기 바랍니다. 
###Code print(type(df)) ###Output <class 'pandas.core.frame.DataFrame'> ###Markdown 2.데이터프레임은 자신이 가지고 있는 데이터의 행과 열의 크기에 대한 정보를 shape라는 속성에 저장하고 있습니다. 다음을 입력하여 실행하면 갭마인더의 행과 열의 크기를 확인할 수 있습니다. 1번째 값은 행의 크기이고 2번째 값은 열의 크기입니다. ###Code print(df.shape) ###Output (1704, 6) ###Markdown 3.이번에는 갭마인더에 어떤 정보가 들어 있는지 알아보겠습니다. 먼저 열을 살펴보겠습니다. 과정 3에서 shape 속성을 사용했던 것처럼 columns 속성을 사용하면 데이터프레임의 열 이름을 확인할 수 있습니다. 갭마인더를 구성하는 열 이름은 각각 country,continent,year,lifeExp,pop,gdpPercap 입니다. ###Code print(df.columns) ###Output Index(['country', 'continent', 'year', 'lifeExp', 'pop', 'gdpPercap'], dtype='object') ###Markdown 4.데이터프레임을 구성하는 값의 자료형은 데이터프레임의 dtypes 속성이나 info 메서드로 쉽게 확인할 수 있습니다. ###Code print(df.dtypes) print(df.info()) ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1704 entries, 0 to 1703 Data columns (total 6 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 country 1704 non-null object 1 continent 1704 non-null object 2 year 1704 non-null int64 3 lifeExp 1704 non-null float64 4 pop 1704 non-null int64 5 gdpPercap 1704 non-null float64 dtypes: float64(2), int64(2), object(2) memory usage: 80.0+ KB None ###Markdown 판다스와 파이썬 자료형 비교판다스와 파이썬은 같은 자료형도 다르게 인식합니다. 02-2 데이터 추출하기 열 단위 데이터 추출하기 1.다음은 데이터프레임(df)에서 열이름이 country인 열을 추출하여 country_df에 저장한 것입니다. type 메서드를 사용하면 country_df에 저장된 데이터의 자료형이 시리즈라는 것을 확인할 수 있습니다. 시리즈도 head, tail 메서드를 가지고 있기 때문에 head,tail 메서드로 가장 앞이나 뒤에 있는 5개의 데이터를 출력할 수 있습니다. ###Code country_df = df['country'] print(type(country_df)) print(country_df.head()) print(country_df.tail()) ###Output 1699 Zimbabwe 1700 Zimbabwe 1701 Zimbabwe 1702 Zimbabwe 1703 Zimbabwe Name: country, dtype: object ###Markdown 2.리스트에 열 이름을 전달하면 여러 개의 열을 한 번에 추출할 수 있습니다.이때 1개의 열이 아니라 2개 이상의 열을 추출했기 때문에 시리즈가 아니라 데이터프레임을 얻을 수 있습니다. ###Code subset = df[['country','continent','year']] print(type(subset)) print(subset.head()) print(subset.tail()) ###Output country continent year 1699 Zimbabwe Africa 1987 1700 Zimbabwe Africa 1992 1701 Zimbabwe Africa 1997 1702 Zimbabwe Africa 2002 1703 Zimbabwe Africa 2007 ###Markdown 행 단위 데이터 추출하기데이터를 행단위로 추출하려면 **loc,iloc** 속성을 사용해야 합니다. 인덱스와 행 번호 개념 알아보기 loc 속성으로 행 데이터 추출하기 ###Code print(df.loc[0]) print(df.loc[99]) print(df.loc[2]) ###Output country Afghanistan continent Asia year 1962 lifeExp 31.997 pop 10267083 gdpPercap 853.10071 Name: 2, dtype: object ###Markdown 2.만약 데이터프레임의 마지막 행 데이터를 추출하여면 어떻게 해야 할까요?마지막 행데이터의 인덱스를 알아내야 합니다. ###Code number_of_rows = df.shape[0] last_row_index = number_of_rows - 1 print(df.loc[last_row_index]) ###Output country Zimbabwe continent Africa year 2007 lifeExp 43.487 pop 12311143 gdpPercap 469.709298 Name: 1703, dtype: object ###Markdown 3.또 다른 방법 ###Code print(df.tail(n=1)) ###Output country continent year lifeExp pop gdpPercap 1703 Zimbabwe Africa 2007 43.487 12311143 469.709298 ###Markdown 4. 만약 인덱스가 0,99,999인 데이터를 한 번에 추출하려면 리스트에 원하는 인덱스를 담아 loc 속성에 전달하면 됩니다. ###Code print(df.loc[[0,99,999]]) ###Output country continent year lifeExp pop gdpPercap 0 Afghanistan Asia 1952 28.801 8425333 779.445314 99 Bangladesh Asia 1967 43.453 62821884 721.186086 999 Mongolia Asia 1967 51.253 1149500 1226.041130 ###Markdown iooc 속성으로 행 데이터 추출하기 1. 이번에는 iloc 속성으로 행데이터를 추출하는 방법에 대해 알아보겠습니다. loc 속성은 데이터프레임의 인덱스를 사용하여 데이터를 추출했지만 iloc 속성은 데이터 순서를 의미하는 행 번호를 사용하여 데이터를 추출합니다. 지금은 인덱스와 행 번호가 동일하여 동일한 결괏값이 출력됩니다. 다음은 iloc속성에 1을 전달하여 데이터를 추출한 것입니다. 
###Code print(df.iloc[1]) print(df.iloc[99]) ###Output country Bangladesh continent Asia year 1967 lifeExp 43.453 pop 62821884 gdpPercap 721.186086 Name: 99, dtype: object ###Markdown 2.iloc 속성은 음수를 사용해도 데이터를 추출할 수 있습니다.다음은 -1을 전달하여 마지막 행 데이터를 추출한 것입니다. 하지만 데이터프레임에 아예 존재하지 않는 행 번호를 전달하면 오류가 발생합니다. ###Code print(df.iloc[-1]) print(df.iloc[1710]) ###Output _____no_output_____ ###Markdown 3. iloc 속성도 여러 데이터를 한 번에 추출할 수 있습니다. loc 속성을 사용했던 것처럼 원하는 데이터의 행 번호를 리스트에 담아 전달하면 됩니다. ###Code print(df.iloc[[0, 99, 999]]) ###Output country continent year lifeExp pop gdpPercap 0 Afghanistan Asia 1952 28.801 8425333 779.445314 99 Bangladesh Asia 1967 43.453 62821884 721.186086 999 Mongolia Asia 1967 51.253 1149500 1226.041130 ###Markdown loc, iloc 속성 자유자재로 사용하기loc, iloc 속성을 좀더 자유자재로 사용하려면 추출할 데이터의 행과 열을 지정하는 방법을 알아야 합니다. 두 속성 모두 추출할 데이터의 행을 먼저 지정하고 그런 다음 열을 지정하는 방법으로 데이터를 추출합니다. 즉 df.loc[[행],[열]]이나 df.iloc[[행],[열]]과 같은 방법으로 코드를 작성하면 됩니다. 행과 열을 지정하는 방법은 슬라이싱 구문을 사용하는 방법과 range 메서드를 사용하는 방법이 있습니다. 데이터 추출하기 - 슬라이싱 구문, range 메서드 1. 슬라이싱 구문으로 데이터 추출하기다음은 모든 행(:)의 데이터에 대해 year,pop 열을 추출하는 방법입니다. 이때 loc와 iloc 속성에 전달하는 열 지정값은 반드시 형식에 맞게 전달해야 합니다. 예를 들어 loc 속성의 열 지정값에 정수 리스트를 전달하면 오류가 발생합니다. ###Code subset = df.loc[:,['year','pop']] print(subset.head()) subset = df.iloc[:,[2,3,-1]] print(subset.head()) ###Output year lifeExp gdpPercap 0 1952 28.801 779.445314 1 1957 30.332 820.853030 2 1962 31.997 853.100710 3 1967 34.020 836.197138 4 1972 36.088 739.981106 ###Markdown 2. range 메서드로 데이터 추출하기이번에는 iloc 속성과 파이썬 내장 메서드인 range를 응용하는 방법을 알아보겠습니다.range메서드는 지정한 구간의 정수 리스트를 반환해 줍니다. iloc 속성의 열 지정값에는 정수 리스트를 전달해야 한다는 점과 range 메서드의 반환값이 정수 리스트인 점을 이용하여 원하는 데이터를 추출하는 것이죠. 그런데 range 메서드는 조금 더 정확하게 말하면 지정한 범위의 정수 리스트를 반환하는 것이 아니라 제네레이터를 반환합니다. ###Code small_range = list(range(5)) print(small_range) print(type(small_range)) subset = df.iloc[:,small_range] print(subset.head()) small_range = list(range(3,6)) print(small_range) subset = df.iloc[:,small_range] print(subset.head()) ###Output lifeExp pop gdpPercap 0 28.801 8425333 779.445314 1 30.332 9240934 820.853030 2 31.997 10267083 853.100710 3 34.020 11537966 836.197138 4 36.088 13079460 739.981106 ###Markdown 3.range 메서드에 range(0, 6, 2) 와 같은 방법으로 3개의 인자를 전달하면 어떻게 될까요? 0부터 5까지 2만큼 건너뛰는 제네레이터를 생성합니다.이 제네레이터를 리스트로 변환하면 범위는 0~5이고 짝수로 된 정수 리스트를 얻을 수 있죠. ###Code small_range = list(range(0,6,2)) subset = df.iloc[:,small_range] print(subset.head()) ###Output country year pop 0 Afghanistan 1952 8425333 1 Afghanistan 1957 9240934 2 Afghanistan 1962 10267083 3 Afghanistan 1967 11537966 4 Afghanistan 1972 13079460 ###Markdown 4. 슬라이싱 구문과 range 메서드 비교하기그런데 실무에서는 range 메서드보다는 간편하게 하숑할 수 있는 파이썬 슬라이싱 구문을 더 선호합니다. range메서드가 반환한 제네레이터를 리스트로 변환하는 등의 과정을 거치지 않아도 되기 때문이죠. 예를 들어 list(range(3))과 [:3]의 결괏값은 동일합니다. ###Code subset = df.iloc[:,:3] print(subset.head()) ###Output country continent year 0 Afghanistan Asia 1952 1 Afghanistan Asia 1957 2 Afghanistan Asia 1962 3 Afghanistan Asia 1967 4 Afghanistan Asia 1972 ###Markdown 5.0:6:2를 열 지정값에 전달하면 과정 3에서 얻은 결괏값과 동일한 결괏값을 얻을 수 있습니다. ###Code subset = df.iloc[:,0:6:2] print(subset.head()) ###Output country year pop 0 Afghanistan 1952 8425333 1 Afghanistan 1957 9240934 2 Afghanistan 1962 10267083 3 Afghanistan 1967 11537966 4 Afghanistan 1972 13079460 ###Markdown 6. loc, iloc 속성 자유자재로 사용하기만약 iloc 속성으로 0,99,999 번째 행의 0,3,5번째 열 데이터를 추출하려면 다음과 같이 코드를 작성하면 됩니다. ###Code print(df.iloc[[0,99,999],[0,3,5]]) ###Output country lifeExp gdpPercap 0 Afghanistan 28.801 779.445314 99 Bangladesh 43.453 721.186086 999 Mongolia 51.253 1226.041130 ###Markdown 7. 
iloc 속성의 열 지정값으로 정수 리스트를 전달하는 것이 간편해 보일 수 있지만 이렇게 작성한 코드는 나중에 어떤 데이터를 추출하기 위한 코드인지 파악하지 못 할 수도 있습니다. 그래서 보통은 다음과 같은 방법으로 loc 속성을 이용하여 열 지정값으로 열 이름을 전달합니다. ###Code print(df.loc[[0, 99, 999], ['country','lifeExp','gdpPercap']]) ###Output country lifeExp gdpPercap 0 Afghanistan 28.801 779.445314 99 Bangladesh 43.453 721.186086 999 Mongolia 51.253 1226.041130 ###Markdown 8.앞으로 배운 내용을 모두 응용하여 데이터를 추출해 볼까요? 다음은 인덱스가 10인 행부터 13인 행의 country, lifeExp, gdpPercap 열 데이터를 추출하는 코드입니다. ###Code print(df.loc[10:13,['country','lifeExp','gdpPercap']]) ###Output country lifeExp gdpPercap 10 Afghanistan 42.129 726.734055 11 Afghanistan 43.828 974.580338 12 Albania 55.230 1601.056136 13 Albania 59.280 1942.284244 ###Markdown 2-3 기초적인 통계 계산하기 ###Code print(df.head(n=10)) ###Output country continent year lifeExp pop gdpPercap 0 Afghanistan Asia 1952 28.801 8425333 779.445314 1 Afghanistan Asia 1957 30.332 9240934 820.853030 2 Afghanistan Asia 1962 31.997 10267083 853.100710 3 Afghanistan Asia 1967 34.020 11537966 836.197138 4 Afghanistan Asia 1972 36.088 13079460 739.981106 5 Afghanistan Asia 1977 38.438 14880372 786.113360 6 Afghanistan Asia 1982 39.854 12881816 978.011439 7 Afghanistan Asia 1987 40.822 13867957 852.395945 8 Afghanistan Asia 1992 41.674 16317921 649.341395 9 Afghanistan Asia 1997 41.763 22227415 635.341351 ###Markdown 그룹화한 데이터의 평균 구하기 1. lifeExp 열을 연도별로 그룹화하여 평균 계산하기예를 들어 연도별 lifeExp열의 평균을 계산하려면 어떻게 해야 할까요? 데이터를 year 열로 그룹화하고 lifeExp 열의 평균을 구하면 됩니다. ###Code print(df.groupby('year')['lifeExp'].mean()) ###Output year 1952 49.057620 1957 51.507401 1962 53.609249 1967 55.678290 1972 57.647386 1977 59.570157 1982 61.533197 1987 63.212613 1992 64.160338 1997 65.014676 2002 65.694923 2007 67.007423 Name: lifeExp, dtype: float64 ###Markdown 2. 과정 1에서 작성한 코드를 작은 단위로 나누어 살펴보겠습니다. 먼저 데이터프레임을 연도별로 그룹화한 결과를 살펴보겠습니다. groupby 메서드에 year열 이름을 전달하면 연도별로 그룹화한 country, continent, ..., gdpPercap 열을 모은 데이터프레임을 얻을 수 있습니다. ###Code grouped_year_df = df.groupby('year') print(type(grouped_year_df)) ###Output <class 'pandas.core.groupby.generic.DataFrameGroupBy'> ###Markdown 3.grouped_year_df를 출력하려면 과정 2에서 얻은 데이터프레임이 저장된 메모리의 위치를 알 수 있습니다. 이 결과를 통해 연도별로 그룹화한 데이터는 데이터프레임 형태로 현재 메모리의 0x10d9340f0이라는 위치에 저장되어 있음을 알 수 있습니다. ###Code print(grouped_year_df) ###Output <pandas.core.groupby.generic.DataFrameGroupBy object at 0x000001A4086832B0> ###Markdown 4.이어서 lifeExp 열을 추출한 결과를 살펴보겠습니다. ###Code grouped_year_df_lifeExp = grouped_year_df['lifeExp'] print(type(grouped_year_df_lifeExp)) ###Output <class 'pandas.core.groupby.generic.SeriesGroupBy'> ###Markdown 5.마지막으로 평균을 구하는 mean 메서드를 사용한 결과를 살펴보겠습니다. 
###Code mean_lifeExp_by_year = grouped_year_df_lifeExp.mean() print(mean_lifeExp_by_year) ###Output year 1952 49.057620 1957 51.507401 1962 53.609249 1967 55.678290 1972 57.647386 1977 59.570157 1982 61.533197 1987 63.212613 1992 64.160338 1997 65.014676 2002 65.694923 2007 67.007423 Name: lifeExp, dtype: float64 ###Markdown 6.lifeExp, gdpPercap 열의 평균값을 연도, 지역별로 그룹화하여 한 번에 계산하기 ###Code multi_group_var = df.groupby(['year','continent'])[['lifeExp','gdpPercap']].mean() print(multi_group_var) ###Output lifeExp gdpPercap year continent 1952 Africa 39.135500 1252.572466 Americas 53.279840 4079.062552 Asia 46.314394 5195.484004 Europe 64.408500 5661.057435 Oceania 69.255000 10298.085650 1957 Africa 41.266346 1385.236062 Americas 55.960280 4616.043733 Asia 49.318544 5787.732940 Europe 66.703067 6963.012816 Oceania 70.295000 11598.522455 1962 Africa 43.319442 1598.078825 Americas 58.398760 4901.541870 Asia 51.563223 5729.369625 Europe 68.539233 8365.486814 Oceania 71.085000 12696.452430 1967 Africa 45.334538 2050.363801 Americas 60.410920 5668.253496 Asia 54.663640 5971.173374 Europe 69.737600 10143.823757 Oceania 71.310000 14495.021790 1972 Africa 47.450942 2339.615674 Americas 62.394920 6491.334139 Asia 57.319269 8187.468699 Europe 70.775033 12479.575246 Oceania 71.910000 16417.333380 1977 Africa 49.580423 2585.938508 Americas 64.391560 7352.007126 Asia 59.610556 7791.314020 Europe 71.937767 14283.979110 Oceania 72.855000 17283.957605 1982 Africa 51.592865 2481.592960 Americas 66.228840 7506.737088 Asia 62.617939 7434.135157 Europe 72.806400 15617.896551 Oceania 74.290000 18554.709840 1987 Africa 53.344788 2282.668991 Americas 68.090720 7793.400261 Asia 64.851182 7608.226508 Europe 73.642167 17214.310727 Oceania 75.320000 20448.040160 1992 Africa 53.629577 2281.810333 Americas 69.568360 8044.934406 Asia 66.537212 8639.690248 Europe 74.440100 17061.568084 Oceania 76.945000 20894.045885 1997 Africa 53.598269 2378.759555 Americas 71.150480 8889.300863 Asia 68.020515 9834.093295 Europe 75.505167 19076.781802 Oceania 78.190000 24024.175170 2002 Africa 53.325231 2599.385159 Americas 72.422040 9287.677107 Asia 69.233879 10174.090397 Europe 76.700600 21711.732422 Oceania 79.740000 26938.778040 2007 Africa 54.806038 3089.032605 Americas 73.608120 11003.031625 Asia 70.728485 12473.026870 Europe 77.648600 25054.481636 Oceania 80.719500 29810.188275 ###Markdown 7. 그룹화한 데이터 개수 세기이번에는 그룹화한 데이터 개수가 몇 개인지 알아보겠습니다. 이를 통계에서는 '빈도수'라고 부릅니다. nunique 메서드를 사용하면 쉽게 구할 수 있습니다. 다음은 continent를 기준으로 데이터프레임을 만들고 country 열만 추출하여 데이터의 빈도수를 계산한 것입니다. ###Code print(df.groupby('continent')['country'].nunique()) ###Output continent Africa 52 Americas 25 Asia 33 Europe 30 Oceania 2 Name: country, dtype: int64 ###Markdown 2-4 그래프 그리기 1. 먼저 그래프와 연관된 라이브러리를 불러옵니다. 
###Code %matplotlib inline import matplotlib.pyplot as plt global_yearly_life_expectancy = df.groupby('year')['lifeExp'].mean() print(global_yearly_life_expextancy) global_yearly_life_expectancy.plot() ###Output _____no_output_____ ###Markdown Logical Operation ###Code a, b, c = 1,2,3 ###Output _____no_output_____ ###Markdown AND operationboth the statements below are sameAND operation both the conditions should be satisfied ###Code c > a and c > b (c > a) & (c > b) ###Output _____no_output_____ ###Markdown both the statements below are sameOR operation either one condition can be satisfied ###Code c < a or c > b (c < a) | (c > b) i = 10 i % 2 == 0 i = 11 i % 2 == 0 # not operator, response is inverse print(i) not( i < 20) ##comparision operator ## all the logical operations output will be boolean - true or false a == b a != b a > c b > a a >= c b <= a ###Output _____no_output_____ ###Markdown string ###Code ##string place holder nm = "annworks" nm nm + " " +"is 1 years old company" age = 1 #nm + " is " + age + " old" # this will throw an error, string and int cannot be added nm + " is " + str(age) + " old" # convert age as string and concatenate nm = "Eddy" greet = ("%s you are welcome!!") greet greet%(nm) greet1 = ( "you are welcome!! %s") greet1 greet1%(nm) greet2 = ( "Hey %s , you are welcome!! ") greet2 greet2%(nm) cmp = "%s %s is great company to work with" cmp%('ann','works') cmp = "%s is %d a old company" cmp%("annworks",1) cmp%("annworks","awesome") # throws an error, 2nd variable should be number and not string ###Output _____no_output_____ ###Markdown Tuples - denoted by () ###Code name = ("ann","work") print(type(name)) print("last name : {}".format(name[1])) print(type(format(name[1]))) ## ways of displaying the variables print("first name : {}".format(name[0])) print("first name : ",(name[0])) print("first name : %s" %(name[0])) print(name[1]) mytup = ( 'welcome', 21.223, "hello world", 70.2 ) print(mytup) print(mytup[2]) ## displays the 3rd value as the index starts from 0 age = (40,35,8,5) s = sorted(age) print(s) age[1] s[1] ## sort the elements which is string fruit = ("orange","apple","grapes") f = sorted(fruit) print(f) ## output is list f1 = sorted(fruit) print(tuple(f1)) # output is tuple ## tuple data slicing print(age) print(age[1]) print(age[0:]) print(age[:len(age)]) print(age[:-1]) print(age[-0:-1]) ## add multiple tuples together ad_tup = age + name ad_tup del age[1] # tuple cannot delete an object in it del age # tuple can be deleted completly age # its not available after deleting print(len(name)) ## repeation is possible name * 5 ###Output 2 ###Markdown set - denoted by {} ###Code ## set uses {} ## add new objects to set # set by default remove duplicates ss = {"apple", "banana", "apple", "grapes", "grapes", "cherry"} print(ss ) ## add an element ss.add("mango") print(ss) ## remove an element ss.remove("apple") print(ss) ###Output {'apple', 'banana', 'grapes', 'cherry'} {'grapes', 'apple', 'mango', 'banana', 'cherry'} {'grapes', 'mango', 'banana', 'cherry'} ###Markdown List - denoted by [ ] ###Code fruits = ["apple","bananna","orange","grapes","kiwi"] print(fruits) print(type(fruits)) ## index starts from zero print(fruits[1]) # gets the second object from the list print(fruits[0:]) print(fruits[0:-2]) # remove the last 3 objects from the list print(fruits[-3:-1]) print(fruits[:-3:-1]) print(fruits[::-1]) ## add new objects to the list fruits.append("berry") fruits ## add new element to the list, where as we cannot add or remove an element from tuple 
fruits.remove("grapes") fruits # replace berry with cherry fruits[4] = "cherry" fruits # finds the length of the list len(fruits) ## delete an object form the list del fruits[3] # 4th objects grapes is removed form the list fruits ## new list snack = ["chocolate","jam","nuts","apple"] snack ## check if both the length are equal len(fruits) == len(snack) ## get the complete list of objects from both the list new = fruits + snack new ## set will help to remove the duplicates set(new) # list with mixed variables emp = [100,"Eddy","annworks",25,24000,"chennai"] print(emp) print(emp[2]) numbers = [3,1,6,2,8,5,6,1] numbers print(max(numbers)) print(min(numbers)) print(sum(numbers)) print(len(numbers)) 5 in numbers numbers.index(5) ## find if the element is present in the variable print(fruits) "orange" in fruits fruits.index("orange") ## sort in list a = [10,2,3,6,4,8] print(a) sorted(a) ## this will not change the original value a.sort() # original values gets changed print(a) ## pop function fruit = [ "apple","banana","orange"] #fruit.pop(2) return_value = fruit.pop(2) print("return value:",return_value) # displays and removes the object print(fruit) # Joining list ## add a symbol or newline or table before each elements in the list # add a new lines for each object print(fruits) aw = "\n".join(fruits) print(aw) # add a tab before each object aw = "\t".join(fruits) print(aw) aww = "\n".join(["Hey there ","How are doing ?","Its raining here"]) print(aww) aww = "\t".join(["Hey there","How are doing ?","Its raining here"]) print(aww) # join not works for integer but only for strings i = [1,2,3] aw = "\n".join(i) print(aw) ###Output ['apple', 'bananna', 'orange', 'grapes', 'kiwi'] apple bananna orange grapes kiwi apple bananna orange grapes kiwi Hey there How are doing ? Its raining here Hey there How are doing ? Its raining here ###Markdown Dictonary - denoted by {} ###Code students = {"Eddy":26,"Maria":23,"Jack":28} print(students) type(students) print(sorted(students)) # sort dict print(sorted(students.values())) print(sorted(students.items())) students["Jack"] # pass the name of the student and get the age ## update a dictonary students["Jack"] = 27 # jack age is updated from 28 to 27 students ## delete an object in a dic del students["jack"] ## its case sensitive students ## delete an object in a dic del students["Jack"] ## its case sensitive students # length of dict len(students) ## each key should be unique std = {"Eddy":26,"Eddy":23,"Eddy":28} print(std) # holds only the last value when the keys are same std["Eddy"] ## dictonary of dictonary ## one level of drill down elements = {'hydrogen': 1, 'helium': 2, 'carbon': 6} elements['carbon'] ## two level of drill down elements1 = {'hydrogen': {'number': 1, 'weight': 1.00794, 'symbol': 'H'}, 'helium': {'number': 2, 'weight': 4.002602, 'symbol': 'He'}} elements1['helium']['symbol'] # three level of drill down elements2 = {'hydrogen': {'number': 1, 'weight': 1.00794, 'symbol': 'H'}, 'helium': {'number': 2, 'weight': 4.002602, 'symbol': {'hydrogen': 1, 'helium': 2, 'carbon': 16}}} elements2['helium']['symbol']['carbon'] ###Output _____no_output_____ ###Markdown Multi-armedBandits - Chapter 2 A k-armed Bandit ProblemBandit problem is a simplified setting for reinforcement learning in which you are faced repeatedly with a choice among k different options (actions).After each choice you receive a numerical reward chosen from a stationary probability distribution that depends on the action you selected. 
Your objective is to maximize the expected totalreward over some time period, for example, over 1000 action selections, or time steps. In our k-armed bandit problem, each of the k actions has an expected or mean reward given thatthat action is selected; let us call this the value of that action. We denote the action selected on timestep t as At , and the corresponding reward as Rt . The value then of an arbitrary **action a**, denotedq∗ (a), is the **expected reward** given that a is selected:$$q_{*}(a) = E \{ \ R_t \ | \ A_t=a \} $$ If you knew the value of each action, then it would be trivial to solve the k-armed bandit problem: youwould always select the action with highest value. We assume that you do not know the action valueswith certainty, although you may have estimates. We denote the **estimated value** of action a at timestep t as ** $Q_t(a)$ **. We would like $Q_t(a)$ to be close to $q_∗$(a).If you maintain estimates of the action values, then at any time step there is at least one action whoseestimated value is greatest. We call these the **greedy actions**. When you select one of these actions,we say that you are **exploiting** your current knowledge of the values of the actions. If instead youselect one of the nongreedy actions, then we say you are **exploring**, because this enables you to improveyour estimate of the nongreedy action’s value. **Exploitation** $\to$ Maximize the expected reward on the **one step**.**Exploration** $\to$ Produce the greater total reward in the **long run**.Whether it is better to explore or exploit depends in a complex way on the precise values of the estimates, uncertainties, and the number of remaining steps. ###Code import numpy as np np.warnings.filterwarnings('ignore') from enum import Enum class EstimationMethod(Enum): """This is an enum for representing different methods of estimating the value of each action""" SAMPLE_AVERAGE_INCREMENTAL = 1 CONSTANT_STEP_SIZE = 2 GRADIENT_BANDIT = 3 class ActionSelectionMethod(Enum): """This is an enum representing different methods of selecting the action""" EPSILON_GREEDY = 1 UPPER_CONFIDENCE_BOUND = 2 GRADIENT_BANDIT = 3 class Bandit: """This class represents a Bandit problem k-armed bandit problem is a problem in reinforcement learning with simplified setting. You are faced repeatedly with a choice among k different options, or actions. After each choice you receive a numerical reward chosen from a stationary probability distribution that depends on the action you selected. 
Attributes: arms (int) : number of arms of the problem (k) estimation_method (EstimationMethod) : the method of estimating the action value action_selection_method (ActionSelectionMethod) : the method of selecting next action c (float) : it is the coefficient of UCB (c > 0) epsilon (float) : it is the probability of acting randomly in epsilon-greedy algorithm alpha (float) : is the step size in gradient abd constant step size algorithm random_walk (float) : is the standard derivation of normal distribution for adding to Q* start_equal (bool) : a boolean representing whether all q* start in equal values (Q*(a)=0) initial_value_estimation (float) : initial value for actions estimation value q_star_mean (float) : normal distribution with given mean for Q*(a) gradient_baseline (bool) : whether the gradient has baseline or not time (int) : time step which we are in action_probability (np,array) : the probability of choosing each action (gradient algorithm) average_reward (float) : average reward until the current time q_estimations (np.array) : estimation of values for each action action_counts (np.array) : number of times which each action have been chosen q_true_values (np.array) : the true value of each action (Q*(a)) """ def __init__(self, arms=10, estimation_method=EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL, epsilon=0.1,gradient_baseline=False,alpha=0.1, start_equal=False, random_walk=0, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, c=2, initial_value_estimation=0.0, start_time=0,q_star_mean=0): # parameters self.arms = arms self.estimation_method = estimation_method self.action_selection_method = action_selection_method self.c = c self.epsilon = epsilon self.alpha = alpha self.random_walk = random_walk self.start_equal = start_equal self.initial_value_estimation = initial_value_estimation self.start_time = start_time self.q_star_mean = q_star_mean self.gradient_baseline = gradient_baseline # environment self.time = self.start_time self.action_probability = np.zeros(self.arms) self.action_probability.fill(1) self.average_reward = 0 self.q_estimations = np.zeros(self.arms) self.q_estimations.fill(self.initial_value_estimation) self.action_counts = np.zeros(self.arms) self.q_true_values = np.zeros(self.arms) if not self.start_equal: self.q_true_values = np.random.normal(self.q_star_mean, 1, self.arms) def start_new_environment(self): """reset all the environment parameters""" self.time = self.start_time self.action_probability = np.zeros(self.arms) self.action_probability.fill(1) self.average_reward = 0 self.q_estimations = np.zeros(self.arms) self.q_estimations.fill(self.initial_value_estimation) self.action_counts = np.zeros(self.arms) self.q_true_values = np.zeros(self.arms) if not self.start_equal: self.q_true_values = np.random.normal(self.q_star_mean, 1, self.arms) def select_action(self): """select one of the action according to action selection method""" if self.action_selection_method == ActionSelectionMethod.EPSILON_GREEDY: if np.random.random_sample() < self.epsilon: # select one of the actions randomly return np.random.choice(np.arange(self.arms)) else: # argmax by break tie randomly return np.random.choice(np.flatnonzero(self.q_estimations==self.q_estimations.max())) elif self.action_selection_method == ActionSelectionMethod.UPPER_CONFIDENCE_BOUND: ucb_value = self.q_estimations + \ self.c * np.sqrt(np.log(self.time) / self.action_counts) return np.argmax(ucb_value) elif self.action_selection_method == ActionSelectionMethod.GRADIENT_BANDIT: exp_h = 
np.exp(self.q_estimations) self.action_probability = exp_h / exp_h.sum() return np.random.choice(np.arange(self.arms), p=self.action_probability) def step(self, action): """simulate one step and return the reward""" reward = np.random.normal(self.q_true_values[action], scale=1.0) self.time += 1 self.average_reward = (self.average_reward * (self.time - 1) + reward) / self.time self.action_counts[action] += 1 if self.random_walk != 0: mu, sigma = 0, self.random_walk # mean and standard deviation self.q_true_values += np.random.normal(mu, sigma, self.arms) if self.estimation_method == EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL: self.q_estimations[action]+=(reward-self.q_estimations[action])/self.action_counts[action] elif self.estimation_method == EstimationMethod.CONSTANT_STEP_SIZE: self.q_estimations[action] += self.alpha * (reward - self.q_estimations[action]) elif self.estimation_method == EstimationMethod.GRADIENT_BANDIT: if self.gradient_baseline: baseline = self.average_reward else: baseline = 0 one = np.zeros(self.arms) one[action] = 1 self.q_estimations -= self.alpha*(reward-baseline)*(self.action_probability-one) return reward def simulate(self, runs=2000, time_steps=1000): """simulate for given number of runs with the given number of time steps""" best_action_counts = np.zeros((runs, time_steps)) rewards = np.zeros((runs, time_steps)) for r in range(runs): self.start_new_environment() for t in range(time_steps): action = self.select_action() reward = self.step(action) rewards[r, t] = reward if action == np.argmax(self.q_true_values): # it was the best action best_action_counts[r, t] = 1 # mean of all rewards and best action of all runs count in a particular time step return best_action_counts.mean(axis=0), rewards.mean(axis=0) ###Output _____no_output_____ ###Markdown Action-value MethodsWe begin by looking more closely at some simple methods for estimating the values of actions and for using the estimates to make action selection decisions. Recall that the true value of an action is the mean reward when that action is selected. One natural way to estimate this is by **averaging the rewards** actually received:$$ Q_t(a) = \frac{sum \ of \ rewards\ when \ a \ taken \ prior \ to \ t}{number \ of \ times \ a \ taken \ prior \ to \ t} = \frac{\sum_{i=1}^{t-1}R_i \ . \ 1_{A_i=a}}{\sum_{i=1}^{t-1} 1_{A_i=a}}$$*$1_{predicate}$ denotes the random variable that is 1 if predicate is true and 0 if it is not.*We call this the **sample-average** method for **estimating action values**. Of course this is just one way to estimate action values, and not necessarily the best one.The simplest action selection rule is to select **one of the actions** with the **highest estimated value**,that is, one of the greedy actions as defined in the previous section. If there is **more than one greedyaction**, then a selection is made among them in some arbitrary way, perhaps **randomly**. We write thisgreedy action selection method as:$$A_t = argmax \ Q_t(a)$$A simple alternative is to behave **greedily most of the time**, but every once in a while, say with small**probability ε**, instead **select randomly from among all the actions** with equal probability, independentlyof the action-value estimates. 
We call methods using this near-greedy action selection rule **ε-greedy**methods.**Exercise 2.1 In $\epsilon$-greedy action selection, for the case of two actions and $\epsilon$ = 0.5, what isthe probability that the greedy action is selected?**0.5 (there is a $\epsilon$ probablity of choosing greedy) + 0.5 (there is a 1-$\epsilon$ probability of choosing randomly ) * 0.5 (choosing one of the action is 1/2) = 0.75 The 10-armed Testbed ###Code import matplotlib.pyplot as plt import numpy as np #Figure 2.1 %matplotlib notebook #a sample from normal distribution with mean zero and unit variance for each action q_star_value = np.random.randn(10) # a mean q_star_value, unit variance normal distribution rewards = np.random.randn(500,10) + q_star_value plt.violinplot(dataset=rewards , showmeans=True) plt.xlabel("Action") plt.ylabel("Reward distribution") import matplotlib.pyplot as plt #Figure 2.2 epsilons = [0, 0.1, 0.01] epsilon_greedy_bandits = [Bandit(arms=10, estimation_method=EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=eps) for eps in epsilons] epsilon_greedy_best_action, epsilon_greedy_rewards = [], [] for bandit in epsilon_greedy_bandits: result = bandit.simulate(2000, 1000) epsilon_greedy_best_action.append(result[0]) epsilon_greedy_rewards.append(result[1]) %matplotlib inline plt.figure() plt.subplot(2, 1, 1) for eps, rewards in zip(epsilons, epsilon_greedy_rewards): plt.plot(rewards, label='epsilon = {}'.format(eps)) plt.xlabel('steps') plt.ylabel('average reward') plt.legend() plt.subplot(2, 1, 2) for eps, counts in zip(epsilons, epsilon_greedy_best_action): plt.plot(counts * 100, label='epsilon = {}'.format(eps)) plt.xlabel('steps') plt.ylabel('% optimal action') plt.legend() ###Output _____no_output_____ ###Markdown **Exercise 2.2: Bandit example Consider a k-armed bandit problem with k = 4 actions,denoted 1, 2, 3, and 4. Consider applying to this problem a bandit algorithm using$\epsilon$-greedy action selection, sample-average action-value estimates, and initial estimatesof Q1 (a) = 0, for all a. Suppose the initial sequence of actions and rewards is A1 = 1,R1 = 1, A2 = 2, R2 = 1, A3 = 2, R3 = 2, A4 = 2, R4 = 2, A5 = 3, R5 = 0. On someof these time steps the " case may have occurred, causing an action to be selected atrandom. On which time steps did this definitely occur? On which time steps could thispossibly have occurred?** after one time step : $A_1 = 1 , R_1 = -1 \to Q_2(1) = \frac{-1}{1} = -1 , Q_2(2) = 0 , Q_2(3) = 0 , Q_2(4) = 0$ (this choice of action could be both random and greedy)after two time step : $A_2 = 2 , R_2 = 1 \to Q_3(1) = -1 , Q_3(2) = \frac{1}{1} = 1 , Q_3(3) = 0, Q_3(4)=0$ (this choice of action could be both random and greedy)after three time step : $A_3 = 2 , R_3 = 2 \to Q_4(1) = -1 , Q_4(2) = \frac{1+2}{1+1} = 1.5 , Q_4(3) = 0 , Q_4(4) = 0$(this choice of action could be both random and greedy) after four time step : $A_4 = 2 , R_4 = 2 \to Q_5(1) = -1 , Q_5(2) = \frac{1+2+2}{1+1+1} = \frac{5}{3} , Q_5(3) = 0, Q_5(4) = 0$(this choice of action could be both random and greedy)after five time step : $A_5 = 3 , R_5 = 0 \to Q_6(1) = -1 , Q_6(2) = \frac{5}{3} , Q_6(3)=0,Q_6(4)=0$ (this one definitely was chosen randomly)**Exercise 2.3 In the comparison shown in Figure 2.2, which method will perform best inthe long run in terms of cumulative reward and probability of selecting the best action?How much better will it be? 
Express your answer quantitatively.**plotting both the cumlative average reward and the probility of slecting the best action we see that in both case strategiy with exploration will do better however smaller epsilon like 0.01 will improve slowly but eventually would perform better than epsilon=0.1. For 1000 steps the epsilon=0.1 startegy obtain a cumulative total reward of about 1300 while the greedy strategy obtains a total reward of about 1000 or an improvement of 30%. Incremental ImplementationThe action-value methods we have discussed so far all estimate action values as sampleaverages of observed rewards. We now turn to the question of how these averages can becomputed in a computationally **efficient** manner, in particular, with **constant memory**and constant per-time-step computation.$Q_n \ = \ \frac{R_1 \ + R_2 \ + \ ... \ + \ R_{n-1}}{n-1}$$Q_{n+1} \ = \ \frac{1}{n} \ \sum_{i=1}^{n} \ R_i$$ = \ \frac{1}{n} \ (R_n \ + \ \sum_{i=1}^{n-1} \ R_i )$$ = \ \frac{1}{n} \ ( \ R_n \ + \ (n-1) \ \frac{1}{n-1} \ \sum_{i=1}^{n-1} \ R_i \ )$$ = \ \frac{1}{n} \ ( \ R_n \ + \ (n-1) \ Q_n \ )$$ = \ \frac{1}{n} \ ( \ R_n \ + \ n Q_n \ - \ Q_n \ )$$ = \ Q_n \ + \ \frac{1}{n} \ [ \ R_n \ - \ Q_n \ ]$**NewEstimate <- OldEstimate + StepSize \[ Target - OldEstimatei \]** Tracking a Nonstationary ProblemWe someties face problems which are not stationary (which means **rewards probability change** over time) so it is a good idea to give **more weight** to **recent rewards** than the past rewards. One of ways to do so is by using a **constant step-size** parameter.$ Q_{n+1} \ = \ Q_n \ + \ \alpha \ [ \ R_n \ - \ Q_n \ ]$$ = \ \alpha \ R_n \ + \ (1-\alpha) \ Q_n$$ = \ \alpha \ R_n \ + \ (1-\alpha) \ [ \ \alpha \ R_{n-1} \ + \ (1-\alpha) \ Q_{n-1} \ ]$$ = \alpha \ R_n \ + \ (1-\alpha) \ \alpha \ R_{n-1} \ + \ (1-\alpha)^{2} \ Q_{n-1}$$ = \alpha \ R_n \ + \ (1-\alpha) \ \alpha \ R_{n-1} \ + \ (1-\alpha)^2 \ \alpha \ R_{n-2} \ + \ ... \ + \ (1-\alpha)^{n-1} \ \alpha \ R_1 \ + (1-\alpha)^n \ Q_1$$ = \ (1-\alpha)^n \ Q_1 \ + \sum_{i=1}^{n} \ \alpha \ (1-\alpha)^{n-i} \ R_i$**exponential recency-weighted average** $\to$ *weight decays exponentially according to the exponent on* $1-\alpha$**for ensuring convergence wih probability 1 : **$$ \sum_{n=1}^{\infty} \ \alpha_{n}(a) \ = \ \infty \ \ and \ \ \sum_{n=1}^{\infty} \ \alpha_{n}^{2}(a) \ < \ \infty$$**first conditon** $\to$ *steps are large enough to eventually overcome any initial conditions***second conditon** $\to$ *eventually the steps become small enough to assure convergence.*$\alpha_n(a) \ = \ \frac{1}{n}$ (sample average) $\to$ converges$\alpha_n(a) \ = \ \alpha$ (constan step-size) $\to$ never completely converge but continue to vary in response to the most recently received rewards and this is a good thing for nonstationary problems**Exercise 2.4 If the step-size parameters, $\alpha_n$ , are not constant, then the estimate $Q_n$ isa weighted average of previously received rewards with a weighting different from thatgiven by (2.6). 
What is the weighting on each prior reward for the general case, analogousto (2.6), in terms of the sequence of step-size parameters?**$Q_{n+1} \ = \ Q_n \ + \ \alpha_n \ [ \ R_n \ - \ Q_n \ ]$$ = \ \alpha_n \ R_n \ + \ (1-\alpha_n) \ Q_n$$ = \ \alpha_n \ R_n \ + \ (1-\alpha_n) \ [ \ \alpha_{n-1} \ R_{n-1} \ + \ (1-\alpha_{n-1}) \ Q_{n-1} \ ]$$ = \ \alpha_n \ R_n \ + \ (1-\alpha_n) \ \alpha_{n-1} \ R_{n-1} \ + \ (1-\alpha_n) \ (1-\alpha_{n-1}) \ Q_{n-1}$$ = \ \alpha_n \ R_n \ + \ (1-\alpha_n) \ \alpha_{n-1} \ R_{n-1} \ + \ (1-\alpha_n) \ (1-\alpha_{n-1}) \ \alpha_{n-2} \ R_{n-2} \ + \ ... \ + \ (1-\alpha_n) \ (1-\alpha_{n-1}) \ ... \ (1-\alpha_1) \ Q_1$ $ = \ \prod_{i=1}^{n} \ (1-\alpha_i) \ Q_1 + \ \sum_{i=1}^{n} \ \alpha_i \ \prod_{j=i+1}^{n} \ (1-\alpha_j) \ R_i$ **Exercise 2.5 (programming) Design and conduct an experiment to demonstrate thedifficulties that sample-average methods have for nonstationary problems. Use a modifiedversion of the 10-armed testbed in which all the $q_*$(a) start out equal and then takeindependent random walks (say by adding a normally distributed increment with meanzero and standard deviation 0.01 to all the $q_*$(a) on each step). Prepare plots likeFigure 2.2 for an action-value method using sample averages, incrementally computed,and another action-value method using a constant step-size parameter, $\alpha$ = 0.1. Use $\epsilon$ = 0.1 and longer runs, say of 10,000 steps.** ###Code import matplotlib.pyplot as plt #Exercise 2.5 sample_average_bandit = Bandit(arms=10, estimation_method=EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=0.1,start_equal=True, random_walk=0.01) constant_step_size_bandit = Bandit(arms=10, estimation_method=EstimationMethod.CONSTANT_STEP_SIZE, alpha=0.1,random_walk=0.01,epsilon=0.1, start_equal=True, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY) epsilon_greedy_best_action, epsilon_greedy_rewards = [], [] sample_average_result = sample_average_bandit.simulate(2000, 10000) sample_average_best_action = sample_average_result[0] sample_average_rewards = sample_average_result[1] constant_step_size_result = constant_step_size_bandit.simulate(2000, 10000) constant_step_size_best_action = constant_step_size_result[0] constant_step_size_rewards = constant_step_size_result[1] %matplotlib inline plt.figure() plt.subplot(2, 1, 1) plt.plot(sample_average_rewards, label='epsilon = {}'.format(0.1)) plt.plot(constant_step_size_rewards, label='epsilon = {}, alpha = {}'.format(0.1, 0.1)) plt.xlabel('steps') plt.ylabel('average reward') plt.legend() plt.subplot(2, 1, 2) plt.plot(sample_average_best_action * 100, label='epsilon = {}'.format(0.1)) plt.plot(constant_step_size_best_action * 100, label='epsilon = {}, alpha = {}'.format(0.1, 0.1)) plt.xlabel('steps') plt.ylabel('% optimal action') plt.legend() ###Output _____no_output_____ ###Markdown Optimistic Initial ValuesInitial action values can be used as a simple way to encourage **exploration**. Suppose that instead of setting the initial values to zero, we set them all to 5 then because it is really **optimistic** ($q_*(a)$ is a normal distribution with mean 0 and variance 1) then the reward will be leass than the starting estimates so the agent will switch to other actions and will explore more actions, but it is **not well suited** to **nonstationary** problems because its drive for exploration is inherently temporary. 
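###Markdown
One quick way to see why that drive for exploration is only temporary (a small added sketch, not part of the original notebook): with a constant step size $\alpha$, the exponential recency-weighted average above shows that the initial estimate $Q_1$ survives in $Q_{n+1}$ only with weight $(1-\alpha)^n$, so an optimistic $Q_1 = 5$ is almost completely washed out after a few dozen pulls of an arm.
###Code
alpha, q1 = 0.1, 5.0
for n in (1, 10, 50, 100):
    w = (1 - alpha) ** n               # weight still carried by the optimistic initial value
    print(n, round(w, 5), round(q1 * w, 3))
# After ~50 pulls only about 0.03 of the original +5 optimism remains,
# so nothing keeps pushing the agent to revisit that arm.
###Output
_____no_output_____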
###Code
import matplotlib.pyplot as plt
#Figure 2.3
optimistic_initial_value_bandit = Bandit(arms=10, estimation_method=EstimationMethod.CONSTANT_STEP_SIZE, alpha=0.1, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=0, initial_value_estimation=5.0)
realistic_initial_value_bandit = Bandit(arms=10, estimation_method=EstimationMethod.CONSTANT_STEP_SIZE, alpha=0.1, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=0.1, initial_value_estimation=0)
optimistic_result = optimistic_initial_value_bandit.simulate(2000, 1000)
optimistic_best_action = optimistic_result[0]
realistic_result = realistic_initial_value_bandit.simulate(2000, 1000)
realistic_best_action = realistic_result[0]
%matplotlib inline
plt.plot(optimistic_best_action * 100, label='Q1 = {} , epsilon = {}'.format(5, 0))
plt.plot(realistic_best_action * 100, label='Q1 = {} , epsilon = {}'.format(0, 0.1))
plt.xlabel('steps')
plt.ylabel('% optimal action')
plt.legend()
###Output
_____no_output_____
###Markdown
**Exercise 2.6: Mysterious Spikes The results shown in Figure 2.3 should be quite reliable because they are averages over 2000 individual, randomly chosen 10-armed bandit tasks. Why, then, are there oscillations and spikes in the early part of the curve for the optimistic method? In other words, what might make this method perform particularly better or worse, on average, on particular early steps?** If the initial actions selected when using the optimistic method happen by chance to be among the better choices, then the action-value estimates Q(a) for those plays are magnified, creating an emphasis on continuing to play them. This results in large action values being received on the initial draws and consequently very good initial play. Conversely, if the algorithm happens to select poor actions first, it performs poorly at the start, resulting in very poor initial play.

**Exercise 2.7: Unbiased Constant-Step-Size Trick In most of this chapter we have used sample averages to estimate action values because sample averages do not produce the initial bias that constant step sizes do (see the analysis leading to (2.6)). However, sample averages are not a completely satisfactory solution because they may perform poorly on nonstationary problems. Is it possible to avoid the bias of constant step sizes while retaining their advantages on nonstationary problems?
One way is to use a step size of** Upper-Confidence-Bound Action Selection ###Code import matplotlib.pyplot as plt #Figure 2.4 ucb_bandit = Bandit(arms=10, estimation_method=EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL, action_selection_method=ActionSelectionMethod.UPPER_CONFIDENCE_BOUND, c=2, start_time=1) epsilon_greedy_bandit = Bandit(arms=10, estimation_method=EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=0.1, start_time=1) ucb_result = ucb_bandit.simulate(2000, 1000) ucb_rewards = ucb_result[1] epsilon_greedy_result = epsilon_greedy_bandit.simulate(2000, 1000) epsilon_greedy_rewards = epsilon_greedy_result[1] %matplotlib inline plt.plot(ucb_rewards, label='c = {}'.format(2)) plt.plot(epsilon_greedy_rewards, label='epsilon = {}'.format(0.1)) plt.xlabel('steps') plt.ylabel('average reward') plt.legend() ###Output _____no_output_____ ###Markdown Gradient Bandit Algorithms ###Code import matplotlib.pyplot as plt #Figure 2.5 gradient_small_alpha_base_bandit=Bandit(arms=10, estimation_method=EstimationMethod.GRADIENT_BANDIT, alpha=0.1,gradient_baseline=True, q_star_mean=4, action_selection_method=ActionSelectionMethod.GRADIENT_BANDIT) gradient_small_alpha_base_best_action=gradient_small_alpha_base_bandit.simulate(2000, 1000)[0] gradient_big_alpha_base_bandit=Bandit(arms=10, estimation_method=EstimationMethod.GRADIENT_BANDIT, alpha=0.4,gradient_baseline=True, q_star_mean=4, action_selection_method=ActionSelectionMethod.GRADIENT_BANDIT) gradient_big_alpha_base_best_action=gradient_big_alpha_base_bandit.simulate(2000, 1000)[0] gradient_small_alpha_bandit=Bandit(arms=10, estimation_method=EstimationMethod.GRADIENT_BANDIT, alpha=0.1, gradient_baseline=False, q_star_mean=4, action_selection_method=ActionSelectionMethod.GRADIENT_BANDIT) gradient_small_alpha_best_action=gradient_small_alpha_bandit.simulate(2000,1000)[0] gradient_big_alpha_bandit = Bandit(arms=10, estimation_method=EstimationMethod.GRADIENT_BANDIT, alpha=0.4, gradient_baseline=False, q_star_mean=4, action_selection_method=ActionSelectionMethod.GRADIENT_BANDIT) gradient_big_alpha_best_action = gradient_big_alpha_bandit.simulate(2000, 1000)[0] %matplotlib inline plt.plot(gradient_small_alpha_base_best_action * 100,label='with baseline , alpha = {}'.format(0.1)) plt.plot(gradient_big_alpha_base_best_action * 100,label='with baseline , alpha = {}'.format(0.4)) plt.plot(gradient_small_alpha_best_action * 100, label='without baseline , alpha = {}'.format(0.1)) plt.plot(gradient_big_alpha_best_action * 100, label='without baseline , alpha = {}'.format(0.4)) plt.xlabel('steps') plt.ylabel('% optimal action') plt.legend() ###Output _____no_output_____ ###Markdown Associative Search (Contextual Bandits) Summary ###Code import matplotlib.pyplot as plt #Figure 2.6 epsilon_greedy_bandits = [Bandit(action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=2**power, estimation_method=EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL) for power in range(-7, -1)] average_rewards_epsilon_greedy = [] for bandit in epsilon_greedy_bandits: reward = bandit.simulate(2000,1000)[1].mean() average_rewards_epsilon_greedy.append(reward) gradient_bandit_bandits = [Bandit(action_selection_method=ActionSelectionMethod.GRADIENT_BANDIT, alpha=2**power, estimation_method=EstimationMethod.GRADIENT_BANDIT, gradient_baseline=True) for power in range(-5, 3)] average_rewards_gradient = [] for bandit in gradient_bandit_bandits: reward = bandit.simulate(2000, 1000)[1].mean() 
average_rewards_gradient.append(reward) ucb_bandits = [Bandit(estimation_method=EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL, action_selection_method=ActionSelectionMethod.UPPER_CONFIDENCE_BOUND, c=2**power,start_time=1) for power in range(-4, 3)] average_rewards_ucb = [] for bandit in ucb_bandits: reward = bandit.simulate(2000, 1000)[1].mean() average_rewards_ucb.append(reward) optimistic_bandits = [Bandit(arms=10, estimation_method=EstimationMethod.CONSTANT_STEP_SIZE, alpha=0.1, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=0, initial_value_estimation=2**power) for power in range(-2, 3)] average_rewards_optimistic = [] for bandit in optimistic_bandits: reward = bandit.simulate(2000, 1000)[1].mean() average_rewards_optimistic.append(reward) %matplotlib inline plt.xticks(range(-7, 3), ('1/128', '1/64', '1/32', '1/16', '1/8', '1/4', '1/2', '1', '2', '4')) plt.plot(range(-7, -1), average_rewards_epsilon_greedy, label='epsilon-greedy parameter:epsilon') plt.plot(range(-5, 3), average_rewards_gradient, label='gradient bandit parameter:alpha') plt.plot(range(-4, 3), average_rewards_ucb, label='UCB parameter:c') plt.plot(range(-2, 3), average_rewards_optimistic, label='greedy with optimistic initialization α = 0.1parameter:Q1') plt.xlabel('Parameter') plt.ylabel('Average reward') plt.legend() ###Output _____no_output_____ ###Markdown **Exercise 2.11 (programming) Make a figure analogous to Figure 2.6 for the nonstationarycase outlined in Exercise 2.5. Include the constant-step-size $\epsilon$-greedy algorithm with$\alpha$ = 0.1. Use runs of 200,000 steps and, as a performance measure for each algorithm andparameter setting, use the average reward over the last 100,000 steps.** ###Code import matplotlib.pyplot as plt #excercise 2.11 runs=500 times=20000 epsilon_greedy_bandits = [ Bandit(action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=2 ** power, estimation_method=EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL, start_equal=True, random_walk=0.01) for power in range(-7, -1)] average_rewards_epsilon_greedy = [] for bandit in epsilon_greedy_bandits: reward = bandit.simulate(runs, times)[1][int(times / 2):].mean() average_rewards_epsilon_greedy.append(reward) gradient_bandit_bandits = [ Bandit(action_selection_method=ActionSelectionMethod.GRADIENT_BANDIT, alpha=2 ** power, estimation_method=EstimationMethod.GRADIENT_BANDIT, gradient_baseline=True, start_equal=True,random_walk=0.01) for power in range(-5, 3)] average_rewards_gradient = [] for bandit in gradient_bandit_bandits: reward = bandit.simulate(runs, times)[1][int(times / 2):].mean() average_rewards_gradient.append(reward) ucb_bandits = [Bandit(estimation_method=EstimationMethod.SAMPLE_AVERAGE_INCREMENTAL, action_selection_method=ActionSelectionMethod.UPPER_CONFIDENCE_BOUND, c=2 ** power, start_equal=True, random_walk=0.01, start_time=1) for power in range(-4, 3)] average_rewards_ucb = [] for bandit in ucb_bandits: reward = bandit.simulate(runs, times)[1][int(times / 2):].mean() average_rewards_ucb.append(reward) optimistic_bandits = [Bandit(arms=10, estimation_method=EstimationMethod.CONSTANT_STEP_SIZE, alpha=0.1, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=0, start_equal=True, random_walk=0.01, initial_value_estimation=2 ** power) for power in range(-2, 3)] average_rewards_optimistic = [] for bandit in optimistic_bandits: reward = bandit.simulate(runs, times)[1][int(times / 2):].mean() average_rewards_optimistic.append(reward) constant_step_size_bandits = [Bandit(arms=10, 
estimation_method=EstimationMethod.CONSTANT_STEP_SIZE, alpha=0.1, random_walk=0.01, action_selection_method=ActionSelectionMethod.EPSILON_GREEDY, epsilon=2**power, start_equal=True) for power in range(-7, -1)] average_rewards_step_size = [] for bandit in constant_step_size_bandits: reward = bandit.simulate(runs, times)[1][int(times / 2):].mean() average_rewards_step_size.append(reward) %matplotlib inline plt.xticks(range(-7, 3), ('1/128', '1/64', '1/32', '1/16', '1/8', '1/4', '1/2', '1', '2', '4')) plt.plot(range(-7, -1), average_rewards_epsilon_greedy, label='epsilon-greedy parameter:epsilon') plt.plot(range(-7, -1), average_rewards_step_size, label='constant alpha=0.1 parameter:epsilon') plt.plot(range(-5, 3), average_rewards_gradient, label='gradient bandit parameter:alpha') plt.plot(range(-4, 3), average_rewards_ucb, label='UCB parameter:c') plt.plot(range(-2, 3), average_rewards_optimistic, label='greedy with optimistic initialization α = 0.1 parameter:Q1') plt.xlabel('Parameter') plt.ylabel('Average reward over 10000 time step') plt.legend() ###Output _____no_output_____ ###Markdown Chapter 2: Evaluation Versus Instruction Exercise 2.1> In ε-greedy action selection, for the case of two actions and ε = 0.5, what is the probability that the greedy action is selected?$$\begin{align*}P(greedy) &= P(\text{greedy strategy}) + P(greedy \vert \text{explore strategy}) \\&= 0.5 + (0.5*0.5) \\&= 0.75 \\\end{align*}$$ Exercise 2.2: Bandit Example> Consider a k-armed bandit problem with $k = 4$ actions, denoted 1, 2, 3, and 4. Consider applying to this problem a bandit algorithm using ε-greedy action selection, sample-average action-value estimates, and initial estimates of $Q_1(a) = 0$, for all $a$. Suppose the initial sequence of actions and rewards is $A_1$ = 1, $R_1$ = 1, $A_2$ = 2, $R_2$ = 1, $A_3$ = 2, $R_3$ = 2, $A_4$ = 2, $R_4$ = 2, $A_5$ = 3, $R_5$ = 0. On some of these time steps the ε case may have occurred, causing an action to be selected at random. On which time steps did this definitely occur? On which time steps could this possibly have occurred?| $k$ | $A_k$ | $R_k$ | $Q_k$ | Random? ||-----|-------|-------|----------|---------|| 1 | 1 | 1 | 1 | Yes || 2 | 2 | 1 | 1 | Yes || 3 | 2 | 2 | $3/2$ | Maybe || 4 | 2 | 2 | $5/3$ | Maybe || 5 | 3 | 0 | 0 | Yes | ###Code from collections import defaultdict import math import random import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib import animation %matplotlib inline import seaborn as sns sns.set() def seed(s): random.seed(s) np.random.seed(s) import gym from gym import spaces from typing import * class Info(NamedTuple): optimal_play: bool class Observation(NamedTuple): state: None reward: float done: bool info: Info class BanditEnv(gym.Env): metadata = {'render.modes': ['human']} def __init__(self, n_arms, q_mean=0): self.n_arms = n_arms self.q_mean = q_mean self.action_space = spaces.Discrete(n_arms) self.observation_space = None self.reset() def reset(self): self.arms = [random.gauss(self.q_mean, 1.) for i in range(self.n_arms)] return None # Don't leak the secret state sauce. 
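    # Added note: step() below simulates one pull of the chosen arm. The reward is
    # drawn from a unit-variance normal centred on that arm's hidden q value, and
    # info records whether the pulled arm was the optimal one; done is always False
    # because a bandit episode never terminates.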
def step(self, action): optimal_action = self.arms.index(max(self.arms)) info = Info(action == optimal_action) q = self.arms[action] reward = random.gauss(q, 1.0) return Observation(None, reward, False, info) def render(self, mode='human'): print('Arms: {}'.format(self.arms)) ten_arm = BanditEnv(10) ten_arm.step(ten_arm.action_space.sample()) class BanditAgent(object): def __init__(self, env): self.env = env self.reset() def reset(self): self.q_sum = [0] * self.env.n_arms self.k = [0] * self.env.n_arms self.q = [0] * self.env.n_arms def choose(self): raise NotImplemented() def act(self): action = self.choose() observation = self.env.step(action) self.q_sum[action] += observation.reward self.k[action] += 1 self.q[action] = self.q_sum[action] / self.k[action] return observation class EpsilonGreedyAgent(BanditAgent): def __init__(self, env, epsilon): super().__init__(env) self.epsilon = epsilon def choose(self): explore = np.random.random() < self.epsilon if explore: return self.env.action_space.sample() else: return self.q.index(max(self.q)) class TrialResult(NamedTuple): rewards: np.ndarray optimal_play: np.ndarray def plot_rewards(self, ax, label=None): ax.set_xlabel('Plays') ax.set_ylabel('Reward (avg)') avg_reward = self.rewards.mean(axis=0) xs = np.arange(0, len(avg_reward)) ax.plot(xs, avg_reward, label=label) def plot_optimal(self, ax, label=None): ax.set_xlabel('Plays') ax.set_ylabel('% Optimal Play') avg_optimal = self.optimal_play.mean(axis=0) xs = np.arange(0, len(avg_optimal)) ax.plot(xs, avg_optimal, label=label) class Trial(object): def __init__(self, env, agent): self.env = env self.agent = agent def run(self, trials=2000, plays=1000): shape = (trials, plays) results = TrialResult(np.zeros(shape), np.zeros(shape)) for trial in range(trials): self.env.reset() self.agent.reset() for play in range(plays): observation = self.agent.act() results.rewards[trial, play] = observation.reward results.optimal_play[trial, play] = 1. if observation.info.optimal_play else 0. return results agent = EpsilonGreedyAgent(ten_arm, 0.1) results = Trial(ten_arm, agent).run(200, 100) results.plot_rewards(plt.gca()) plt.show() results.plot_optimal(plt.gca()) ###Output _____no_output_____ ###Markdown Exercise 2.3> In the comparison shown in Figure 2.2, which method will perform best in the long run in terms of cumulative reward and probability of selecting the best action? How much better will it be? Express your answer quantitatively.$\epsilon = 0.01$ will perform better in the long run, because while all values of $\epsilon$ will converge to $q_*(a)$, smaller values of $\epsilon$ will choose the greedy action more often.For any value of $\epsilon > 0$, the % optimal play will converge to $(1-\epsilon) + (\frac{\epsilon}{n})$. $\epsilon = 0.1$ will converge to 0.91, and $\epsilon = 0.01$ will converge to ###Code fig, axes = plt.subplots(2, 1, sharex=True, figsize=(15,10)) for epsilon in [0., 0.01, 0.1]: agent = EpsilonGreedyAgent(ten_arm, epsilon) results = Trial(ten_arm, agent).run() results.plot_rewards(axes[0], str(epsilon)) results.plot_optimal(axes[1], str(epsilon)) for ax in axes: ax.legend() plt.title('Figure 2.2') ###Output _____no_output_____ ###Markdown Exercise 2.4> If the step-size parameters, $α_n$, are not constant, then the estimate $Q_n$ is a weightedaverage of previously received rewards with a weighting different from that given by (2.6). What isthe weighting on each prior reward for the general case, analogous to (2.6), in terms of the sequence ofstep-size parameters? 
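###Markdown
A sketch of one way to answer, mirroring the unrolling used for the constant-step-size case earlier in this document: expanding $Q_{n+1} = Q_n + \alpha_n [R_n - Q_n]$ recursively gives $$Q_{n+1} = \prod_{i=1}^{n}(1-\alpha_i)\,Q_1 + \sum_{i=1}^{n}\Big(\alpha_i \prod_{j=i+1}^{n}(1-\alpha_j)\Big) R_i,$$ so the weight placed on prior reward $R_i$ is $\alpha_i \prod_{j=i+1}^{n}(1-\alpha_j)$. A small numerical check (with made-up step sizes and rewards) that the closed form matches the iterative update:
###Code
import numpy as np

rng = np.random.default_rng(0)
alphas = rng.uniform(0.05, 0.5, size=8)   # hypothetical step-size sequence
rewards = rng.normal(size=8)              # hypothetical rewards
q1 = 0.0

# iterative update
q = q1
for a, r in zip(alphas, rewards):
    q = q + a * (r - q)

# closed-form weighted average
weights = np.array([alphas[i] * np.prod(1 - alphas[i + 1:]) for i in range(len(alphas))])
q_closed = np.prod(1 - alphas) * q1 + np.dot(weights, rewards)

print(q, q_closed)   # the two values should agree up to floating-point error
###Output
_____no_output_____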
Exercise 2.2> How does the softmax action selection method using the Gibbs distribution fare on the 10-armed testbed? ###Code class SoftmaxAgent(BanditAgent): def __init__(self, env, temp): super().__init__(env) self.temp = temp def choose(self): denom = sum(math.exp(q_b/self.temp) for q_b in self.q) weights = [math.exp(q_a/self.temp) / denom for q_a in self.q] choice = random.choices(self.q, weights=weights, k=1) return self.q.index(choice[0]) fig, axes = plt.subplots(2, 1, sharex=True, figsize=(15,10)) for temp in [0.01, 0.1, 1.]: agent = SoftmaxAgent(ten_arm, temp) results = Trial(ten_arm, agent).run() results.plot_rewards(axes[0], str(temp)) results.plot_optimal(axes[1], str(temp)) for ax in axes: ax.legend() ###Output _____no_output_____ ###Markdown Python Crash Course Adding "\\" in a string ###Code not_tab_string = r"\t" print(not_tab_string) ###Output \t ###Markdown Formatting string ###Code first_name = "Joel" last_name = "Grus" full_name1 = first_name + " " + last_name # string addition full_name2 = "{0} {1}".format(first_name, last_name) # string.format full_name3 = f"{first_name} {last_name}" ###Output _____no_output_____ ###Markdown Exception handling ###Code try: print(0 / 0) except ZeroDivisionError: print("cannot divide by zero") ###Output cannot divide by zero ###Markdown Defaultdict ###Code from collections import defaultdict word_counts = defaultdict(int) # int() produces 0 document=["a","b","a"] for word in document: word_counts[word] += 1 dd_list = defaultdict(list) # list() produces an empty list dd_list[2].append(1) # now dd_list contains {2: [1]} dd_dict = defaultdict(dict) # dict() produces an empty dict dd_dict["Joel"]["City"] = "Seattle" # {"Joel" : {"City": Seattle"}} dd_pair = defaultdict(lambda: [0, 0]) dd_pair[2][1] = 1 # now dd_pair contains {2: [0, 1]} ###Output _____no_output_____ ###Markdown Counter ###Code from collections import Counter c = Counter([0, 1, 2, 0]) # recall, document is a list of words word_counts = Counter(document) # print the 10 most common words and their counts for word, count in word_counts.most_common(10): print(word, count) ###Output a 2 b 1 ###Markdown Set is very fast for "in" operation Python Class ###Code class CountingClicker: def __init__(self,count=0): self.count=count def __repr__(self): return f"CountingClicker(count={self.count})" def click(self, num_times = 1): """Click the clicker some number of times.""" self.count += num_times def read(self): return self.count def reset(self): self.count = 0 clicker1 = CountingClicker() # initialized to 0 clicker2 = CountingClicker(100) # starts with count=100 clicker3 = CountingClicker(count=100) # more explicit way of doing the same clicker1.__repr__() clicker = CountingClicker() assert clicker.read() == 0, "clicker should start with count 0" clicker.click() clicker.click() assert clicker.read() == 2, "after two clicks, clicker should have count 2" clicker.reset() assert clicker.read() == 0, "after reset, clicker should be back to 0" # A subclass inherits all the behavior of its parent class. class NoResetClicker(CountingClicker): # This class has all the same methods as CountingClicker # Except that it has a reset method that does nothing. 
def reset(self): pass clicker2 = NoResetClicker() assert clicker2.read() == 0 clicker2.click() assert clicker2.read() == 1 clicker2.reset() assert clicker2.read() == 1, "reset shouldn't do anything" ###Output _____no_output_____ ###Markdown Iterables and Generators ###Code def generate_range(n): i = 0 while i < n: yield i # every call to yield produces a value of the generator i += 1 for i in generate_range(10): print(f"i: {i}") ###Output i: 0 i: 1 i: 2 i: 3 i: 4 i: 5 i: 6 i: 7 i: 8 i: 9 ###Markdown Randomness ###Code import random random.seed(10) # this ensures we get the same results every time four_uniform_randoms = [random.random() for _ in range(4)] random.seed(10) # set the seed to 10 print(random.random()) # 0.57140259469 random.seed(10) # reset the seed to 10 print(random.random()) # 0.57140259469 again random.randrange(10) # choose randomly from range(10) = [0, 1, ..., 9] random.randrange(3, 6) # choose randomly from range(3, 6) = [3, 4, 5] up_to_ten = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] random.shuffle(up_to_ten) print(up_to_ten) my_best_friend = random.choice(["Alice", "Bob", "Charlie"]) # "Bob" for me lottery_numbers = range(60) winning_numbers = random.sample(lottery_numbers, 6) # [16, 36, 10, 6, 25, 9] four_with_replacement = [random.choice(range(10)) for _ in range(4)] print(four_with_replacement) # [9, 4, 4, 2] ###Output [2, 9, 5, 6] ###Markdown Zipping ###Code list1 = ['a', 'b', 'c'] list2 = [1, 2, 3] # zip is lazy, so you have to do something like the following [pair for pair in zip(list1, list2)] # is [('a', 1), ('b', 2), ('c', 3)] pairs = [('a', 1), ('b', 2), ('c', 3)] letters, numbers = zip(*pairs) print(letters) ###Output ('a', 'b', 'c') ###Markdown \* is used to unpacking a listzip is used to packing 2 list args and kwargs ###Code def doubler(f): # Here we define a new function that keeps a reference to f def g(x): return 2 * f(x) # And return that new function return g def f1(x): return x + 1 g = doubler(f1) assert g(3) == 8, "(3 + 1) * 2 should equal 8" assert g(-1) == 0, "(-1 + 1) * 2 should equal 0" def f2(x, y): return x + y g = doubler(f2) try: g(1, 2) except TypeError: print("as defined, g only takes one argument") def magic(*args, **kwargs): print("unnamed args:", args) print("keyword args:", kwargs) magic(1, 2, key="word", key2="word2") def other_way_magic(x, y, z): return x + y + z x_y_list = [1, 2] z_dict = {"z": 3} assert other_way_magic(*x_y_list, **z_dict) == 6, "1 + 2 + 3 should be 6" def doubler_correct(f): """works no matter what kind of inputs f expects""" def g(*args, **kwargs): """whatever arguments g is supplied, pass them through to f""" return 2 * f(*args, **kwargs) return g g = doubler_correct(f2) assert g(1, 2) == 6, "doubler should work now" ###Output _____no_output_____ ###Markdown 2-1 Insertion sort on small arrays in merge sort ###Code def mergeSort(A): n = len(A) if n == 0 or n == 1: return A left = mergeSort(A[:n // 2]) right = mergeSort(A[n // 2:]) result = [] i, j = 0, 0 while i < len(left) or j < len(right): if i >= len(left) and j < len(right): result.append(right[j]) j += 1 if i < len(left) and j >= len(right): result.append(left[i]) i += 1 if i < len(left) and j < len(right): if left[i] < right[j]: result.append(left[i]) i += 1 else: result.append(right[j]) j += 1 return result sortTest(mergeSort) ###Output [418, 824, 491, 613, 127, 536, 890, 868, 99, 495, 30, 605, 421, 368, 66, 444, 131, 963, 442, 813] [30, 66, 99, 127, 131, 368, 418, 421, 442, 444, 491, 495, 536, 605, 613, 813, 824, 868, 890, 963] ###Markdown Insertion Sort ###Code 
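# Added note: insertion sort keeps A[0..i-1] sorted; each pass saves A[i] in t,
# shifts larger elements one slot to the right, and drops t into place.
# Worst case O(n^2) comparisons, but close to O(n) on nearly sorted input,
# which is why problem 2-1 above uses it as the base case for merge sort
# on small subarrays.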
def insertionSort(A): for i in range(len(A)): j = i - 1 t = A[i] while j >= 0 and A[j] >= t: A[j + 1] = A[j] j -= 1 A[j + 1] = t return A sortTest(insertionSort) def mergeSortwInsertion(A, k = 100): # merge k sorted list if len(A) <= k: insertionSort(A) return A ###Output _____no_output_____ ###Markdown 2 - 2 Bubblesort[Bubblesort](http://interactivepython.org/runestone/static/pythonds/SortSearch/TheBubbleSort.html) is a popular, but inefficient sorting algorithm. It works by repeatedly swapping adjacent element that are out of order.![bubblesort](http://interactivepython.org/runestone/static/pythonds/_images/bubblepass.png) ###Code def swap(A, i, j): A[i], A[j] = A[j], A[i] return def bubblesort(A): # the modified algorithm to detect if the input is out of order # if the input is in the right order, then one pass and bale out. # otherwise, continue. bInoder = True for i in range(len(A)): if i == 0 or not bInorder: for j in reversed(range(i + 1, len(A))): if A[j] < A[j - 1]: swap(A, j - 1, j) bInorder = False return A sortTest(bubblesort) ###Output [345, 464, 119, 68, 883, 221, 787, 997, 731, 46, 711, 337, 397, 921, 674, 367, 520, 78, 470, 209] [46, 68, 78, 119, 209, 221, 337, 345, 367, 397, 464, 470, 520, 674, 711, 731, 787, 883, 921, 997] ###Markdown Selection sort[Selection sort](http://interactivepython.org/runestone/static/pythonds/SortSearch/TheSelectionSort.html) runs n passes and each pass picks the smallest element and put in the nth place. This makes an O(n^2) algorithm.![selectionsort](http://interactivepython.org/runestone/static/pythonds/_images/selectionsortnew.png) ###Code def selectionsort(A): # this algorithm is inplace. for i in range(len(A)): x = i for j in range(i, len(A)): if A[x] > A[j]: x = j swap(A, x, i) return A sortTest(selectionsort) ###Output [69, 747, 14, 199, 607, 190, 43, 966, 824, 623, 298, 157, 632, 867, 281, 799, 1, 140, 200, 338] [1, 14, 43, 69, 140, 157, 190, 199, 200, 281, 298, 338, 607, 623, 632, 747, 799, 824, 867, 966] ###Markdown Shellsort![Shellsort](http://interactivepython.org/runestone/static/pythonds/_images/shellsortB.png) ###Code def shellsort(A): k = 2 gap = len(A) // k while gap > 0: for i in range(gap): insertionsortwithgap(A, i, gap) print("gap={}".format(gap)) print(A) gap = gap // k return A def insertionsortwithgap(A, start, gap): for i in range(start + gap, len(A), gap): t = A[i] j = i while j >= gap and A[j - gap] > t: A[j] = A[j - gap] j -= gap A[j] = t sortTest(shellsort) ###Output [129, 724, 156, 632, 200, 11, 464, 305, 842, 296, 66, 399, 979, 431, 768, 926, 982, 75, 746, 800] gap=10 [66, 399, 156, 431, 200, 11, 464, 75, 746, 296, 129, 724, 979, 632, 768, 926, 982, 305, 842, 800] gap=5 [11, 399, 75, 431, 200, 66, 464, 156, 632, 296, 129, 724, 305, 746, 768, 926, 982, 979, 842, 800] gap=2 [11, 66, 75, 156, 129, 296, 200, 399, 305, 431, 464, 724, 632, 746, 768, 800, 842, 926, 982, 979] gap=1 [11, 66, 75, 129, 156, 200, 296, 305, 399, 431, 464, 632, 724, 746, 768, 800, 842, 926, 979, 982] [11, 66, 75, 129, 156, 200, 296, 305, 399, 431, 464, 632, 724, 746, 768, 800, 842, 926, 979, 982] ###Markdown Quick sortFind a pivot(p), partition the array into two sub-arrays: one smaller than p, the other for the rest. 
###Code def quicksort(A, q, r): if q < r: #print(A[q:r]) p = partition(A, q, r) quicksort(A, q, p) quicksort(A, p + 1, r) def partition(A, q, r): x = A[q] j = q for i in range(q + 1, r): if A[i] < x: j += 1 swap(A, j, i) swap(A, j, q) return j def qsort(A): x = random.choice(range(0, len(A))) swap(A, 0, x) quicksort(A, 0, len(A)) return A sortTest(qsort) ###Output [737, 928, 479, 562, 655, 233, 44, 544, 689, 393, 618, 602, 743, 188, 901, 999, 454, 125, 915, 53] [44, 53, 125, 188, 233, 393, 454, 479, 544, 562, 602, 618, 655, 689, 737, 743, 901, 915, 928, 999] ###Markdown Example 6 heads in 9 coin tossesdbinom(6, size=9, p=0.5) = 0.164Formula: ###Code def factorial(x): f = 1 for i in range(1, x+1): f = f*i return f factorial(5) def combinations(n, r): c = factorial(n) / (factorial(r)*(factorial(n-r))) return c combinations(5,2) # Testing with the example combinations(9,6)*(0.5**6)*((1-0.5)**(9-6)) def dbinom(x, size, prob): c = combinations(size, x) p = c*((prob)**x)*(1-prob)**(size-x) return p dbinom(6,9,0.5) ###Output _____no_output_____ ###Markdown Python3 ###Code print('hello') print("hello pythong interpreter") message = "Hello Python world" print(message) message = "Hello Python world" print(message) message = "second message" print(message) name = "homer simpson" print(name.title()) name = "homer simpson" print(name.title()) print(name.upper()) print(name.lower()) ###Output Homer Simpson HOMER SIMPSON homer simpson ###Markdown Visualizing summary statistics on your datasets Visualizing data with third party packages ###Code %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd x = np.linspace(0, 10, 1000) y = np.cumsum(np.random.randn(1000, 6), 0) plt.plot(x, y) plt.show() import seaborn as sns sns.set() plt.plot(x, y) plt.show() data = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b']) for dataset in 'ab': plt.hist(data[dataset], normed=True, alpha=0.5) plt.show() for dataset in 'ab': sns.kdeplot(data[dataset], shade=True) plt.show() sns.distplot(data['a']) sns.jointplot("a", "b", data, kind='kde') sns.jointplot("a", "b", data, kind='hex') ###Output _____no_output_____ ###Markdown Exploring a dataset with Pandas ###Code import pandas as pd from sklearn.datasets import load_iris features, labels = load_iris(return_X_y=True) df = pd.DataFrame(features) df from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( features, labels, test_size=0.75, random_state=42) df = pd.DataFrame(X_train) df["label"] = y_train df df.head() df.mean(0) df.mean(1) df.describe() print(df.idxmax()) print(df.idxmin()) df.loc[:, 2].value_counts() pd.cut(df.loc[:, 2],4) pd.cut(df.loc[:, 2],4).value_counts() ###Output _____no_output_____ ###Markdown Visualizing Summary Statistics ###Code import matplotlib.pyplot as plt import numpy as np import pandas as pd %matplotlib inline ts = pd.Series(np.random.randn(1000)) ts.plot() df = pd.DataFrame(np.random.randn(1000, 4), columns=list('ABCD')) df = df.cumsum() df.plot(); df.plot(x='A', y='B'); df.iloc[15].plot(kind='bar'); df = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD')) df.plot.bar(); df.plot.bar(stacked=True); df.plot.barh(stacked=True); df.plot.hist(); df.plot.hist(stacked=True); df.plot.box() pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd']).plot.area(); ###Output _____no_output_____
MarketSearch/Analysis.ipynb
###Markdown ChartPatternsLets see the chart patterns of the dataset. - (1) flat: not so much changes during he day, mostly consolidating- (2) downfall shape, strong start-of-day and weak end-of-day- (3) bell shape closer to the market open with some high strength during regular hours but not close to the bells- (4) bell shape closer to the market close with some high strength during regular hours but not close to the bells- (5) uprise: almost constant growth during the day and strong close near high of the day- (6) recovery: going down and trying to recover toward the end- (7) mexican hat: going down and trying to recover but again going down toward the end-of-day ###Code data['ChartPatterns-0'] = data['ChartPatterns-0'].astype(int) ax = sns.countplot(x='ChartPatterns-0', data=data).set_xlabel('ChartPatterns for today') ###Output _____no_output_____ ###Markdown Since most of the samples are of type 5, it means that the conditions by which I am scanning the tickers selects more bullish patterns. **Lets see if they result in some immediate gains as well?** The figure suggests that with 80% chance we will make more than 10% gain in 4 days with no optimiziation or training, just by the rues by which I filter out the market ! ###Code #plt.subplots() #sns.ecdfplot(data[target_keys[0]]) plt.subplots() #sns.kdeplot(data[target_keys[0]]) $ for PDF sns.kdeplot(data[target_keys[0]], cumulative=True) # for CDF plot plt.xlim([-60,100]) plt.xticks(range(-30,100,10)) plt.legend(loc='upper left', labels=['CDF of max-gain-in-4-days']) sns.kdeplot(data['marketCap']) crr = data[list(data.keys())].corr() Thr = 0.9 key_list = target_keys + Feature_keys print(key_list) plt.subplots(figsize=(20,10)) # Correlation matrix between numerical values (SibSp Parch Age and Fare values) and Survived crr=data[key_list].corr() g = sns.heatmap(crr, vmin=-1, vmax=1, center=0, annot=True, fmt = ".2f", cmap = sns.diverging_palette(20, 220, n=200)) ###Output _____no_output_____ ###Markdown Now we see some pretty good patterns and correlations. For example, `ChartPatterns-0` and `ChartPatterns-1`, `VolumeIndicators` have a good correlation. `averageVolume` seems to have a good correlation, but I cannot really trust it yet, since its value it not correct due to the problems we have mentioned with Yahoo Finance API. Once we are happy with the models (to be learned), we can test it for a week or so using live data and paper-trading. The gain is also confirmed by categorical outputs, suitable for learning simple classification models like decision tree. Change target to categoryWe can change the target type to category and we choose the following bins:1) larger than 251) between 10 to 25 2) positive and less than 103) negative and larger than -74) less than -20 ###Code cut_labels_4 = [-5, -3, -1, 1, 3, 5, 7] cut_bins = [-1000,-20, -7, 0, 10, 25, 50, 1000] data['Target_cat'] = pd.cut(data[target_keys[0]], bins=cut_bins, labels=cut_labels_4) g=sns.countplot(data.Target_cat) ###Output C:\ProgramData\Anaconda3\envs\stock\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. 
warnings.warn( ###Markdown Decision tree ###Code # Split dataset into training set and test set data_copy = data[Feature_keys + ['Target_cat']] # Handle Non values data_copy = data_copy.dropna(how='any') X = data_copy[Feature_keys] y = data_copy[['Target_cat']] ToNormalize_keys = ['averageVolume','OverDayVolumeIndicator-0', 'OverDayVolumeIndicator-1'] #Normalize columns #X[ToNormalize_keys]=(X[ToNormalize_keys]-X[ToNormalize_keys].mean())/X.std() X[ToNormalize_keys] = X[ToNormalize_keys]/1000 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1) # 70% training and 30% test X.head() # Create Decision Tree classifer object clf = DecisionTreeClassifier() # Train Decision Tree Classifer clf = clf.fit(X_train,y_train) #Predict the response for test dataset y_pred = clf.predict(X_test) Result = pd.DataFrame({'True_label':list(y_test['Target_cat']), 'Prediction':y_pred}) th=5 Correct_Bull = Result[['True_label','Prediction']][(Result['True_label']>=th) & (Result['Prediction']>=th)] False_Bull = Result[['True_label','Prediction']][(Result['True_label']<th) & (Result['Prediction']>=th)] Correct_Bear = Result[['True_label','Prediction']][(Result['True_label']<th) & (Result['Prediction']<th)] False_Bear = Result[['True_label','Prediction']][(Result['True_label']>=th) & (Result['Prediction']<th)] print('TP/FP: ',len(Correct_Bull)/len(False_Bull)) print('Acc (TP+TN)/(P+N):',(len(Correct_Bull) + len(Correct_Bull))/(len(Result))) print('Sensitivity TPR TP/P:',len(Correct_Bull)/(len(Correct_Bull) + len(False_Bear))) ###Output TP/FP: 0.6666666666666666 Acc (TP+TN)/(P+N): 0.26900584795321636 Sensitivity TPR TP/P: 0.4742268041237113 ###Markdown Conclusion ###Code X = data[Feature_keys+['ticker']] # Handle Non values X = X.dropna(how='any') #Normalize columns #X[Feature_keys] = (X[Feature_keys]-X[Feature_keys].mean())/X[Feature_keys].std() X[ToNormalize_keys] = X[ToNormalize_keys]/1000 pred = clf.predict(X[Feature_keys]) Selected_Tickers = list(set(X['ticker'][(pred >=5 )])) print(Selected_Tickers) ###Output ['HOFV', 'SBFM', 'ARPO', 'IPSI', 'GRLB', 'AAU', 'NBRV', 'POLA', 'NAK', 'SNDD', 'MTNB', 'BDR', 'IQST', 'RSSV', 'PLRTF', 'JAN', 'MBIO', 'CHS', 'CHUC', 'TRNX', 'SOS', 'KERN', 'IGC', 'SGMD', 'KALA', 'INPX', 'AIHS', 'UUU', 'CFMS', 'WDLF', 'GRNQ', 'MAXD', 'TAUG', 'GMER', 'GPORQ', 'AMTX', 'GRST', 'ASTI', 'AMIH', 'NES', 'ALRN', 'ATDS', 'PMCB', 'GNPX', 'ADMA', 'DECN', 'IZEA', 'DS', 'SMME', 'AITX', 'NSPR', 'TOPS', 'RSCF', 'NTN', 'VISM', 'TTOO', 'NAKD', 'SPI', 'TANH', 'GTLL', 'BTU', 'ZSAN', 'CBNT', 'EYES', 'MBRX', 'HCMC', 'DTEA', 'SNCA', 'AMMJ', 'ID', 'PHUN', 'NMTR', 'LVVV', 'EHVVF', 'ARTL', 'POTN', 'ALNA', 'CCRC', 'AMPE', 'SING', 'SLGG', 'SFOR', 'ATNF', 'IVFH', 'IGPK', 'SYN', 'IPV', 'BRQS', 'DFFN', 'LUVU', 'KOPN', 'PCTL', 'GDSI', 'BORR', 'DGLY', 'BTCS', 'WARM', 'POAI', 'ENG', 'PMPG', 'REI', 'TMBR', 'IMMR', 'CBBT', 'ASDN', 'POWW', 'LMFA', 'CLRB', 'HEPA', 'TRCH', 'AESE', 'MARK', 'BCDA', 'NXTD', 'TKOI', 'NVIV', 'STAF', 'SPLP', 'VALPQ', 'AXXA', 'VNUE', 'SQFT'] ###Markdown Check tickers ###Code %%capture start = dt.datetime.today() - dt.timedelta(30) end = dt.datetime.today() cl_Price = pd.DataFrame() for ticker in Selected_Tickers: cl_Price[ticker] = yf.download(ticker, start, end)['Adj Close'] i=2 plt.plot(cl_Price[Selected_Tickers[i]]) plt.title(Selected_Tickers[i]) ###Output _____no_output_____ ###Markdown Check new tickers ###Code X_ = pd.read_json('./Data/Test/EoD-dataset2021-01-27905734.json') print(X_.ticker) asda tkrs = X_.ticker X_ = X_[Feature_keys+['ticker']] # Handle Non 
values X_ = X_.dropna(how='any') #Normalize columns X_[ToNormalize_keys] = X_[ToNormalize_keys]/1000 #X_[Feature_keys] = (X_[Feature_keys]-X_[Feature_keys].mean())/X_[Feature_keys].std() pred_ = clf.predict(X_[Feature_keys]) new_res = pd.DataFrame({'Ticker':tkrs, 'Prediction':pred_}) print(new_res) ###Output Series([], Name: ticker, dtype: float64)
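###Markdown
Returning to the decision tree evaluated above: as a cross-check on the hand-computed TP/FP ratio, accuracy, and sensitivity, here is a minimal sketch using scikit-learn's built-in metrics. It assumes `clf`, `X_test`, `y_test`, and the bullish threshold `th = 5` are still in scope from the Decision tree section.
###Code
from sklearn.metrics import confusion_matrix, classification_report

th = 5
# collapse the ordinal labels into a binary bullish / not-bullish signal
y_true_bull = (y_test['Target_cat'].astype(int) >= th).astype(int)
y_pred_bull = (clf.predict(X_test).astype(int) >= th).astype(int)

print(confusion_matrix(y_true_bull, y_pred_bull))        # rows: true, columns: predicted
print(classification_report(y_true_bull, y_pred_bull))   # per-class precision / recall / F1
###Output
_____no_output_____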
Notebooks/1_Smarket-Descriptive Analysis.ipynb
###Markdown 1. Introduction To The DataWe will begin by examining some numerical and graphical summaries of the **Smarket** data, which is part of the ISLR library for our textbook. This data set consists of percentage returns for the S&P 500 stock index over 1,250 days, from the beginning of 2001 until the end of 2005. For each date, we have recorded the percentage returns for each of the five previous trading days, *Lag1* through *Lag5*. We have also recorded *Volume* (the number of shares traded on the previous day, in billions), *Today* (the percentage return on the date in question) and *Direction* (whether the market was Up or Down on this date). In this example, we will fit a logistic regression model in order to predict *Direction* using *Lag1* through *Lag5* and *Volume*. ###Code # Read Smarket.csv into a Dataframe named stocks stocks = pd.read_csv('Data/Smarket.csv') # print the information of the dataset print(stocks.info()) # print # of rows, # of columns print(stocks.shape) # print the first row print(stocks.loc[0]) # print the first five rows print(stocks.head()) # convert Direction to dummy variables stocks_up = pd.get_dummies(stocks['Direction']) # Join the dummy variables to the main dataframe stocks_new = pd.concat([stocks, stocks_up], axis=1) stocks_new.head() # run the following cells for descriptive statistics var_list = ["Year", "Lag1", "Lag2", "Lag3", "Lag4", "Lag5", "Volume"] ###Output _____no_output_____ ###Markdown Scatter plot ###Code # using Pandas plt.scatter(stocks_new['Year'],stocks_new['Volume']) ###Output _____no_output_____ ###Markdown Line plot ###Code # use line plot to show the ups and downs plt.plot(stocks_new.index, stocks_new['Today']) ###Output _____no_output_____ ###Markdown Histgram plot ###Code stocks_new.hist(column="Up") ###Output _____no_output_____ ###Markdown Boxplot ###Code # boxplots: the middle 2 quartiles are located within the box in the middle ### (with the median represented as a line in the box) ### and the lower and upper quartiles are represented as lines (resembling whiskers) ### protruding from either side of the box box = stocks_new[['Volume', 'Today']].boxplot() ###Output _____no_output_____ ###Markdown Pivot plot ###Code # average Up days and Downs Volume Volume_by_Up = stocks_new.pivot_table(index="Up", values="Volume", aggfunc=np.mean) plt.bar(Volume_by_Up.index,Volume_by_Up.Volume) plt.xlabel('Up') plt.ylabel('Average Previous day trading Volume') plt.title("Distribution of Volume") # obvious trend Return_by_Up = stocks_new.pivot_table(index="Up", values="Lag1", aggfunc=np.mean) plt.bar(Return_by_Up.index,Return_by_Up.Lag1) plt.xlabel('Up') plt.ylabel('Average Previous day return') plt.title("Distribution of return") Return_by_Up ###Output _____no_output_____
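###Markdown
The introduction above says a logistic regression will be fit to predict *Direction* from *Lag1* through *Lag5* and *Volume*; that model is not actually fit in this descriptive notebook, so here is a minimal scikit-learn sketch of what it could look like. It assumes `stocks_new` from above is still in memory and that the CSV contains the Lag1–Lag5 and Volume columns described in the introduction.
###Code
from sklearn.linear_model import LogisticRegression

predictors = ['Lag1', 'Lag2', 'Lag3', 'Lag4', 'Lag5', 'Volume']
X = stocks_new[predictors]
y = stocks_new['Up']              # dummy from Direction: 1 = market up, 0 = down

logit = LogisticRegression()
logit.fit(X, y)

# in-sample accuracy and the fitted coefficient for each predictor
print('training accuracy:', logit.score(X, y))
print(dict(zip(predictors, logit.coef_[0])))
###Output
_____no_output_____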
Logistic-regression-PS.ipynb
###Markdown Copyright 2020 Andrew M. Olney and made available under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0) for text and [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) for code. Logistic Regression: Problem solvingIn this session, you will predict whether or not a candy is popular based on its other properties.This dataset [was collected](http://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) to discover the most popular Halloween candy.| Variable | Type | Description ||:-----------------|:------------------|:--------------------------------------------------------------|| chocolate | Numeric (binary) | Does it contain chocolate? || fruity | Numeric (binary) | Is it fruit flavored? || caramel | Numeric (binary) | Is there caramel in the candy? || peanutalmondy | Numeric (binary) | Does it contain peanuts, peanut butter or almonds? || nougat | Numeric (binary) | Does it contain nougat? || crispedricewafer | Numeric (binary) | Does it contain crisped rice, wafers, or a cookie component? || hard | Numeric (binary) | Is it a hard candy? || bar | Numeric (binary) | Is it a candy bar? || pluribus | Numeric (binary) | Is it one of many candies in a bag or box? || sugarpercent | Numeric (0 to 1) | The percentile of sugar it falls under within the data set. || pricepercent | Numeric (0 to 1) | The unit price percentile compared to the rest of the set. || winpercent | Numeric (percent) | The overall win percentage according to 269,000 matchups || popular | Numeric (binary) | 1 if win percentage is over 50% and 0 otherwise |**Source:** This dataset is Copyright (c) 2014 ESPN Internet Ventures and distributed under an MIT license. Load the dataFirst import `pandas`. Load a dataframe with `"datasets/candy-data.csv"` and display it. Notice there is a bogus variable `competitorname` that is actually an ID, also known as an **index**. We saw the same thing in KNN regression with the `mpg` dataset, but that time it was the car name.Load the dataframe again, but this time use `index_col="competitorname"` to fix this. Explore the data Descriptive statisticsDescribe the data. Remember that for the 0/1 variables, the mean reflects the average presence of an ingredient in candy.For example, `chocolate` is in 43.5% of candy. -----------**QUESTION:**What is the least common ingredient (there may be more than one that is the same)? **ANSWER: (click here to edit)**----------- **QUESTION:**What is the most common ingredient? **ANSWER: (click here to edit)**----------- **QUESTION:**Do you see any problems with the data, e.g. missing data? **ANSWER: (click here to edit)**----------- CorrelationsCreate and display a correlation matrix. -----------**QUESTION:**What property is most positively related to being popular?What property is most negatively related to being popular? **ANSWER: (click here to edit)**----------- Create a heatmap for the correlation matrix.Start by importing `plotly.express`. Create the heatmap figure -----------**QUESTION:**What color is strongly negative, what color is zero, and what color is strongly positive? **ANSWER: (click here to edit)**----------- **QUESTION:**What's going on in the lower right corner? **ANSWER: (click here to edit)**----------- HistogramsFor binary variables, histograms don't tell us anything that the descriptives don't already tell us.However, there are two percent-type variables to plot, `sugarpercent` and `pricepercent`.Plot a histogram of `sugarpercent`. Plot a histogram of `pricepercent`. 
-----------**QUESTION:**What can you say about the distributions of `sugarpercent` and `pricepercent`?Is there anything we should be concerned about? **ANSWER: (click here to edit)**----------- Prepare train/test setsYou need to split the dataframe into training data and testing data, and also separate the predictors from the class labels.Start by dropping the label, `popular`, and its counterpart, `winpercent`, to make a new dataframe called `X`. Save a dataframe with just `popular` in `Y`. Import `sklean.model_selection` to split `X` and `Y` into train and test sets. Now do the splits. Logistic regression modelImport libraries for:- Logistic regression- Metrics- Ravel**NOTE: technically we don't need to scale anything and so don't need a pipeline.** -----------**QUESTION:**Why don't we need to scale anything? **ANSWER: (click here to edit)**----------- Create the logistic regression model. Train the logistic regression model using the splits. Get predictions from the model using the test data. Assessing the modelPrint the model accuracy. -----------**QUESTION:**How does this compare to the average value of `popular`? Is this a good accuracy? **ANSWER: (click here to edit)**----------- Print precision, recall, and F1. -----------**QUESTION:**How to the precision/recall/f1 compare for unpopular (0) and popular (1)? **ANSWER: (click here to edit)**----------- Make an ROC plot. -----------**QUESTION:**If we decreased the recall to .66, what would the false positives be? HINT: hover your mouse over the plot line at that value. **ANSWER: (click here to edit)**----------- This last part is something we didn't really get to develop in the first session, so just run the code.The odds ratio shows how much more likely a property makes the candy `popular`.For many of these, the property is just presence/absence.For example, the odds ratio of 3.06 on chocolate means that having chocolate as an ingredient makes the candy 3.06 times more popular than candy without chocolate. ###Code pd.DataFrame( {"variable":X.columns, "odds_ratio":np.exp(np.ravel(lm.coef_)) }) ###Output _____no_output_____ ###Markdown Copyright 2020 Andrew M. Olney and made available under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0) for text and [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) for code. Logistic Regression: Problem solvingIn this session, you will predict whether or not a candy is popular based on its other properties.This dataset [was collected](http://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) to discover the most popular Halloween candy.| Variable | Type | Description ||:-----------------|:------------------|:--------------------------------------------------------------|| chocolate | Numeric (binary) | Does it contain chocolate? || fruity | Numeric (binary) | Is it fruit flavored? || caramel | Numeric (binary) | Is there caramel in the candy? || peanutalmondy | Numeric (binary) | Does it contain peanuts, peanut butter or almonds? || nougat | Numeric (binary) | Does it contain nougat? || crispedricewafer | Numeric (binary) | Does it contain crisped rice, wafers, or a cookie component? || hard | Numeric (binary) | Is it a hard candy? || bar | Numeric (binary) | Is it a candy bar? || pluribus | Numeric (binary) | Is it one of many candies in a bag or box? || sugarpercent | Numeric (0 to 1) | The percentile of sugar it falls under within the data set. || pricepercent | Numeric (0 to 1) | The unit price percentile compared to the rest of the set. 
|| winpercent | Numeric (percent) | The overall win percentage according to 269,000 matchups || popular | Numeric (binary) | 1 if win percentage is over 50% and 0 otherwise |**Source:** This dataset is Copyright (c) 2014 ESPN Internet Ventures and distributed under an MIT license. Load the dataFirst import `pandas`. Load a dataframe with `"datasets/candy-data.csv"` and display it. Notice there is a bogus variable `competitorname` that is actually an ID, also known as an **index**. We saw the same thing in KNN regression with the `mpg` dataset, but that time it was the car name.Load the dataframe again, but this time use `index_col="competitorname"` to fix this. Explore the data Descriptive statisticsDescribe the data. Remember that for the 0/1 variables, the mean reflects the average presence of an ingredient in candy.For example, `chocolate` is in 43.5% of candy. **QUESTION:**What is the least common ingredient (there may be more than one that is the same)? **ANSWER: (click here to edit)** **QUESTION:**What is the most common ingredient? **ANSWER: (click here to edit)** **QUESTION:**Do you see any problems with the data, e.g. missing data? **ANSWER: (click here to edit)** CorrelationsCreate and display a correlation matrix. **QUESTION:**What property is most positively related to being popular?What property is most negatively related to being popular? **ANSWER: (click here to edit)** Create a heatmap for the correlation matrix.Start by importing `plotly.express`. Create the heatmap figure **QUESTION:**What color is strongly negative, what color is zero, and what color is strongly positive? **ANSWER: (click here to edit)** **QUESTION:**What's going on in the lower right corner? **ANSWER: (click here to edit)** HistogramsFor binary variables, histograms don't tell us anything that the descriptives don't already tell us.However, there are two percent-type variables to plot, `sugarpercent` and `pricepercent`.Plot a histogram of `sugarpercent`. Plot a histogram of `pricepercent`. **QUESTION:**What can you say about the distributions of `sugarpercent` and `pricepercent`?Is there anything we should be concerned about? **ANSWER: (click here to edit)** Prepare train/test setsYou need to split the dataframe into training data and testing data, and also separate the predictors from the class labels.Start by dropping the label, `popular`, and its counterpart, `winpercent`, to make a new dataframe called `X`. Save a dataframe with just `popular` in `Y`. Import `sklean.model_selection` to split `X` and `Y` into train and test sets. Now do the splits. Logistic regression modelImport libraries for:- Logistic regression- Metrics- Ravel**NOTE: technically we don't need to scale anything and so don't need a pipeline.** **QUESTION:**Why don't we need to scale anything? **ANSWER: (click here to edit)** Create the logistic regression model. Train the logistic regression model using the splits. Get predictions from the model using the test data. Assessing the modelPrint the model accuracy. **QUESTION:**How does this compare to the average value of `popular`? Is this a good accuracy? **ANSWER: (click here to edit)** Print precision, recall, and F1. **QUESTION:**How to the precision/recall/f1 compare for unpopular (0) and popular (1)? **ANSWER: (click here to edit)** Make an ROC plot. **QUESTION:**If we decreased the recall to .66, what would the false positives be? HINT: hover your mouse over the plot line at that value. 
**ANSWER: (click here to edit)** This last part is something we didn't really get to develop in the first session, so just run the code.The odds ratio shows how much more likely a property makes the candy `popular`.For many of these, the property is just presence/absence.For example, the odds ratio of 3.06 on chocolate means that having chocolate as an ingredient makes the candy 3.06 times more popular than candy without chocolate. ###Code pd.DataFrame( {"variable":X.columns, "odds_ratio":np.exp(np.ravel(lm.coef_)) }) ###Output _____no_output_____ ###Markdown Logistic Regression: Problem solvingIn this session, you will predict whether or not a candy is popular based on its other properties.This dataset [was collected](http://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) to discover the most popular Halloween candy.| Variable | Type | Description ||:-----------------|:------------------|:--------------------------------------------------------------|| chocolate | Numeric (binary) | Does it contain chocolate? || fruity | Numeric (binary) | Is it fruit flavored? || caramel | Numeric (binary) | Is there caramel in the candy? || peanutalmondy | Numeric (binary) | Does it contain peanuts, peanut butter or almonds? || nougat | Numeric (binary) | Does it contain nougat? || crispedricewafer | Numeric (binary) | Does it contain crisped rice, wafers, or a cookie component? || hard | Numeric (binary) | Is it a hard candy? || bar | Numeric (binary) | Is it a candy bar? || pluribus | Numeric (binary) | Is it one of many candies in a bag or box? || sugarpercent | Numeric (0 to 1) | The percentile of sugar it falls under within the data set. || pricepercent | Numeric (0 to 1) | The unit price percentile compared to the rest of the set. || winpercent | Numeric (percent) | The overall win percentage according to 269,000 matchups || popular | Numeric (binary) | 1 if win percentage is over 50% and 0 otherwise |**Acknowledgements:**This dataset is Copyright (c) 2014 ESPN Internet Ventures and distributed under an MIT license. Load the dataFirst import `pandas`. ###Code import pandas as pd #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="lfN=$uzFcxs-6^)2j+oc">pd</variable></variables><block type="importAs" id="^+Nghe{{_uq{G7tc)sr/" x="130" y="233"><field name="libraryName">pandas</field><field name="libraryAlias" id="lfN=$uzFcxs-6^)2j+oc">pd</field></block></xml> ###Output _____no_output_____ ###Markdown Load a dataframe with `"datasets/candy-data.csv"` and display it. 
###Code dataframe = pd.read_csv('datasets/candy-data.csv') dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</variable><variable id="lfN=$uzFcxs-6^)2j+oc">pd</variable></variables><block type="variables_set" id="fF~/UXvqG^U(f~JGIDy`" x="80" y="398"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field><value name="VALUE"><block type="varDoMethod" id="!;Ht!#J?fh-20leIrwzD"><field name="VAR" id="lfN=$uzFcxs-6^)2j+oc">pd</field><field name="MEMBER">read_csv</field><data>pd:read_csv</data><value name="INPUT"><block type="lists_create_with" id="TrI}GSSE8V$xub^riM5u"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="l*avp11qT(YO1^[}vhMQ"><field name="TEXT">datasets/candy-data.csv</field></block></value></block></value></block></value></block><block type="variables_get" id="QrXVu@{ddyF~%Ex(x?D+" x="77" y="525"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field></block></xml> ###Output _____no_output_____ ###Markdown Notice there is a bogus variable `competitorname` that is actually an ID, also known as an **index**. We saw the same thing in KNN regression with the `mpg` dataset, but that time it was the car name.Load the dataframe again, but this time use `index_col="competitorname"` to fix this. ###Code dataframe = pd.read_csv('datasets/candy-data.csv', index_col="competitorname") dataframe #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</variable><variable id="lfN=$uzFcxs-6^)2j+oc">pd</variable></variables><block type="variables_set" id="fF~/UXvqG^U(f~JGIDy`" x="80" y="398"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field><value name="VALUE"><block type="varDoMethod" id="!;Ht!#J?fh-20leIrwzD"><field name="VAR" id="lfN=$uzFcxs-6^)2j+oc">pd</field><field name="MEMBER">read_csv</field><data>pd:read_csv</data><value name="INPUT"><block type="lists_create_with" id="TrI}GSSE8V$xub^riM5u"><mutation items="2"></mutation><value name="ADD0"><block type="text" id="l*avp11qT(YO1^[}vhMQ"><field name="TEXT">datasets/candy-data.csv</field></block></value><value name="ADD1"><block type="dummyOutputCodeBlock" id="T!2=Rsgm?2MK$m(1gSoc"><field name="CODE">index_col="competitorname"</field></block></value></block></value></block></value></block><block type="variables_get" id="QrXVu@{ddyF~%Ex(x?D+" x="77" y="525"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field></block></xml> ###Output _____no_output_____ ###Markdown Explore the data Descriptive statisticsDescribe the data. ###Code dataframe.describe() #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</variable></variables><block type="varDoMethod" id="a_9X+LWLGiJK*RB6qVf?" x="-25" y="188"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field><field name="MEMBER">describe</field><data>dataframe:describe</data></block></xml> ###Output _____no_output_____ ###Markdown Remember that for the 0/1 variables, the mean reflects the average presence of an ingredient in candy.For example, `chocolate` is in 43.5% of candy. **QUESTION:**What is the least common ingredient (there may be more than one that is the same)? **ANSWER: (click here to edit)***`nougat` and `crispedricewafer`* **QUESTION:**What is the most common ingredient? **ANSWER: (click here to edit)***`fruity` is, surprisingly.* **QUESTION:**Do you see any problems with the data, e.g. missing data? 
**ANSWER: (click here to edit)***No* CorrelationsCreate and display a correlation matrix. ###Code corr = dataframe.corr() corr #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="dT?/6EKjd+)rT`THc*Qp">corr</variable><variable id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</variable></variables><block type="variables_set" id="KhhK42MjtJx4;6+jZhID" x="-33" y="130"><field name="VAR" id="dT?/6EKjd+)rT`THc*Qp">corr</field><value name="VALUE"><block type="varDoMethod" id=")@pt@`v52VowO=q3vMew"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field><field name="MEMBER">corr</field><data>dataframe:corr</data></block></value></block><block type="variables_get" id="4(P[r{;~m!FcLB/$J@oH" x="-33" y="184"><field name="VAR" id="dT?/6EKjd+)rT`THc*Qp">corr</field></block></xml> ###Output _____no_output_____ ###Markdown **QUESTION:**What property is most positively related to being popular?What property is most negatively related to being popular? **ANSWER: (click here to edit)***`chocolate` is most positively related to popularity and `hard` is most negatively related.* Create a heatmap for the correlation matrix.Start by importing `plotly.express`. ###Code import plotly.express as px #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="+yh,Zg{hON]zy6D~.rH#">px</variable></variables><block type="importAs" id="JKd}DIrGfV*IelKZ!4ls" x="129" y="219"><field name="libraryName">plotly.express</field><field name="libraryAlias" id="+yh,Zg{hON]zy6D~.rH#">px</field></block></xml> ###Output _____no_output_____ ###Markdown Create the heatmap figure ###Code fig = px.imshow(corr) #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="`MlX9tv$x9^+8hq@.?!W">fig</variable><variable id="+yh,Zg{hON]zy6D~.rH#">px</variable><variable id="dT?/6EKjd+)rT`THc*Qp">corr</variable></variables><block type="variables_set" id="24~lvUEzpxJ.@DlUnzGj" x="39" y="263"><field name="VAR" id="`MlX9tv$x9^+8hq@.?!W">fig</field><value name="VALUE"><block type="varDoMethod" id="7PkTI5@+A6#C}J7CTmyF"><field name="VAR" id="+yh,Zg{hON]zy6D~.rH#">px</field><field name="MEMBER">imshow</field><data>px:imshow</data><value name="INPUT"><block type="lists_create_with" id="g^2EaX(wo6etoMYs:5}O"><mutation items="1"></mutation><value name="ADD0"><block type="variables_get" id="DrBaD`rGEl+a=qpT5bB%"><field name="VAR" id="dT?/6EKjd+)rT`THc*Qp">corr</field></block></value></block></value></block></value></block></xml> ###Output _____no_output_____ ###Markdown And show it. ###Code fig.show() #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="`MlX9tv$x9^+8hq@.?!W">fig</variable></variables><block type="varDoMethod" id="Y3*:nG98oP04XKV=JbYK" x="8" y="188"><field name="VAR" id="`MlX9tv$x9^+8hq@.?!W">fig</field><field name="MEMBER">show</field><data>fig:show</data></block></xml> ###Output _____no_output_____ ###Markdown **QUESTION:**What color is strongly negative, what color is zero, and what color is strongly positive? **ANSWER: (click here to edit)***Negative is dark purple, zero is pinkish, and positive is yellow.* **QUESTION:**What's going on in the lower right corner? **ANSWER: (click here to edit)***`popular` and `winpercent` are highly correlated, but that's because `popular` is based on `winpercent`. 
So we should ignore it.* HistogramsFor binary variables, histograms don't tell us anything that the descriptives don't already tell us.However, there are two percent-type variables to plot, `sugarpercent` and `pricepercent`.Plot a histogram of `sugarpercent`. ###Code px.histogram(dataframe, x="sugarpercent") #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="+yh,Zg{hON]zy6D~.rH#">px</variable><variable id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</variable></variables><block type="varDoMethod" id="M7Nr}Mv.DAk?=8Xv:VTh" x="129" y="279"><field name="VAR" id="+yh,Zg{hON]zy6D~.rH#">px</field><field name="MEMBER">histogram</field><data>px:histogram</data><value name="INPUT"><block type="lists_create_with" id="JD%Ihlvfj~M#!M@}qUh7"><mutation items="2"></mutation><value name="ADD0"><block type="variables_get" id="HERwNg=OfTm0rFH2lO!*"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field></block></value><value name="ADD1"><block type="valueOutputCodeBlock" id="uJ?DS:J9fu7,X*mfzq]c"><field name="CODE">x="sugarpercent"</field></block></value></block></value></block></xml> ###Output _____no_output_____ ###Markdown Plot a histogram of `pricepercent`. ###Code px.histogram(dataframe, x="pricepercent") #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="+yh,Zg{hON]zy6D~.rH#">px</variable><variable id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</variable></variables><block type="varDoMethod" id="M7Nr}Mv.DAk?=8Xv:VTh" x="129" y="279"><field name="VAR" id="+yh,Zg{hON]zy6D~.rH#">px</field><field name="MEMBER">histogram</field><data>px:histogram</data><value name="INPUT"><block type="lists_create_with" id="JD%Ihlvfj~M#!M@}qUh7"><mutation items="2"></mutation><value name="ADD0"><block type="variables_get" id="HERwNg=OfTm0rFH2lO!*"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field></block></value><value name="ADD1"><block type="valueOutputCodeBlock" id="uJ?DS:J9fu7,X*mfzq]c"><field name="CODE">x="pricepercent"</field></block></value></block></value></block></xml> ###Output _____no_output_____ ###Markdown **QUESTION:**What can you say about the distributions of `sugarpercent` and `pricepercent`?Is there anything we should be concerned about? **ANSWER: (click here to edit)***They are both pretty flat, or uniform. There is a notch in the middle of each, which may represent something, but it is not big enough to obviously mean anything.Nothing about them seems concerning at all.* Prepare train/test setsYou need to split the dataframe into training data and testing data, and also separate the predictors from the class labels.Start by dropping the label, `popular`, and its counterpart, `winpercent`, to make a new dataframe called `X`. 
###Code X = dataframe.drop(columns=["popular","winpercent"]) X #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="Eu6+HM0^Zfw6=$49Xgk7">X</variable><variable id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</variable></variables><block type="variables_set" id="j12HEn?u}2$P-S)Ax6d(" x="-16" y="265"><field name="VAR" id="Eu6+HM0^Zfw6=$49Xgk7">X</field><value name="VALUE"><block type="varDoMethod" id="PMH^/O2y?rD(Y}r-N6eY"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field><field name="MEMBER">drop</field><data>dataframe:drop</data><value name="INPUT"><block type="lists_create_with" id="bGn#S3b1DV-0-egwl87D"><mutation items="1"></mutation><value name="ADD0"><block type="dummyOutputCodeBlock" id="8N-YF(7ms%)oox.~Z;b/"><field name="CODE">columns=["popular","winpercent"]</field></block></value></block></value></block></value></block><block type="variables_get" id="*Q(nsPMh9xg`HceqC3eN" x="-11" y="350"><field name="VAR" id="Eu6+HM0^Zfw6=$49Xgk7">X</field></block></xml> ###Output _____no_output_____ ###Markdown Save a dataframe with just `popular` in `Y`. ###Code Y = dataframe[['popular']] Y #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="vEy06PN?Djk]8Ag?;UWi">Y</variable><variable id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</variable></variables><block type="variables_set" id="_:J1([xryT4i+F7k{#[%" x="17" y="175"><field name="VAR" id="vEy06PN?Djk]8Ag?;UWi">Y</field><value name="VALUE"><block type="indexer" id="22hOF=T:mHvJrMybHkPs"><field name="VAR" id="-(zD`=)|+6ZJe1eZ)t2_">dataframe</field><value name="INDEX"><block type="lists_create_with" id="qT2G_{7q~*L;[5j:vtKq"><mutation items="1"></mutation><value name="ADD0"><block type="text" id="=UVLwuCW$1d9ZJ]6HF_,"><field name="TEXT">popular</field></block></value></block></value></block></value></block><block type="variables_get" id="pog__n6Jf{VK_sNEB)B$" x="17" y="239"><field name="VAR" id="vEy06PN?Djk]8Ag?;UWi">Y</field></block></xml> ###Output _____no_output_____ ###Markdown Import `sklean.model_selection` to split `X` and `Y` into train and test sets. ###Code import sklearn.model_selection as model_selection #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="uASGz64Zb$AOvQyV4pRj">model_selection</variable></variables><block type="importAs" id="sN1YO5FEzpHyxb31@j,Z" x="16" y="10"><field name="libraryName">sklearn.model_selection</field><field name="libraryAlias" id="uASGz64Zb$AOvQyV4pRj">model_selection</field></block></xml> ###Output _____no_output_____ ###Markdown Now do the splits. 
###Code splits = model_selection.train_test_split(X, Y, test_size=0.2) #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="_ut$e0PL4OMi4o1MXTpw">splits</variable><variable id="uASGz64Zb$AOvQyV4pRj">model_selection</variable><variable id="Eu6+HM0^Zfw6=$49Xgk7">X</variable><variable id="vEy06PN?Djk]8Ag?;UWi">Y</variable></variables><block type="variables_set" id="oTGRJ#{R!U^we@Bl@pkT" x="31" y="224"><field name="VAR" id="_ut$e0PL4OMi4o1MXTpw">splits</field><value name="VALUE"><block type="varDoMethod" id="f?j@ker(a#hJv;Nh)IGX"><field name="VAR" id="uASGz64Zb$AOvQyV4pRj">model_selection</field><field name="MEMBER">train_test_split</field><data>model_selection:train_test_split</data><value name="INPUT"><block type="lists_create_with" id="er6r2]}|nA;1;}VsM5I7"><mutation items="3"></mutation><value name="ADD0"><block type="variables_get" id=".mm}`*H4)i%Eq5z={e-$"><field name="VAR" id="Eu6+HM0^Zfw6=$49Xgk7">X</field></block></value><value name="ADD1"><block type="variables_get" id="I3dOV;CPBf^~E%BvgthZ"><field name="VAR" id="vEy06PN?Djk]8Ag?;UWi">Y</field></block></value><value name="ADD2"><block type="dummyOutputCodeBlock" id="@Hg?ib/!8fH$;f3pWJy2"><field name="CODE">test_size=0.2</field></block></value></block></value></block></value></block></xml> ###Output _____no_output_____ ###Markdown Logistic regression modelImport libraries for:- Logistic regression- Metrics- Ravel**NOTE: technically we don't need to scale anything and so don't need a pipeline.** **QUESTION:**Why don't we need to scale anything? **ANSWER: (click here to edit)***All the variables are between 0 and 1, so they are basically on the same scale. In general, we don't need to scale for regression, though some people prefer to do that.* ###Code import sklearn.linear_model as linear_model import sklearn.metrics as metrics import numpy as np #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="cGnMvhq5543q[r$:Og-x">linear_model</variable><variable id=")}+w@N9iB/j=:+PLkMv8">metrics</variable><variable id="Zhzp)s*VL?V@ES3(j:*b">np</variable></variables><block type="importAs" id="C,|uKYZ4CH*/,cD|4($8" x="135" y="303"><field name="libraryName">sklearn.linear_model</field><field name="libraryAlias" id="cGnMvhq5543q[r$:Og-x">linear_model</field><next><block type="importAs" id="*G_SVgZ;}hIr,Hi1~$Z6"><field name="libraryName">sklearn.metrics</field><field name="libraryAlias" id=")}+w@N9iB/j=:+PLkMv8">metrics</field><next><block type="importAs" id="rPZJ#sIeu`Zr!8,RiL!w"><field name="libraryName">numpy</field><field name="libraryAlias" id="Zhzp)s*VL?V@ES3(j:*b">np</field></block></next></block></next></block></xml> ###Output _____no_output_____ ###Markdown Create the logistic regression model. ###Code lm = linear_model.LogisticRegression() #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="}#R~_f(;Z:ZnAFRy.{;t">lm</variable><variable id="cGnMvhq5543q[r$:Og-x">linear_model</variable></variables><block type="variables_set" id="}81D/tZY#o}$E:M}:u4x" x="102" y="417"><field name="VAR" id="}#R~_f(;Z:ZnAFRy.{;t">lm</field><value name="VALUE"><block type="varCreateObject" id="ar7keIh-Yv)+b+#Edsp_"><field name="VAR" id="cGnMvhq5543q[r$:Og-x">linear_model</field><field name="MEMBER">LogisticRegression</field><data>linear_model:LogisticRegression</data></block></value></block></xml> ###Output _____no_output_____ ###Markdown Train the logistic regression model using the splits. 
###Code lm.fit(splits[0], np.ravel(splits[2])) #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="}#R~_f(;Z:ZnAFRy.{;t">lm</variable><variable id="Zhzp)s*VL?V@ES3(j:*b">np</variable><variable id="_ut$e0PL4OMi4o1MXTpw">splits</variable></variables><block type="varDoMethod" id="Z)$q-zn.KTC{+^l-wH6u" x="259" y="169"><field name="VAR" id="}#R~_f(;Z:ZnAFRy.{;t">lm</field><field name="MEMBER">fit</field><data>lm:fit</data><value name="INPUT"><block type="lists_create_with" id="e_B;36VOJ^lH70V=aWY}"><mutation items="2"></mutation><value name="ADD0"><block type="lists_getIndex" id="C#,#1*rEm+]qEx?L1x[L"><mutation statement="false" at="true"></mutation><field name="MODE">GET</field><field name="WHERE">FROM_START</field><value name="VALUE"><block type="variables_get" id="b_Sz{9#d7d=ystO|k?l_"><field name="VAR" id="_ut$e0PL4OMi4o1MXTpw">splits</field></block></value><value name="AT"><block type="math_number" id=":s{r1~S,,@.CSh#9`$R;"><field name="NUM">1</field></block></value></block></value><value name="ADD1"><block type="varDoMethod" id="zYBlZ,!^P^%[email protected]"><field name="VAR" id="Zhzp)s*VL?V@ES3(j:*b">np</field><field name="MEMBER">ravel</field><data>np:ravel</data><value name="INPUT"><block type="lists_create_with" id="9s({WSn={~Ink.5O+6Cc"><mutation items="1"></mutation><value name="ADD0"><block type="lists_getIndex" id="^)UBt0jM;BnGmWeG7pw*"><mutation statement="false" at="true"></mutation><field name="MODE">GET</field><field name="WHERE">FROM_START</field><value name="VALUE"><block type="variables_get" id="4Vo!*g]qQ=D}XtD2i39/"><field name="VAR" id="_ut$e0PL4OMi4o1MXTpw">splits</field></block></value><value name="AT"><block type="math_number" id="RysCD3.C27sBxztz(T}2"><field name="NUM">3</field></block></value></block></value></block></value></block></value></block></value></block></xml> ###Output _____no_output_____ ###Markdown Get predictions from the model using the test data. ###Code predictions = lm.predict(splits[1]) predictions #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="(`;mrW|63Vww]$wlV9+1">predictions</variable><variable id="}#R~_f(;Z:ZnAFRy.{;t">lm</variable><variable id="_ut$e0PL4OMi4o1MXTpw">splits</variable></variables><block type="variables_set" id="NHw$/HH988vNLbZgn)CM" x="88" y="212"><field name="VAR" id="(`;mrW|63Vww]$wlV9+1">predictions</field><value name="VALUE"><block type="varDoMethod" id="N}3ds6:i%0MtTA:(2im4"><field name="VAR" id="}#R~_f(;Z:ZnAFRy.{;t">lm</field><field name="MEMBER">predict</field><data>lm:predict</data><value name="INPUT"><block type="lists_create_with" id="3Ru6U*^.a`oD7$bu/I%y"><mutation items="1"></mutation><value name="ADD0"><block type="lists_getIndex" id="@lpyN+:CEcPQ#Q:Svm|9"><mutation statement="false" at="true"></mutation><field name="MODE">GET</field><field name="WHERE">FROM_START</field><value name="VALUE"><block type="variables_get" id="ng))4fZyb@U1|eswo1}:"><field name="VAR" id="_ut$e0PL4OMi4o1MXTpw">splits</field></block></value><value name="AT"><block type="math_number" id="Lo)w=2LL|Tf-L/gkeTdT"><field name="NUM">2</field></block></value></block></value></block></value></block></value></block><block type="variables_get" id="I#4y[,+*#I5s;b;h3o/M" x="75" y="307"><field name="VAR" id="(`;mrW|63Vww]$wlV9+1">predictions</field></block></xml> ###Output _____no_output_____ ###Markdown Assessing the modelPrint the model accuracy. 
###Code print('Accuracy:' + str(metrics.accuracy_score(splits[3], predictions))) #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id=")}+w@N9iB/j=:+PLkMv8">metrics</variable><variable id="(`;mrW|63Vww]$wlV9+1">predictions</variable><variable id="_ut$e0PL4OMi4o1MXTpw">splits</variable></variables><block type="text_print" id="6KCf/4(JectOv*aO[L6d" x="67" y="-410"><value name="TEXT"><shadow type="text" id=",J]%.V~I;qhep.pWfj3L"><field name="TEXT">abc</field></shadow><block type="text_join" id="|l|?2Yb#{m=ys_V^)+v-"><mutation items="2"></mutation><value name="ADD0"><block type="text" id="8MdDU0D^rCAqKltb#kaY"><field name="TEXT">Accuracy:</field></block></value><value name="ADD1"><block type="varDoMethod" id="p`ehX8lLN?zayQ1Ip=}V"><field name="VAR" id=")}+w@N9iB/j=:+PLkMv8">metrics</field><field name="MEMBER">accuracy_score</field><data>predictions:</data><value name="INPUT"><block type="lists_create_with" id="b;$i*:lxuE^`xvk}OQ4m"><mutation items="2"></mutation><value name="ADD0"><block type="lists_getIndex" id="bFo^*si#t6gt7l@W:;ux"><mutation statement="false" at="true"></mutation><field name="MODE">GET</field><field name="WHERE">FROM_START</field><value name="VALUE"><block type="variables_get" id="9d/lD-+8|63uHF/H1dwi"><field name="VAR" id="_ut$e0PL4OMi4o1MXTpw">splits</field></block></value><value name="AT"><block type="math_number" id="o@Z:jn.60#6-_fkDivxs"><field name="NUM">4</field></block></value></block></value><value name="ADD1"><block type="variables_get" id="GjWcPkckr7_}|j]O[Em+"><field name="VAR" id="(`;mrW|63Vww]$wlV9+1">predictions</field></block></value></block></value></block></value></block></value></block></xml> ###Output Accuracy:0.7647058823529411 ###Markdown **QUESTION:**How does this compare to the average value of `popular`? Is this a good accuracy? **ANSWER: (click here to edit)***It's about .30 better than the average value, so it doesn't seem that bad.* Print precision, recall, and F1. 
###Code print(metrics.classification_report(splits[3], predictions)) #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id=")}+w@N9iB/j=:+PLkMv8">metrics</variable><variable id="(`;mrW|63Vww]$wlV9+1">predictions</variable><variable id="_ut$e0PL4OMi4o1MXTpw">splits</variable></variables><block type="text_print" id="w?Z]Mpw]G,uTA;S:C5Ef" x="27" y="-195"><value name="TEXT"><shadow type="text" id="j5J:iees]K0Kn%J)=1[1"><field name="TEXT">abc</field></shadow><block type="varDoMethod" id="p`ehX8lLN?zayQ1Ip=}V"><field name="VAR" id=")}+w@N9iB/j=:+PLkMv8">metrics</field><field name="MEMBER">classification_report</field><data>metrics:classification_report</data><value name="INPUT"><block type="lists_create_with" id="b;$i*:lxuE^`xvk}OQ4m"><mutation items="2"></mutation><value name="ADD0"><block type="lists_getIndex" id="bFo^*si#t6gt7l@W:;ux"><mutation statement="false" at="true"></mutation><field name="MODE">GET</field><field name="WHERE">FROM_START</field><value name="VALUE"><block type="variables_get" id="9d/lD-+8|63uHF/H1dwi"><field name="VAR" id="_ut$e0PL4OMi4o1MXTpw">splits</field></block></value><value name="AT"><block type="math_number" id="o@Z:jn.60#6-_fkDivxs"><field name="NUM">4</field></block></value></block></value><value name="ADD1"><block type="variables_get" id="GjWcPkckr7_}|j]O[Em+"><field name="VAR" id="(`;mrW|63Vww]$wlV9+1">predictions</field></block></value></block></value></block></value></block></xml> ###Output precision recall f1-score support 0 0.89 0.73 0.80 11 1 0.62 0.83 0.71 6 accuracy 0.76 17 macro avg 0.76 0.78 0.76 17 weighted avg 0.80 0.76 0.77 17 ###Markdown **QUESTION:**How to the precision/recall/f1 compare for unpopular (0) and popular (1)? **ANSWER: (click here to edit)***Popular (1) has lower precision but higher recall. The F1 for 1 is lower than 0.Altogether, this tells us that the classifier is biased a bit more towards positives such that false positives lower precision but true positives raise recall.* Make an ROC plot. 
###Code probs = lm.predict_proba(splits[1]) rocMetrics = metrics.roc_curve(splits[3], probs[:,1]) fig = px.line(x=rocMetrics[0], y=rocMetrics[1]) fig.update_yaxes(title_text="Recall/True positive rate") fig.update_xaxes(title_text="False positive rate") #<xml xmlns="https://developers.google.com/blockly/xml"><variables><variable id="M}t8Cm4jpy8CZc{YnXy3">probs</variable><variable id="`MlX9tv$x9^+8hq@.?!W">fig</variable><variable id="}#R~_f(;Z:ZnAFRy.{;t">lm</variable><variable id=",,t`1|+/`aO88;3vt8ZU">rocMetrics</variable><variable id=")}+w@N9iB/j=:+PLkMv8">metrics</variable><variable id="+yh,Zg{hON]zy6D~.rH#">px</variable><variable id="_ut$e0PL4OMi4o1MXTpw">splits</variable></variables><block type="variables_set" id="NHw$/HH988vNLbZgn)CM" x="88" y="212"><field name="VAR" id="M}t8Cm4jpy8CZc{YnXy3">probs</field><value name="VALUE"><block type="varDoMethod" id="N}3ds6:i%0MtTA:(2im4"><field name="VAR" id="}#R~_f(;Z:ZnAFRy.{;t">lm</field><field name="MEMBER">predict_proba</field><data>lm:predict_proba</data><value name="INPUT"><block type="lists_create_with" id="3Ru6U*^.a`oD7$bu/I%y"><mutation items="1"></mutation><value name="ADD0"><block type="lists_getIndex" id="@lpyN+:CEcPQ#Q:Svm|9"><mutation statement="false" at="true"></mutation><field name="MODE">GET</field><field name="WHERE">FROM_START</field><value name="VALUE"><block type="variables_get" id="ng))4fZyb@U1|eswo1}:"><field name="VAR" id="_ut$e0PL4OMi4o1MXTpw">splits</field></block></value><value name="AT"><block type="math_number" id="Lo)w=2LL|Tf-L/gkeTdT"><field name="NUM">2</field></block></value></block></value></block></value></block></value><next><block type="variables_set" id="}4kjlvw)%T2:Y,TuO$1k"><field name="VAR" id=",,t`1|+/`aO88;3vt8ZU">rocMetrics</field><value name="VALUE"><block type="varDoMethod" id="St}]W`i!e!OdZl|3qj)#"><field name="VAR" id=")}+w@N9iB/j=:+PLkMv8">metrics</field><field name="MEMBER">roc_curve</field><data>metrics:roc_curve</data><value name="INPUT"><block type="lists_create_with" id="cZ56CQZr95f)2[OP#h-9"><mutation items="2"></mutation><value name="ADD0"><block type="lists_getIndex" id="}~bW=uGu5bwR72v;A4Cx"><mutation statement="false" at="true"></mutation><field name="MODE">GET</field><field name="WHERE">FROM_START</field><value name="VALUE"><block type="variables_get" id="Q|/,RoTQ:TGE}q68/W7b"><field name="VAR" id="_ut$e0PL4OMi4o1MXTpw">splits</field></block></value><value name="AT"><block type="math_number" id="yoYX)i~8,,)(u:7zfo*h"><field name="NUM">4</field></block></value></block></value><value name="ADD1"><block type="dummyOutputCodeBlock" id="7XMkWCE@tnPK}_Bi!M(("><field name="CODE">probs[:,1]</field></block></value></block></value></block></value><next><block type="variables_set" id="0/UMi`0+3{3*hHWkTDP%"><field name="VAR" id="`MlX9tv$x9^+8hq@.?!W">fig</field><value name="VALUE"><block type="varDoMethod" id="_#Q.pqcS0(zyB~^yEE.a"><field name="VAR" id="+yh,Zg{hON]zy6D~.rH#">px</field><field name="MEMBER">line</field><data>px:line</data><value name="INPUT"><block type="lists_create_with" id="OC-;VKVNf@v,4Vh/L=/5"><mutation items="2"></mutation><value name="ADD0"><block type="dummyOutputCodeBlock" id="1H4P3K^L01dS/g-VDy6M"><field name="CODE">x=rocMetrics[0]</field></block></value><value name="ADD1"><block type="dummyOutputCodeBlock" id="V!iDI^[cIrybrEGiltvm"><field name="CODE">y=rocMetrics[1]</field></block></value></block></value></block></value></block></next></block></next></block><block type="varDoMethod" id="?[FeD.F=+5w?b%v4~QMN" x="74" y="432"><field name="VAR" 
id="`MlX9tv$x9^+8hq@.?!W">fig</field><field name="MEMBER">update_yaxes</field><data>fig:update_yaxes</data><value name="INPUT"><block type="dummyOutputCodeBlock" id="lM7[K0n*qO+Kz)j*~888"><field name="CODE">title_text="Recall/True positive rate"</field></block></value></block><block type="varDoMethod" id="bQR3M#?N-_JszSa)[H^V" x="85" y="487"><field name="VAR" id="`MlX9tv$x9^+8hq@.?!W">fig</field><field name="MEMBER">update_xaxes</field><data>fig:update_xaxes</data><value name="INPUT"><block type="dummyOutputCodeBlock" id="-BM+X)VB?t#|LHtZ;cYy"><field name="CODE">title_text="False positive rate"</field></block></value></block></xml> ###Output _____no_output_____ ###Markdown **QUESTION:**If we decreased the recall to .66, what would the false positives be? HINT: hover your mouse over the plot line at that value. **ANSWER: (click here to edit)***.09.* This last part is something we didn't really get to develop in the first session, so just run the code.The odds ratio shows how much more likely a property makes the candy `popular`.For many of these, the property is just presence/absence.For example, the odds ratio of 3.06 on chocolate means that having chocolate as an ingredient makes the candy 3.06 times more popular than candy without chocolate. ###Code pd.DataFrame( {"variable":X.columns, "odds_ratio":np.exp(np.ravel(lm.coef_)) }) ###Output _____no_output_____ ###Markdown Logistic Regression: Problem solvingIn this session, you will predict whether or not a candy is popular based on its other properties.This dataset [was collected](http://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) to discover the most popular Halloween candy.| Variable | Type | Description ||:-----------------|:------------------|:--------------------------------------------------------------|| chocolate | Numeric (binary) | Does it contain chocolate? || fruity | Numeric (binary) | Is it fruit flavored? || caramel | Numeric (binary) | Is there caramel in the candy? || peanutalmondy | Numeric (binary) | Does it contain peanuts, peanut butter or almonds? || nougat | Numeric (binary) | Does it contain nougat? || crispedricewafer | Numeric (binary) | Does it contain crisped rice, wafers, or a cookie component? || hard | Numeric (binary) | Is it a hard candy? || bar | Numeric (binary) | Is it a candy bar? || pluribus | Numeric (binary) | Is it one of many candies in a bag or box? || sugarpercent | Numeric (0 to 1) | The percentile of sugar it falls under within the data set. || pricepercent | Numeric (0 to 1) | The unit price percentile compared to the rest of the set. || winpercent | Numeric (percent) | The overall win percentage according to 269,000 matchups || popular | Numeric (binary) | 1 if win percentage is over 50% and 0 otherwise |**Acknowledgements:**This dataset is Copyright (c) 2014 ESPN Internet Ventures and distributed under an MIT license. Load the dataFirst import `pandas`. Load a dataframe with `"datasets/candy-data.csv"` and display it. Notice there is a bogus variable `competitorname` that is actually an ID, also known as an **index**. We saw the same thing in KNN regression with the `mpg` dataset, but that time it was the car name.Load the dataframe again, but this time use `index_col="competitorname"` to fix this. Explore the data Descriptive statisticsDescribe the data. Remember that for the 0/1 variables, the mean reflects the average presence of an ingredient in candy.For example, `chocolate` is in 43.5% of candy. 
**QUESTION:**What is the least common ingredient (there may be more than one that is the same)? **ANSWER: (click here to edit)** **QUESTION:**What is the most common ingredient? **ANSWER: (click here to edit)** **QUESTION:**Do you see any problems with the data, e.g. missing data? **ANSWER: (click here to edit)** CorrelationsCreate and display a correlation matrix. **QUESTION:**What property is most positively related to being popular?What property is most negatively related to being popular? **ANSWER: (click here to edit)** Create a heatmap for the correlation matrix.Start by importing `plotly.express`. Create the heatmap figure And show it. **QUESTION:**What color is strongly negative, what color is zero, and what color is strongly positive? **ANSWER: (click here to edit)** **QUESTION:**What's going on in the lower right corner? **ANSWER: (click here to edit)** HistogramsFor binary variables, histograms don't tell us anything that the descriptives don't already tell us.However, there are two percent-type variables to plot, `sugarpercent` and `pricepercent`.Plot a histogram of `sugarpercent`. Plot a histogram of `pricepercent`. **QUESTION:**What can you say about the distributions of `sugarpercent` and `pricepercent`?Is there anything we should be concerned about? **ANSWER: (click here to edit)** Prepare train/test setsYou need to split the dataframe into training data and testing data, and also separate the predictors from the class labels.Start by dropping the label, `popular`, and its counterpart, `winpercent`, to make a new dataframe called `X`. Save a dataframe with just `popular` in `Y`. Import `sklean.model_selection` to split `X` and `Y` into train and test sets. Now do the splits. Logistic regression modelImport libraries for:- Logistic regression- Metrics- Ravel**NOTE: technically we don't need to scale anything and so don't need a pipeline.** **QUESTION:**Why don't we need to scale anything? **ANSWER: (click here to edit)** Create the logistic regression model. Train the logistic regression model using the splits. Get predictions from the model using the test data. Assessing the modelPrint the model accuracy. **QUESTION:**How does this compare to the average value of `popular`? Is this a good accuracy? **ANSWER: (click here to edit)** Print precision, recall, and F1. **QUESTION:**How to the precision/recall/f1 compare for unpopular (0) and popular (1)? **ANSWER: (click here to edit)** Make an ROC plot. **QUESTION:**If we decreased the recall to .66, what would the false positives be? HINT: hover your mouse over the plot line at that value. **ANSWER: (click here to edit)** This last part is something we didn't really get to develop in the first session, so just run the code.The odds ratio shows how much more likely a property makes the candy `popular`.For many of these, the property is just presence/absence.For example, the odds ratio of 3.06 on chocolate means that having chocolate as an ingredient makes the candy 3.06 times more popular than candy without chocolate. ###Code pd.DataFrame( {"variable":X.columns, "odds_ratio":np.exp(np.ravel(lm.coef_)) }) ###Output _____no_output_____
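###Markdown As a follow-up sketch (not part of the original exercise), the odds-ratio interpretation can be sanity checked by predicting popularity for two hypothetical candies that differ only in `chocolate`. This assumes the fitted `lm` and the feature frame `X` from the worked section above; for logistic regression, the ratio of the two odds should reproduce the chocolate entry of the table.
###Code
# Hedged sketch: compare predicted popularity for two hypothetical candies that
# differ only in the `chocolate` ingredient, using the fitted `lm` and `X` above.
import numpy as np
import pandas as pd

no_choc = pd.DataFrame([np.zeros(len(X.columns))], columns=X.columns)  # hypothetical candy, all features 0
with_choc = no_choc.copy()
with_choc["chocolate"] = 1

p0 = lm.predict_proba(no_choc)[0, 1]    # P(popular) without chocolate
p1 = lm.predict_proba(with_choc)[0, 1]  # P(popular) with chocolate

# The ratio of odds should match the chocolate row of the odds-ratio table.
print("odds ratio from probabilities:", (p1 / (1 - p1)) / (p0 / (1 - p0)))
###Output _____no_output_____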
site/_build/html/_sources/notebooks/03-viz-api-scraper/03-visualization-python-seaborn.ipynb
###Markdown [![AnalyticsDojo](https://github.com/rpi-techfundamentals/spring2019-materials/blob/master/fig/final-logo.png?raw=1)](http://rpi.analyticsdojo.com)Introduction to Seaborn - Pythonintroml.analyticsdojo.com Introduction to Seaborn Overview- Look at distributions- Seaborn is an alternate data visualization package. - This has been adopted from the Seaborn Documentation.Read more at https://stanford.edu/~mwaskom/software/seaborn/api.html ###Code #This uses the same mechanisms. %matplotlib inline import numpy as np import pandas as pd from scipy import stats, integrate import matplotlib.pyplot as plt import seaborn as sns sns.set(color_codes=True) ###Output _____no_output_____ ###Markdown Distribution Plots- Histogram with KDE- Histogram with Rugplot ###Code import seaborn as sns, numpy as np sns.set(); np.random.seed(0) x = np.random.randn(100) x ###Output _____no_output_____ ###Markdown Distribution Plot (distplot) - Any compbination of hist, rug, kde- Note it also has in it a KDE plot included- Can manually set the number of bins- See documentation [here](https://seaborn.pydata.org/generated/seaborn.distplot.htmlseaborn.distplot) ###Code #Histogram # https://seaborn.pydata.org/generated/seaborn.distplot.html#seaborn.distplot ax = sns.distplot(x) #Adjust number of bins for more fine grained view ax = sns.distplot(x, bins = 20) #Include rug and kde (no histogram) sns.distplot(x, hist=False, rug=True); #Kernel Density #https://seaborn.pydata.org/generated/seaborn.rugplot.html#seaborn.rugplot ax = sns.distplot(x, bins=10, kde=True, rug=True) ###Output _____no_output_____ ###Markdown Box Plots - Break data into quartiles. - Can show distribution/ranges of different categories. Jhguch at en.wikipedia [CC BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5)], from Wikimedia Commons ###Code sns.set_style("whitegrid") #This is data on tips (a real dataset) and our familiar iris dataset tips = sns.load_dataset("tips") iris = sns.load_dataset("iris") titanic = sns.load_dataset("titanic") #Tips is a pandas dataframe tips.head() ax = sns.boxplot(x=tips["total_bill"]) # Notice we can see the few ouliers on right side ax = sns.distplot(tips["total_bill"], kde=True, rug=True) ###Output _____no_output_____ ###Markdown Relationship Plots- Pairplots to show all - Regplot for 2 continuous variables- Scatterplot for two continuous variables- Swarmplot or BoxPlot for continuous and categorical ###Code #Notice how this works for continuous, not great for categorical h = sns.pairplot(tips, hue="time") g = sns.pairplot(iris, hue="species") # Show relationship between 2 continuous variables with regression line. sns.regplot(x="total_bill", y="tip", data=tips); # Break down sns.boxplot(x="day", y="total_bill", hue="time", data=tips); #Uses an algorithm to prevent overlap sns.swarmplot(x="day", y="total_bill", hue= "time",data=tips); #Uses an algorithm to prevent overlap sns.violinplot(x="day", y="total_bill", hue= "time",data=tips); #Stacking Graphs Is Easy sns.violinplot(x="day", y="total_bill", data=tips, inner=None) sns.swarmplot(x="day", y="total_bill", data=tips, color="w", alpha=.5); ###Output _____no_output_____ ###Markdown Visualizing Summary Data- Barplots will show the ###Code #This sns.barplot(x="sex", y="tip", data=tips); tips tips #Notice the selection of palette and how we can swap the axis. sns.barplot(x="tip", y="day", data=tips, palette="Greens_d"); #Notice the selection of palette and how we can swap the axis. 
sns.barplot(x="total_bill", y="day", data=tips, palette="Reds_d"); #Saturday is the bigger night sns.countplot(x="day", data=tips); ###Output _____no_output_____
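###Markdown A small additional sketch, assuming the same `tips` dataset loaded above: `barplot` aggregates the y variable within each category (the mean by default), and the `estimator` argument swaps in a different statistic such as the median.
###Code
import numpy as np
import seaborn as sns

# barplot aggregates y per category (mean by default); estimator swaps in another statistic
sns.barplot(x="day", y="tip", data=tips, estimator=np.median);
###Output _____no_output_____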
docs/notebooks/step_functions.ipynb
###Markdown Step Functions ###Code import matplotlib.pyplot as plt import random import numpy as np import hotstepper as hs from hotstepper import Step, Steps ###Output _____no_output_____ ###Markdown What is a step function? In simple terms, atleast mathematically, it is a piecewise continous function that maps a set to a constant value via a boolean predicate indicating membership of the set, and maps to zero when a member of the compliment set.Formally, we can define a step function as the union of piecewise constant intervals, essentially a linear expansion of constant intervals by way of a basis function defined by a boolean predicate.An indicator function can be defined as:$\begin{equation*}\chi_A(x) = \left\{ \begin{array}{ll} 0 & \quad x \notin A \\ 1 & \quad x \in A \end{array}\right\}\;\;\;\;\; where \;A \in \mathbb{R}\end{equation*}$We can also use a shorthand notation for the indicator function by using the [Iversion bracket notation](https://en.wikipedia.org/wiki/Iverson_bracket), this allows for a definition of a boolean predicate that is syntactically similar to the way we select intervals of a Numpy array.$\chi_A(x) = {[x \in A]}$With these definitions, we can define a step function as an expansion of constant values $\alpha_n \in \mathbb{R}$ over a basis of indicator functions.$f(x) = \displaystyle\sum_{n=0}^{N}{\alpha_n \chi_n(x)}$where $\chi_n(x) = {[x \in A_n]}$The practical advantages of using step functions and specifically this definition comes from looking at what this equation looks similar to. It looks like a vector function defined over a set of basis functions. If we have a set of orthagonal basis functions that are complete (in the proper sense), than any linear combination of the basis functions can represent any function within the space span by those basis.Think Euclidean geometry and the x,y and z basis, we can express any point in this 3D space as a linear combination of those three basis functions, they just happen to be constant and magnitude 1 and each span the entire real line.We have the same here, the indicator functions play the role of the basis, however if we test for orthagonality of different basis fucntions (indicator functions), we find something interesting.$\chi_1(x) = {[x \in S_1]}$ $\chi_2(x) = {[x \in S_2]}$ $\chi_1(x) \cdot \chi_2(x) = {[x \in S_1]} \cdot {[x \in S_2]} = [x \in (S_1 \cap S_2)]$ Now, if the sets $S_1$ and $S_2$ overlap, the result is a new interval defined by $S_3 = S_1 \cap S_2$ and $\chi_3(x) = [x \in S_3]$. This behaviour is the same as that of a mathematical group, were we apply the group operation, in this case, an intersection (the dot product) and the result is another element of the group, in this case another interval.When the two sets don't have any overlap, $S_1 \cap S_2 = \emptyset$, this result is the same as an [orthogonal basis](https://en.wikipedia.org/wiki/Orthogonal_basis).Based on these results, we can draw some important results.1. $\chi_1(x) \cdot \chi_2(x) \cdot \chi_3(x) = [x \in S_1 \cap S_2 \cap S_3]=[x \in S_3 \cap S_2 \cap S_1] = \chi_3(x) \cdot \chi_2(x) \cdot \chi_1(x)$2. $\chi_1(x) + \chi_2(x) + \chi_3(x) = [x \in S_1 \cup S_2 \cup S_3]= [x \in S_3 \cup S_2 \cup S_1] = \chi_3(x) + \chi_2(x) + \chi_1(x)$2. 
$\chi_1(x) \cdot (\chi_2(x) + \chi_3(x)) = [x \in S_1 \cap (S_2 \cup S_3)] = [x \in (S_1 \cap S_2) \cup (S_1 \cap S_3)] = \chi_1 \cdot \chi_2 + \chi_1 \cdot \chi_3 = \chi_2 \cdot \chi_1 + \chi_3 \cdot \chi_1$$where \;\; S_1,S_2,S_3 \in \mathbb{R}$These results show that using basis functions defined over set membership via indicator functions equipped with addition and multiplication operations leads to a [communatative ring](https://en.wikipedia.org/wiki/Commutative_ring). We can skip to the end of the story by observing that we have defined a [sigma algebra](https://en.wikipedia.org/wiki/%CE%A3-algebra) over partitions of the real line. Since the definitions of the partitions is based on the step data we are considering, the exact set of partitions will vary from data to data, therefore each step function defines a different, but not unique set of partitions over the real line. These partitions can be considered a none unique set of basis functions, since we could show above that when $S_1 \cap S_2 = \emptyset$ the sets $S_1$ and $S_2$ are orthagonal in the sense defined over a vector space. With this interpretation, our indicator functions can be promoted to basis functions with a few conditions.We won't exhaust the point and this isn't a mathematical paper, so I won't push forward any further for now, unless I get some interest in expanding these concepts further. The final result we will need before moving onto the basis interpretation within the HotStepper library is, if we have a set of indicator functions that span our data in order to represent a step function, we can define a tuple of scalars over the basis functions that represent the weighting the function has across each indicator (basis) function. Since we have alot of freedom to choose indicator functions, as long as they don't overlap (their intersection is the empty set) and to ensure we have a function defined across the entire real line, we will also need to include, for completeness, any partitions that are not covered by the indicator functions.That is a really long winded way of say that we have to have enough indicator fucntions to represent, without overlap, the entire real line, otherwise we would have gaps and therefore we would run into issues if we reparametertised our basis functions, hint hint, using a one parameter family of indicator functions that represent a limit based definition of our indicator functions, specifically our chosen favourite son, the Heaviside step function. Step Function Basis The idea of this libary is to take advantage of the power of linear algebra when dealing with count value data. There are a number of approaches to analysing and modelling count data, however these tend to fall in two broad categories. The first approach relies on known discrete statistical distributions such as the Poisson, Binomial and Negative Binomial. There are many powerful models that can be formulated and provide great utility via this approach, particularly when combined with Markov Chain and Bayesian techniques. The second aproach relies on approximating the discrete data as being continious and using the factory of techniques such as linear regression and linear (not nessarily linear) models. This approach is very common when the count values are relatively high and clamping the output values as integers provides good results.The challenge is when we have low count data and don't wish to use a purely statistical approach. 
This library provides an alternative in this areana and has the flexbility to be applicable within both high and low count regimes. The second advantage this library presents is the use of traditional linear algebra techniques, as this allows for the use of many powerful results which would only be indirectly accessible when using the continous approximation or statistical approaches.For those that need some specifics and don't want to just take my word for it, let's open this jar of pickles and see where it goes.The first step (pun intended) is to look to provide a solid foundation for any count type data we may encounter, by this I mean specifically, we don't want to treat the data in a vaccuum and on a case by case basis. That segways neatly to the solution, we represent the data in a basis, just as we use catesian coordinates to represent a position in space by way of three numbers, we can select a basis in which any of our count data can be represented.Since the count data naturally appears to be a series of step functions, often referred to as a staircase function, we can select the most straight forward and well known step function as a basis, namely the Heaviside step function. Note that the definition we are using here adopts the convention of including the zero point in right hand domain and reprsents the first location of the shift from the null mapped set to the unity mapped set.Therefore we use the definition of the [Heaviside function](https://en.wikipedia.org/wiki/Heaviside_step_function) here as:$\begin{equation*} \theta(t) = \left\{ \begin{array}{ll} 0 & \quad t < 0 \\ 1 & \quad t \geq 0 \end{array} \right\} \;\;\;\;\; where \;t \in \mathbb{R}\end{equation*}$This definition seems unrelated to our discussion on sets and indicator functions, however, if we use the Iverson bracket notation to rewrite the Heaviside function, it becomes clear that the Heaviside function can be seen to represent an indicator function. $\theta(t) = [t \in [0,\infty)] \;\;\; where \; t \in \mathbb{R}$Using the Heaviside function defined above, we can use it as a basis to represent any step data by expanding individual steps by shifting and multiplying the Heaviside function. More formally, if we have a single step at 0 with unit weight, we can represent this as;$f(t) = \theta(t)=[t \in [0, \infty)] \;\;\; where \; t \in \mathbb{R}$We can plot this directly using a Step object from HotStepper. We have explicitly set the start and weight for clarity. ###Code t = np.arange(-5,6,0.1) fig,ax = plt.subplots() heaviside = Step(start=0,weight=1) ax.step(t,heaviside(t)) ax.set_xlabel('t') ax.set_ylabel('$f(t)$') ax.set_title('Heaviside Step Function'); ###Output _____no_output_____ ###Markdown That seems rather unexciting, however, if we have another step function that steps at 3 instead of 0, we can subtract this second step from the first and get something slightly more interesting.Mathematically, we can represent these two individual steps via the Heaviside function as;$f(t)_1 = \theta(t) = [t \in [0, \infty)]$$f(t)_2 = \theta(t-3)=[t \in [3, \infty)]$And subtracting the second from the first, we have;$f(t) = f(t)_1 - f(t)_2 = \theta(t) - \theta(t-3)$We can once again plot the result to see what we get, this time we didn't set the weight value, as HotStepper assigns a value of 1 when no value is provided. 
###Code t = np.arange(-5,6,0.1) fig,ax = plt.subplots() step1 = Step(start=0) step2 = Step(start=3) steps = step1 - step2 ax.step(t,steps(t)) ax.set_xlabel('t') ax.set_ylabel('$f(t)$') ax.set_title('Two Heaviside Step Functions combined'); ###Output _____no_output_____ ###Markdown Before we move on, we can also scale or weight the step. Recalling the definition of a step function expressed in a basis of indicator functions.$f(x) = \displaystyle\sum_{n=0}^{N}{\alpha_n \chi_n(x)}$where $\chi_n(x) = {[x \in A_n]}$We can assign a value to the $\alpha_n$ that will scale the step value, in the current example, we can change the weight of each step and when combined, see the result as before.For example, let;$\alpha_1 = 2$$\alpha_2 = 3$$f(t)_1 = \alpha_1\theta(t) = 2 \cdot [t \in [0, \infty)]$$f(t)_2 = \alpha_2\theta(t-3)=3 \cdot [t \in [3, \infty)]$And subtracting the second from the first, we have;$f(t) = f(t)_1 - f(t)_2 = 2\theta(t) - 3\theta(t-3)$The result is plotted below. ###Code t = np.arange(-5,6,0.1) fig,ax = plt.subplots() step1 = Step(start=0, weight=2) step2 = Step(start=3,weight=3) steps = step1 - step2 ax.step(t,steps(t)) ax.set_xlabel('t') ax.set_ylabel('$f(t)$') ax.set_title('Two Reweighted Heaviside Step Functions combined'); ###Output _____no_output_____ ###Markdown Ok, while you haven't fallen asleep already, we can add and subtract a bunch of these shifted and reweighted instances of the Heaviside function and get all sorts of crazy stairs and step looking reults. ###Code def generate_step_func(samples, x_vals): steps = np.zeros(len(x_vals)) for i in range(samples): signs = random.choice([-3,-2,-1,1,2,3]) shifts = random.randint(-5,5) steps += signs*Step(start=shifts)(x_vals) return steps t = np.arange(-1,6,0.1) fig,axs = plt.subplots(nrows=4, ncols=5,figsize=(28,12),sharey=True, sharex=True) plt.tight_layout() for i,ax in np.ndenumerate(axs): steps_data = generate_step_func(10, t) ax.step(t,steps_data) ax.set_xlabel('t') ax.set_title('Multiple Heaviside Step Functions combined') ax.set_ylabel('f(t)') ###Output _____no_output_____
california.ipynb
###Markdown 캘리포니아 집값 분석 ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns train = pd.read_csv('/content/sample_data/california_housing_test.csv') test = pd.read_csv('/content/sample_data/california_housing_train.csv') test.head() train.describe() train.hist(figsize=(15,13) , grid=False, bins=50) correlation = train.corr() plt.figure(figsize=(10,10)) sns.heatmap(correlation, annot=True) plt.show() ###Output _____no_output_____ ###Markdown ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns train = pd.read_csv('/content/sample_data/california_housing_train.csv') test = pd.read_csv('/content/sample_data/california_housing_test.csv') test.head() train.describe() train.hist(figsize=(15,13) , grid=False , bins=50) plt.show() correlation = train.corr() plt.figure(figsize=(10,10)) sns.heatmap(correlation , annot=True) plt.show() ###Output _____no_output_____ ###Markdown 새 섹션 ###Code import pandas as pd import matplotlib.pyplot as plt import seaborn as sns test = pd.read_csv('/content/sample_data/california_housing_test.csv') train = pd.read_csv('/content/sample_data/california_housing_train.csv') train.head() test.head() train.describe() train.hist(figsize=(15,13), grid=False, bins=50) plt.show() correlation = train.corr() plt.figure(figsize=(10,10)) sns.heatmap(correlation, annot=True) plt.show() ###Output _____no_output_____
gridworld_mdp/example_notebook.ipynb
###Markdown SetupIn this notebook, we present a pymdptoolbox implementation of the gridworld environment defined in Lesson 2. To define an MDP in the mdptoolbox, we need to construct numpy arrays for both the transition matrix and reward matrix. We are going to use the transitions defined in Lesson 2, Video 5 "Quiz: The World - 2":And the rewards defined in Lesson 2, Video 12 "Quiz: More About Rewards - 3": ###Code #reward inputs r_s = +2 #most states r_g = +1 #good terminal state r_b = -1 #bad terminal state #transition inputs p_intended = .8 p_opposite = 0.0 p_right = .1 p_left = .1 #create_mdp can be found in helpers.py T , R = create_mdp([r_s, r_g, r_b], \ [p_intended, p_opposite, p_right, p_left]) ###Output _____no_output_____ ###Markdown Value IterationOnce we have both matrices, we are going to run value iteration using the mdptoolbox.Note: we've defined the transition matrix T such that up is 0, right is 1, down is 2 and left is 3. ###Code #create object, undiscounted for now vi = mdptoolbox.mdp.ValueIteration(T, R, discount=1) #run value iteration silently vi.setSilent() vi.run() #print policy found by value iteration print(np.array(vi.policy).reshape((3,4))) ###Output WARNING: check conditions of convergence. With no discount, convergence can not be assumed. [[0 0 3 0] [0 0 3 0] [1 0 3 2]] ###Markdown Finding Q-valuesHowever, this doesn't tell us when the choice of action doesn't matter. We need to look at the Q (state-action) values to see if all actions for a given state have the same value. To explore this we will use the function get_q_values (found in helpers.py.) This function outputs a policy where "-1" denotes that all actions have the same Q-value (precision: the mdp object's epsilon value).With verbose on, Q-values for each state are also presented. State numbers are as follows:| | | | ||---|---|----|----|| 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 || 8 | 9 | 10 | 11 | ###Code #print policy with -1 where all actions have same Q value #in verbose mode we will also see the Q values themselves vi.setVerbose() print("Q-values:\n") policy = get_q_values(vi).reshape((3,4)) print("\n=====================\nPolicy:\n") print(policy) ###Output Q-values: State 0: [ 2002. 2002. 2002. 2002.] State 1: [ 2002. 2002. 2002. 2002.] State 2: [ 1902. 1202. 1902. 2002.] State 3: [ 1001. 1001. 1001. 1001.] State 4: [ 2002. 2002. 2002. 2002.] State 5: [ 100100. 100100. 100100. 100100.] State 6: [ 1702. -398. 1702. 2002.] State 7: [-1001. -1001. -1001. -1001.] State 8: [ 2002. 2002. 2002. 2002.] State 9: [ 2002. 2002. 2002. 2002.] State 10: [ 2002. 2002. 2002. 2002.] State 11: [ -398. 1702. 2002. 1702.] ===================== Policy: [[-1 -1 3 -1] [-1 -1 3 -1] [-1 -1 -1 2]]
Feed_forward_Tensorflow.ipynb
###Markdown ###Code import tensorflow as tf import numpy as np from sklearn import datasets from sklearn.model_selection import train_test_split RANDOM_SEED = 42 #tf.set_random_seed(RANDOM_SEED) #import tensorflow.compat.v1 as tf #tf.disable_v2_behavior() def init_weights(shape): """ Weight initialization """ weights = tf.random_normal(shape, stddev=0.1) return tf.Variable(weights) def forwardprop(X, w_1, w_2): """ Forward-propagation. IMPORTANT: yhat is not softmax since TensorFlow's softmax_cross_entropy_with_logits() does that internally. """ h = tf.nn.sigmoid(tf.matmul(X, w_1)) # The \sigma function yhat = tf.matmul(h, w_2) # The \varphi function return yhat def get_iris_data(): """ Read the iris data set and split them into training and test sets """ iris = datasets.load_iris() data = iris["data"] target = iris["target"] # Prepend the column of 1s for bias N, M = data.shape all_X = np.ones((N, M + 1)) all_X[:, 1:] = data # Convert into one-hot vectors num_labels = len(np.unique(target)) all_Y = np.eye(num_labels)[target] # One liner trick! return train_test_split(all_X, all_Y, test_size=0.33, random_state=RANDOM_SEED) def main(): train_X, test_X, train_y, test_y = get_iris_data() print("We are going to train a neural network") print("Be Patient") print ("We need to work hard on our data") # Layer's sizes x_size = train_X.shape[1] # Number of input nodes: 4 features and 1 bias #print(x_size,shape[1]) print("First we need to know X shape") print(train_X.shape[1]) print(train_X.shape[0]) print("Then wE need to know Y Shape") print(train_y.shape[1]) print(train_y.shape[0]) print(train_X) #print(train_y) h_size = 256 # Number of hidden nodes y_size = train_y.shape[1] # Number of outcomes (3 iris flowers) # Symbols X = tf.placeholder("float", shape=[None, x_size]) #X=tf.Variable(tf.ones(shape=[None, x_size]), dtype=tf.float32) y = tf.placeholder("float", shape=[None, y_size]) #y=tf.Variable(tf.ones(shape=[None, y_size]), dtype=tf.float32) # Weight initializations w_1 = init_weights((x_size, h_size)) w_2 = init_weights((h_size, y_size)) # Forward propagation yhat = forwardprop(X, w_1, w_2) predict = tf.argmax(yhat, axis=1) # Backward propagation cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=yhat)) updates = tf.train.GradientDescentOptimizer(0.01).minimize(cost) # Run SGD sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) for epoch in range(100): # Train with each example for i in range(len(train_X)): sess.run(updates, feed_dict={X: train_X[i: i + 1], y: train_y[i: i + 1]}) train_accuracy = np.mean(np.argmax(train_y, axis=1) == sess.run(predict, feed_dict={X: train_X, y: train_y})) test_accuracy = np.mean(np.argmax(test_y, axis=1) == sess.run(predict, feed_dict={X: test_X, y: test_y})) print("Epoch = %d, train accuracy = %.2f%%, test accuracy = %.2f%%" % (epoch + 1, 100. * train_accuracy, 100. 
* test_accuracy)) sess.close() if __name__ == '__main__': main() pip install tensorflow==1.4.0 import tensorflow as tf # first, create a TensorFlow constant const = tf.constant(2.0, name="const") # create TensorFlow variables b = tf.Variable(2.0, name='b') c = tf.Variable(1.0, name='c') print(b) print(c) # now create some operations d = tf.add(b, c, name='d') e = tf.add(c, const, name='e') a = tf.multiply(d, e, name='a') # setup the variable initialisation #init_op = tf.global_variables_initializer() !pip install tensorflow==1.12.0 import tensorflow as tf print(tf.__version__) import tensorflow as tf print(tf.__version__) ###Output 1.12.0
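###Markdown The TF1 graph above builds the whole model out of ops (`tf.nn.sigmoid`, `tf.matmul`, `softmax_cross_entropy_with_logits`), which hides the underlying math. Purely as an added illustration, not part of the original notebook, here is a small NumPy sketch of the same forward pass and loss, so the role of the raw "logits" returned by `forwardprop` is easier to see. ###Code import numpy as np

def forwardprop_numpy(X, w_1, w_2):
    # sigmoid hidden layer followed by a linear output layer, mirroring the TF graph above
    h = 1.0 / (1.0 + np.exp(-X.dot(w_1)))
    return h.dot(w_2)  # raw logits; the softmax is applied inside the loss

def softmax_cross_entropy(logits, onehot_y):
    # numerically stable softmax + cross-entropy, the operation TF fuses internally
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(onehot_y * log_probs).sum(axis=1).mean() ###Output _____no_output_____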
notebooks/Dataset Statistics-Merged.ipynb
###Markdown Graphs on app usage ###Code onlyOpenEvents = rawEventsRdd.map( parseRawData).map(lambda x : (itemIdConversionDictionary[x[1]],1)) appvsruntime = onlyOpenEvents.reduceByKey(lambda a,b : a + b).sortBy(lambda x: x[1], ascending=False) appvsruntime.take(3) import json userappmap = json.load(open(eventsPath + "/userAppMap.txt")) def findKey(d, v): for k, val in d.iteritems(): if val == v: return k listofapps = [ (findKey(itemIdConversionDictionary, v[0]), v[1]) for v in appvsruntime.take(100)] stats_dir = eventsPath + "/stats/" if not os.path.exists(stats_dir): os.makedirs(stats_dir) numberOfEvents = float(eventsRdd.count()) outfile = open(stats_dir + "topapps_run_byusers.csv",'w') outfile.write("App,#usersruntheapp, percentage\n") for el in listofapps: outfile.write(str(el[0]) + "," + str(el[1]) + "," + str(el[1]/numberOfEvents) + '\n') outfile.close() ###Output _____no_output_____ ###Markdown Number of open events vs application ###Code import matplotlib.pyplot as plt import time import os %matplotlib inline data= appvsruntime.map(lambda x : x[1]).collect() plt.plot(data) plt.ylabel('Number of usages') plt.xlabel('Apps') plt.axis([0,200,0 , 1e7]) figure_dir = eventsPath + "/figures/" if not os.path.exists(figure_dir): os.makedirs(figure_dir) plt.savefig(figure_dir + "numberofusagevsapp" + str(int(time.time())) + ".png") plt.show() ###Output _____no_output_____ ###Markdown Percentage of usage over all events per app ###Code import matplotlib.pyplot as plt import time import os %matplotlib inline try: data except NameError: data= appvsruntime.map(lambda x : x[1]).collect() itemCount = float(len(data)) plt.plot([el/itemCount for el in data]) plt.ylabel('Percentage of usages') plt.xlabel('Apps') plt.axis([0,200,0 , 100]) figure_dir = eventsPath + "/figures/" if not os.path.exists(figure_dir): os.makedirs(figure_dir) plt.savefig(figure_dir + "percentageofusagevsapp" + str(int(time.time())) + ".png") plt.show() ###Output _____no_output_____ ###Markdown Graphs of users installed per application ###Code userappmaprdd = sc.parallelize([ (int(k), userappmap[k]) for k in userappmap.keys()]) userappmaprdd = userappmaprdd.flatMap(lambda x : [ (k, 1) for k in x[1] ] ) #(itemid, 1) format userappmaprdd = userappmaprdd.reduceByKey( lambda a,b : a + b).sortBy(lambda x : x[1], ascending=False) def findKey(d, v): for k, val in d.iteritems(): if val == v: return k listofapps = [ (findKey(itemIdConversionDictionary, v[0]), v[1]) for v in userappmaprdd.take(100)] stats_dir = eventsPath + "/stats/" if not os.path.exists(stats_dir): os.makedirs(stats_dir) numberOfUsers = float(len(userappmap)) outfile = open(stats_dir + "topapps_owned_byusers.csv",'w') outfile.write("App,#usersowntheapp, percentage\n") for el in listofapps: outfile.write(str(el[0]) + "," + str(el[1]) + "," + str(el[1]/numberOfUsers) + '\n') outfile.close() ###Output _____no_output_____ ###Markdown Number of users installed per application ###Code import matplotlib.pyplot as plt import time import os %matplotlib inline dataiteminstall= userappmaprdd.map(lambda x : x[1]).collect() plt.plot(dataiteminstall) plt.ylabel('Number of installs') plt.xlabel('Apps') plt.axis([0,10000, 0, 2000]) figure_dir = eventsPath + "/figures/" if not os.path.exists(figure_dir): os.makedirs(figure_dir) plt.savefig(figure_dir + "numberofinstallperapp" + str(int(time.time())) + ".png") plt.show() ###Output _____no_output_____ ###Markdown Percentage of users installed per application ###Code import matplotlib.pyplot as plt import time import os %matplotlib inline 
try: dataiteminstall except NameError: dataiteminstall= userappmaprdd.map(lambda x : x[1]).collect() numberOfUsers = float(len(userappmap)) plt.plot([el/numberOfUsers for el in dataiteminstall]) plt.ylabel('Percentage of users own the app') plt.xlabel('Apps') plt.axis([0,1000, 0.0, 0.5]) figure_dir = eventsPath + "/figures/" if not os.path.exists(figure_dir): os.makedirs(figure_dir) plt.savefig(figure_dir + "percentageofusersownedvsapp" + str(int(time.time())) + ".png") plt.show() ###Output _____no_output_____ ###Markdown Histogram of users having certain number of apps ###Code import os import numpy as np execfile("../script/utils.py") eventsPath = os.environ["YAHOO_DATA"] splitedRdd = sc.textFile(eventsPath + "/splitedData") splitedRdd = splitedRdd.map(parseContextData2).map(lambda x : (len(x[1][1]) + len(x[1][0]))) intervalsLarge = np.arange(0,9001,1000).tolist() histDataOpenlarge = splitedRdd.histogram(intervalsLarge) intervalsSmall = np.arange(0,8601,100).tolist() histDataOpenSmall = splitedRdd.histogram(intervalsSmall) #splitedRdd.max() 8597 histDataOpenlarge ###Output _____no_output_____ ###Markdown Histogram of number opens per user 1000 interval adopted ###Code import matplotlib.pyplot as plt import time import os %matplotlib inline plt.bar(histDataOpenlarge[0][:-1], histDataOpenlarge[1], width=1000 ) plt.ylabel('Number of users') plt.xlabel('Number of events') figure_dir = eventsPath + "/figures/" if not os.path.exists(figure_dir): os.makedirs(figure_dir) plt.savefig(figure_dir + "histof_#opens_peruser_1000int" + str(int(time.time())) + ".png") plt.show() ###Output _____no_output_____ ###Markdown Histogram of number opens per user 100 interval adopted ###Code import matplotlib.pyplot as plt import time import os %matplotlib inline plt.bar(histDataOpenSmall[0][:-1], histDataOpenSmall[1], width=100 ) plt.ylabel('Number of users') plt.xlabel('Number of events') plt.axis([0,2000,0,35000]) figure_dir = eventsPath + "/figures/" if not os.path.exists(figure_dir): os.makedirs(figure_dir) plt.savefig(figure_dir + "histof_#opens_peruser_100int" + str(int(time.time())) + ".png") plt.show() ###Output _____no_output_____ ###Markdown Histogram of the number of applications owned by each user ###Code import json userappmap = json.load(open(eventsPath + "/userAppMap.txt")) userappcountrdd = sc.parallelize([len(v) for k,v in userappmap.iteritems()]) intervalAppCount = [0,10,20,30,40,50,60,70,100,150,200,260] histDataAppcount = userappcountrdd.histogram(intervalAppCount) userappcountrdd.mean(), userappcountrdd.max(), userappcountrdd.count() histDataAppcount import matplotlib.pyplot as plt import time import os %matplotlib inline plt.bar(histDataAppcount[0][:-1], histDataAppcount[1], width=[x - intervalAppCount[i - 1] for i, x in enumerate(intervalAppCount)][1:]) plt.ylabel('Number of users') plt.xlabel('Number of apps') #figure_dir = eventsPath + "/figures/" #if not os.path.exists(figure_dir): # os.makedirs(figure_dir) #plt.savefig(figure_dir + "histof_#opens_peruser_100int" + str(int(time.time())) + ".png") plt.show() len(itemIdConversionDictionary) eventRDD = eventsConvertedRdd.groupBy(lambda x: x[0]).map(lambda (x,y): (x, sorted(list(y),key=lambda a: a[2]))) def tempRemoveUserIdDup(line): data = line[1] newData = [el[1:] for el in data] return line[0], newData eventRDD2 = eventRDD.map(tempRemoveUserIdDup) def splitRddMerged(line): open_events = [el for el in line[1] if el[7] == "App_Opened"] install_events = [el for el in line[1] if el[7] == "install"] uninstall_events = [el for el in line[1] 
if el[7] == "uninstall"] return line[0],open_events, install_events, uninstall_events splited = eventRDD2.map(splitRddMerged) splited.collect() outp = open(eventsPath + "/outputstat.txt","a") import datetime import time outp.write("--------------------------------------------------------\n") #separator outp.write(datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')) outp.write("\n") numberofusers = splited.count() outp.write("Number of users : " + str(numberofusers) + "\n") install = eventsConvertedRdd.filter(lambda x : x[8]=="install").count() uninstall = eventsConvertedRdd.filter(lambda x : x[8]=="uninstall").count() app_open = eventsConvertedRdd.filter(lambda x : x[8]=="App_Opened").count() outp.write("Number of events(install, uninstall, open, all) : " + str((install, uninstall, app_open, install + uninstall + app_open)) + "\n") outp.write("Average number of events per user(install, uninstall, open, all) : " + str((install/float(numberofusers), uninstall/float(numberofusers), app_open/float(numberofusers), (install + uninstall + app_open)/float(numberofusers))) + "\n") outp.close() ###Output _____no_output_____
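###Markdown Almost every plotting cell above repeats the same boilerplate: build `figure_dir`, create it if missing, and save the figure under a timestamped name. A small helper like the sketch below could replace that repetition; it assumes `eventsPath` is defined as in the cells above, and the helper name is only a suggestion. ###Code import os
import time
import matplotlib.pyplot as plt

def save_current_figure(name, figure_dir=None):
    # default to the <eventsPath>/figures/ directory used throughout this notebook
    if figure_dir is None:
        figure_dir = eventsPath + "/figures/"
    if not os.path.exists(figure_dir):
        os.makedirs(figure_dir)
    out_path = os.path.join(figure_dir, name + str(int(time.time())) + ".png")
    plt.savefig(out_path)
    return out_path

# usage: call save_current_figure("numberofusagevsapp") right before plt.show() ###Output _____no_output_____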
helpers/helper_classification_algorithms_1.ipynb
###Markdown Algorithms--- Classification--- Naive Bayes--- ###Code # import libraries import numpy as np import pandas as pd import seaborn as sns from matplotlib import cm import matplotlib.pyplot as plt from pandas.tools.plotting import scatter_matrix %matplotlib inline from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import Lasso from sklearn.model_selection import GridSearchCV from sklearn.model_selection import train_test_split, KFold, StratifiedKFold # alternate numbers belongs to different classification features = np.array([ 200, 1, 202, 3, 204, 5, 206, 7, 208, 9, 210, 11, 212, 13, 214, 15, 216, 17, 218, 19, 220, 21, 222, 23, 224, 25, 226, 27, 228, 29, 230, 31, 232, 33, 234, 35, 236, 37, 238, 39, 240, 41, 242, 43, 244, 45, 246, 47, 248, 49, 250, 51, 252, 53, 254, 55, 256, 57, 258, 59, 260, 61, 262, 63, 264, 65, 266, 67, 268, 69, 270, 71, 272, 73, 274, 75, 276, 77, 278, 79, 280, 81, 282, 83, 284, 85, 286, 87, 288, 89, 290, 91, 292, 93, 294, 95, 296, 97, 298, 99]) features target = np.tile([0,1],50) target df = pd.DataFrame([features,target]) df = df.T df.columns = ['num','class'] df.head() sns.countplot(df['class'],label="Count") df.plot('num','class') X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.23, random_state=68) X_train = X_train.reshape(-1,1) #y_train = y_train.reshape(-1,1) X_test = X_test.reshape(-1,1) gnb = GaussianNB() gnb gnb.fit(X_train, y_train) y_pred = gnb.predict(X_test) y_pred print("Number of mislabeled points out of a total %d points : %d" ... % (len(features),(y_test != y_pred).sum())) from sklearn.metrics import classification_report classificationReport = classification_report(y_test, y_pred) classificationReport def plot_classification_report(cr, title='Classification report ', with_avg_total=False, cmap=plt.cm.Blues): lines = cr.split('\n') classes = [] plotMat = [] for line in lines[2 : (len(lines) - 3)]: #print(line) t = line.split() # print(t) classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] print(v) plotMat.append(v) if with_avg_total: aveTotal = lines[len(lines) - 1].split() classes.append('avg/total') vAveTotal = [float(x) for x in t[1:len(aveTotal) - 1]] plotMat.append(vAveTotal) plt.imshow(plotMat, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() x_tick_marks = np.arange(3) y_tick_marks = np.arange(len(classes)) plt.xticks(x_tick_marks, ['precision', 'recall', 'f1-score'], rotation=45) plt.yticks(y_tick_marks, classes) plt.tight_layout() plt.ylabel('Classes') plt.xlabel('Measures') plot_classification_report(classificationReport) ###Output [1.0, 1.0, 1.0] [1.0, 1.0, 1.0] ###Markdown GridSearchCV ###Code features.shape, target.shape lasso = Lasso(random_state=0) alphas = np.logspace(-4, -0.5, 30) tuned_parameters = [{'alpha': alphas}] n_folds = 3 clf = GridSearchCV(lasso, tuned_parameters, cv=n_folds, refit=False) clf.fit(X_train.reshape(-1,1), y_train) scores = clf.cv_results_['mean_test_score'] scores_std = clf.cv_results_['std_test_score'] plt.figure().set_size_inches(8, 6) plt.semilogx(alphas, scores) # plot error lines showing +/- std. 
errors of the scores std_error = scores_std / np.sqrt(n_folds) plt.semilogx(alphas, scores + std_error, 'b--') plt.semilogx(alphas, scores - std_error, 'b--') # alpha=0.2 controls the translucency of the fill color plt.fill_between(alphas, scores + std_error, scores - std_error, alpha=0.2) plt.ylabel('CV score +/- std error') plt.xlabel('alpha') plt.axhline(np.max(scores), linestyle='--', color='.5') plt.xlim([alphas[0], alphas[-1]]) ###Output _____no_output_____ ###Markdown Logistic Regression ###Code from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix features.shape, target.shape model = LogisticRegression() model model.fit(X_train.reshape(-1,1), y_train) y_pred = model.predict(X_test.reshape(-1,1)) y_pred confusion_matrix(y_test, y_pred) plt.scatter(X_test.reshape(-1,1), y_test, marker='+', cmap='autumn') plt.scatter(X_test.reshape(-1,1), y_pred, marker='^') plt.scatter(features, target) ###Output _____no_output_____ ###Markdown Linear SVC ###Code from sklearn.svm import LinearSVC, SVC model = LinearSVC() model %time model.fit(X_train.reshape(-1,1), y_train) y_pred = model.predict(X_test.reshape(-1,1)) y_pred confusion_matrix(y_test, y_pred) print(model.coef_) print(model.intercept_) x0 = df.loc[df['class'] == 0] x1 = df.loc[df['class'] == 1] x0.head() plt.scatter(x0['num'], x0['class'], marker='^') plt.scatter(x1['num'], x1['class'], marker='o') ###Output _____no_output_____
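###Markdown Printing `model.coef_` and `model.intercept_` gives the raw parameters of the fitted LinearSVC, but with a single input feature they are easier to read as a decision threshold: the classifier switches class where `coef * x + intercept` changes sign. The cell below is an added illustration of that, reusing the `model`, `x0` and `x1` objects defined above. ###Code # with one feature, the decision boundary of the linear SVM is where coef * x + intercept == 0
boundary = -model.intercept_[0] / model.coef_[0][0]
print("decision boundary at x = %.2f" % boundary)

plt.scatter(x0['num'], x0['class'], marker='^')
plt.scatter(x1['num'], x1['class'], marker='o')
plt.axvline(boundary, linestyle='--', color='k')
plt.show() ###Output _____no_output_____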
short_path.ipynb
###Markdown Find the shortest path Described below is a short algorithm to find the shortest path on a checkerboard pattern to a specific cell. ###Code import matplotlib.pyplot as plt import numpy as np plt.hold(True) for ln in range(1, 5): plt.plot((0, 5), (ln, ln), color=(0., 0., 0.)) plt.plot((ln, ln), (0, 5), color=(0., 0., 0.)) for itm in range(0, 25): if itm == 22: plt.text(np.floor(itm/5)+0.3, itm%5 + 0.3, str(itm), color=(1., 0., 0.)) else: plt.text(np.floor(itm/5)+0.3, itm%5 + 0.3, str(itm)) plt.show() ###Output _____no_output_____ ###Markdown Fields 22 is adjacent to the exit from the gameboard. Any horizontal step and any vertical step takes one time unit. Therefore to reach the exti from fields 22 takes one time unit. Any field adjacent to 22 takes an extra time unit and so on. Python implementation Each cell is represented as a class that holds a reference to each adjacent cell. ###Code class Cell: def __init__(self, number): self.number = number self.weight = 0 self.neighbors = [] ###Output _____no_output_____ ###Markdown The gameboard is created by creating the cells and linking them so each cell knows its neighbors. Each cell in the middle has four neigbors; each cell on the edge has three neighbors; each corner cell has two neighbors. ###Code def create_gameboard(): gameboard = [Cell(i) for i in range(25)] for i in range(25): gameboard[i].number = i if i % 5 != 4: gameboard[i].neighbors.append(gameboard[i+1]) if i % 5 != 0: gameboard[i].neighbors.append(gameboard[i-1]) if i >= 5: gameboard[i].neighbors.append(gameboard[i-5]) if i < 20: gameboard[i].neighbors.append(gameboard[i+5]) return gameboard ###Output _____no_output_____ ###Markdown To calculate the weights of each cell walk back from the end cell and increase the weight of each cell reached by one for each successive step. ###Code def calc_weights(end_points): current_cells = end_points step = 1 while len(current_cells): next_cells = [] for cell in current_cells: cell.weight = step next_cells.extend([itm for itm in cell.neighbors if itm.weight == 0]) step += 1 current_cells = next_cells ###Output _____no_output_____ ###Markdown Get the end cells, and calculate the weight for the gameboard. ###Code gameboard = create_gameboard() end_cells = [cell for cell in gameboard if cell.number==22] calc_weights(end_cells) ###Output _____no_output_____ ###Markdown Plot the gameboard and instead of the cell numbers write the weight of each cell on the board. ###Code plt.hold(True) for ln in range(1, 5): plt.plot((0, 5), (ln, ln), color=(0., 0., 0.)) plt.plot((ln, ln), (0, 5), color=(0., 0., 0.)) for i in range(0, 25): no = gameboard[i].number if gameboard[i] in end_cells: plt.text(np.floor(no/5)+0.3, no%5 + 0.3, str(gameboard[i].weight), color=(1., 0., 0.)) else: plt.text(np.floor(no/5)+0.3, no%5 + 0.3, str(gameboard[i].weight)) plt.show() ###Output _____no_output_____ ###Markdown Non passable cells Some cells may not be passable (e.g. they are blocked by an object). If a cell cannot be passed it will be skipped when walking the the cells back from the end cells. The cells can simply be removed from the gameboard. Since constantly recreating the gameboard may be unncessarily difficult it may be easier to mark a cell as non passable and ignore those cells when walking the gameboard. ###Code class Cell: def __init__(self, number): self.number = number self.weight = 0 self.neighbors = [] self.passable = True ###Output _____no_output_____ ###Markdown The new algorithm has to check if a cell is passable. 
###Code def calc_weights(end_points): current_cells = end_points step = 1 while len(current_cells): next_cells = [] for cell in current_cells: cell.weight = step next_cells.extend([itm for itm in cell.neighbors if (itm.weight == 0 and itm.passable == True)]) step += 1 current_cells = next_cells ###Output _____no_output_____ ###Markdown Create a new gameboard; set some cells to non passable and calculate the weights. ###Code gameboard = create_gameboard() for i in [5, 7, 8, 19, 17, 16]: gameboard[i].passable = False end_cells = [cell for cell in gameboard if cell.number==22] calc_weights(end_cells) ###Output _____no_output_____ ###Markdown When drawing the gameboard leave non passable cells blank. ###Code plt.hold(True) for ln in range(1, 5): plt.plot((0, 5), (ln, ln), color=(0., 0., 0.)) plt.plot((ln, ln), (0, 5), color=(0., 0., 0.)) for i in range(len(gameboard)): if gameboard[i].passable == False: continue no = gameboard[i].number if gameboard[i] in end_cells: plt.text(np.floor(no/5)+0.3, no%5 + 0.3, str(gameboard[i].weight), color=(1., 0., 0.)) else: plt.text(np.floor(no/5)+0.3, no%5 + 0.3, str(gameboard[i].weight)) plt.show() ###Output _____no_output_____ ###Markdown Fields with additional penalty Some fields may have an additional penalty. The penalty may be used to indicate fields that should not be transversed (i.e. fields where a gamepiece might be damaged). That penalty has to be added to the weight on top of the time units it takes to reach the exit. That way a shortest way can be found while making sure the shortest way goes around fields that should be avoided.The new Cell class holds the extra penalty. ###Code class Cell: def __init__(self, number): self.number = number self.weight = 0 self.neighbors = [] self.passable = True self.extra_penalty = 0 ###Output _____no_output_____ ###Markdown Since each cell's weight is based on the number of steps the cell is from the end point, as well as the cell's extra penalty, as well as the extra penealties in the path between the cell and the end cell a cell's value can no longer be assigned solely based on the number of cells between the cell and the end cell. A longer path may yield a lower weight if the shorter path contains extra penalties.A cell's weight must be the lowest weight possible because the shortest possible path is sought. ###Code def calc_weights(end_points): current_cells = end_points for cell in current_cells: cell.weight = 1 + cell.extra_penalty while len(current_cells): next_cells = set() for cell in current_cells: for next_cell in cell.neighbors: if next_cell.passable == False: continue new_weight = cell.weight + next_cell.extra_penalty + 1 if new_weight < next_cell.weight or next_cell.weight == 0 : next_cell.weight = new_weight next_cells.add(next_cell) current_cells = next_cells gameboard = create_gameboard() for i in [5, 7, 8, 19, 17, 16]: gameboard[i].passable = False gameboard[6].extra_penalty = 2 end_cells = [itm for itm in gameboard if itm.number==22 or itm.number==21] calc_weights(end_cells) ###Output _____no_output_____ ###Markdown Plot the gameboard and show the extra penalty in bracets. 
###Code plt.hold(True) for ln in range(1, 5): plt.plot((0, 5), (ln, ln), color=(0., 0., 0.)) plt.plot((ln, ln), (0, 5), color=(0., 0., 0.)) for i in range(len(gameboard)): if gameboard[i].passable == False: continue no = gameboard[i].number if gameboard[i] in end_cells: plt.text(np.floor(no/5)+0.3, no%5 + 0.3, "{0:d}({1:d})".format( gameboard[i].weight, gameboard[i].extra_penalty), color=(1., 0., 0.)) else: plt.text(np.floor(no/5)+0.3, no%5 + 0.3, "{0:d}({1:d})".format( gameboard[i].weight, gameboard[i].extra_penalty)) plt.show() ###Output _____no_output_____ ###Markdown Example ###Code gameboard = create_gameboard() for i in [16, 17, 18, 6, 7, 8, 9]: gameboard[i].passable = False gameboard[20].extra_penalty = 10 end_cells = [itm for itm in gameboard if itm.number==22] calc_weights(end_cells) plt.hold(True) for ln in range(1, 5): plt.plot((0, 5), (ln, ln), color=(0., 0., 0.)) plt.plot((ln, ln), (0, 5), color=(0., 0., 0.)) for i in range(len(gameboard)): if gameboard[i].passable == False: continue no = gameboard[i].number if gameboard[i] in end_cells: plt.text(np.floor(no/5)+0.3, no%5 + 0.3, "{0:d}({1:d})".format( gameboard[i].weight, gameboard[i].extra_penalty), color=(1., 0., 0.)) else: plt.text(np.floor(no/5)+0.3, no%5 + 0.3, "{0:d}({1:d})".format( gameboard[i].weight, gameboard[i].extra_penalty)) plt.show() ###Output _____no_output_____
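###Markdown The weights say how far each cell is from the exit, but they do not by themselves spell out a route. As an added sketch, not part of the original write-up, a concrete path can be read off the weights by repeatedly stepping to the reachable neighbour with the smallest weight until no neighbour is closer to the exit. It reuses the `Cell` objects and the `gameboard` from the example above; starting from cell 0 is an arbitrary choice. ###Code def extract_path(start_cell):
    # follow strictly decreasing weights towards an end cell;
    # weight 0 marks cells that were never reached, so they are skipped
    path = [start_cell]
    current = start_cell
    while True:
        candidates = [n for n in current.neighbors if n.passable and n.weight > 0]
        if not candidates:
            break
        best = min(candidates, key=lambda c: c.weight)
        if best.weight >= current.weight:
            break  # no neighbour is closer to the exit, so this is an end cell
        path.append(best)
        current = best
    return [c.number for c in path]

print(extract_path(gameboard[0])) ###Output _____no_output_____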
Debata_oksfordzka.ipynb
###Markdown Drawing participants for an Oxford-style debate2018-06-05 ###Code # Load the responses from the Google form
data = pd.read_csv(r'C:\Users\Ol\Documents\DATA ANALYSIS\Debata oksfordzka.csv')
# Check the group sizes ('Preferowana grupa' = preferred group)
data['Preferowana grupa'].value_counts()
# We have 13 sign-ups in total; to get a 7 / 6 split between the groups we need to randomly draw
# two people from the 'Za tezą' (for the motion) group and move them to the opposite group
data[data['Preferowana grupa'] == 'Za tezą'].sample(2)
# 'Wylosowana grupa' = drawn group, 'Przeciw tezie' = against the motion
data['Wylosowana grupa'] = data['Preferowana grupa']
data.iloc[3,3] = 'Przeciw tezie'
data.iloc[9,3] = 'Przeciw tezie'
data ###Output _____no_output_____
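###Markdown The two rows above were moved by hard-coding their `iloc` positions, and `sample(2)` gives a different draw on every run. The cell below is an added sketch, not part of the original analysis, of a reproducible variant: fix `random_state` (42 here is an arbitrary choice) and write the new group through the sampled index labels instead of positional indices. ###Code # reproducible draw of two people from the 'Za tezą' group
drawn = data[data['Preferowana grupa'] == 'Za tezą'].sample(2, random_state=42)
data['Wylosowana grupa'] = data['Preferowana grupa']
# reassign the drawn participants to the opposite group via their index labels
data.loc[drawn.index, 'Wylosowana grupa'] = 'Przeciw tezie'
data['Wylosowana grupa'].value_counts() ###Output _____no_output_____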
Dimensionality Reduction/PCA/SparsePCA_StandardScaler.ipynb
###Markdown Sparse PCA with StandardScaler This code template is for Sparse Principal Component Analysis(SparsePCA) along with Standard Scaler in python for dimensionality reduction technique and Data Rescaling. It is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance, keeping only the most significant singular vectors to project the data to a lower dimensional space. Required Packages ###Code import warnings import itertools import numpy as np import pandas as pd import seaborn as se import matplotlib.pyplot as plt import matplotlib.pyplot as plt from mpl_toolkits import mplot3d from sklearn.preprocessing import LabelEncoder,StandardScaler from sklearn.decomposition import SparsePCA from numpy.linalg import eigh warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown InitializationFilepath of CSV file ###Code #filepath file_path= " " ###Output _____no_output_____ ###Markdown List of features which are required for model training . ###Code #x_values features=[] ###Output _____no_output_____ ###Markdown Target feature for prediction. ###Code #y_value target=' ' ###Output _____no_output_____ ###Markdown Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ###Code df=pd.read_csv(file_path) df.head() ###Output _____no_output_____ ###Markdown Feature SelectionsIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and target/outcome to Y. ###Code X = df[features] Y = df[target] ###Output _____no_output_____ ###Markdown Data PreprocessingSince the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes. ###Code def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ###Output _____no_output_____ ###Markdown Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ###Code f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ###Output _____no_output_____ ###Markdown Data RescalingFor rescaling the data StandardScaler function of Sklearn is used. StandardScaler standardizes features by removing the mean and scaling the data element to unit variance. 
The standard score of a sample x is calculated as: z = (x - u) / s ,where u is the mean of the training samples and s is the standard deviation of the training samples Standard Scaler:sklearn.preprocessing.StandardScaler(*, copy=True, with_mean=True, with_std=True)Reference URL to StandardScaler API :https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html ###Code X_Scaled=StandardScaler().fit_transform(X) X=pd.DataFrame(X_Scaled,columns=X.columns) X.head(3) ###Output _____no_output_____ ###Markdown Choosing the number of componentsA vital part of using Sparse PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components.This curve quantifies how much of the total, dimensional variance is contained within the first N components. Explained Variance Explained variance refers to the variance explained by each of the principal components (eigenvectors). It can be represented as a function of ratio of related eigenvalue and sum of eigenvalues of all eigenvectors. The function below returns a list with the values of explained variance and also plots cumulative explained variance ###Code def explained_variance_plot(X): cov_matrix = np.cov(X, rowvar=False) #this function returns the co-variance matrix for the features egnvalues, egnvectors = eigh(cov_matrix) #eigen decomposition is done here to fetch eigen-values and eigen-vectos total_egnvalues = sum(egnvalues) var_exp = [(i/total_egnvalues) for i in sorted(egnvalues, reverse=True)] plt.plot(np.cumsum(var_exp)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); return var_exp var_exp=explained_variance_plot(X) ###Output _____no_output_____ ###Markdown Scree plotThe scree plot helps you to determine the optimal number of components. The eigenvalue of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope. The components on the shallow slope contribute little to the solution. ###Code plt.plot(var_exp, 'ro-', linewidth=2) plt.title('Scree Plot') plt.xlabel('Principal Component') plt.ylabel('Proportion of Variance Explained') plt.show() ###Output _____no_output_____ ###Markdown ModelSparse PCA is used to decompose a multivariate dataset in a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, Sparse PCA finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Tunning parameters reference : [API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.SparsePCA.html) ###Code spca = SparsePCA(n_components=15) spcaX = pd.DataFrame(data = spca.fit_transform(X)) ###Output _____no_output_____ ###Markdown Output Dataframe ###Code finalDf = pd.concat([spcaX, Y], axis = 1) finalDf.head() ###Output _____no_output_____ ###Markdown Sparse PCA with StandardScaler This code template is for Sparse Principal Component Analysis(SparsePCA) along with Standard Scaler in python for dimensionality reduction technique and Data Rescaling. It is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance, keeping only the most significant singular vectors to project the data to a lower dimensional space. 
Required Packages ###Code import warnings import itertools import numpy as np import pandas as pd import seaborn as se import matplotlib.pyplot as plt import matplotlib.pyplot as plt from mpl_toolkits import mplot3d from sklearn.preprocessing import LabelEncoder,StandardScaler from sklearn.decomposition import SparsePCA from numpy.linalg import eigh warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown InitializationFilepath of CSV file ###Code #filepath file_path= " " ###Output _____no_output_____ ###Markdown List of features which are required for model training . ###Code #x_values features=[] ###Output _____no_output_____ ###Markdown Target feature for prediction. ###Code #y_value target=' ' ###Output _____no_output_____ ###Markdown Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ###Code df=pd.read_csv(file_path) df.head() ###Output _____no_output_____ ###Markdown Feature SelectionsIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and target/outcome to Y. ###Code X = df[features] Y = df[target] ###Output _____no_output_____ ###Markdown Data PreprocessingSince the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes. ###Code def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) def EncodeY(df): if len(df.unique())<=2: return df else: un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort') df=LabelEncoder().fit_transform(df) EncodedT=[xi for xi in range(len(un_EncodedT))] print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT)) return df x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=EncodeY(NullClearner(Y)) X.head() ###Output _____no_output_____ ###Markdown Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ###Code f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ###Output _____no_output_____ ###Markdown Data RescalingFor rescaling the data StandardScaler function of Sklearn is used. StandardScaler standardizes features by removing the mean and scaling the data element to unit variance. 
The standard score of a sample x is calculated as: z = (x - u) / s ,where u is the mean of the training samples and s is the standard deviation of the training samples Standard Scaler:sklearn.preprocessing.StandardScaler(*, copy=True, with_mean=True, with_std=True)Reference URL to StandardScaler API :https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html ###Code X_Scaled=StandardScaler().fit_transform(X) X=pd.DataFrame(X_Scaled,columns=X.columns) X.head(3) ###Output _____no_output_____ ###Markdown Choosing the number of componentsA vital part of using Sparse PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components.This curve quantifies how much of the total, dimensional variance is contained within the first N components. Explained Variance Explained variance refers to the variance explained by each of the principal components (eigenvectors). It can be represented as a function of ratio of related eigenvalue and sum of eigenvalues of all eigenvectors. The function below returns a list with the values of explained variance and also plots cumulative explained variance ###Code def explained_variance_plot(X): cov_matrix = np.cov(X, rowvar=False) #this function returns the co-variance matrix for the features egnvalues, egnvectors = eigh(cov_matrix) #eigen decomposition is done here to fetch eigen-values and eigen-vectos total_egnvalues = sum(egnvalues) var_exp = [(i/total_egnvalues) for i in sorted(egnvalues, reverse=True)] plt.plot(np.cumsum(var_exp)) plt.xlabel('number of components') plt.ylabel('cumulative explained variance'); return var_exp var_exp=explained_variance_plot(X) ###Output _____no_output_____ ###Markdown Scree plotThe scree plot helps you to determine the optimal number of components. The eigenvalue of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope. The components on the shallow slope contribute little to the solution. ###Code plt.plot(var_exp, 'ro-', linewidth=2) plt.title('Scree Plot') plt.xlabel('Principal Component') plt.ylabel('Proportion of Variance Explained') plt.show() ###Output _____no_output_____ ###Markdown ModelSparse PCA is used to decompose a multivariate dataset in a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, Sparse PCA finds the set of sparse components that can optimally reconstruct the data. The amount of sparseness is controllable by the coefficient of the L1 penalty, given by the parameter alpha. Tunning parameters reference : [API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.SparsePCA.html) ###Code spca = SparsePCA(n_components=15) spcaX = pd.DataFrame(data = spca.fit_transform(X)) ###Output _____no_output_____ ###Markdown Output Dataframe ###Code finalDf = pd.concat([spcaX, Y], axis = 1) finalDf.head() ###Output _____no_output_____
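###Markdown The template fixes `n_components=15`. As an added illustration, not part of the original template, the cumulative explained variance computed by `explained_variance_plot` can also be used to pick the number of components programmatically, and `spca.components_` can be inspected to see how sparse the fitted loadings actually are. The 95% threshold below is an arbitrary choice. ###Code # number of components needed to cover roughly 95% of the variance
cum_var = np.cumsum(var_exp)
n_components_95 = int(np.argmax(cum_var >= 0.95)) + 1
print("components needed for 95% of the variance:", n_components_95)

# fraction of exactly-zero loadings in each fitted sparse component
sparsity = (spca.components_ == 0).mean(axis=1)
print("zero-loading fraction per component:", np.round(sparsity, 2)) ###Output _____no_output_____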
.ipynb_checkpoints/Projet_IA-checkpoint.ipynb
###Markdown Introduction Traitement des données ###Code import os import pandas as pd from sklearn.impute import SimpleImputer import matplotlib.pyplot as plt import numpy as np import tarfile from six.moves import urllib from sklearn.preprocessing import OrdinalEncoder import seaborn as sns DOWNLOAD_ROOT = "https://raw.githubusercontent.com/Snargol/projet-IA-CESI/main/" DATA_PATH = os.path.join("datasets", "data") # Méthode de récupération des données def fetch_data(data_url=DOWNLOAD_ROOT,data_path = DATA_PATH): if not os.path.isdir(data_path): os.makedirs(data_path) employee_url = data_url+"employee_survey_data_full.csv" manager_url = data_url +"manager_survey_data_full.csv" general_url = data_url +"general_data_full.csv" employee_path = os.path.join(data_path, "employee_survey_data_full.csv") manager_path = os.path.join(data_path, "manager_survey_data_full.csv") general_path = os.path.join(data_path, "general_data_full.csv") urllib.request.urlretrieve(employee_url, employee_path) urllib.request.urlretrieve(manager_url, manager_path) urllib.request.urlretrieve(general_url, general_path) fetch_data() def load_data(name): csv_path = os.path.join("./datasets/data/"+name+".csv") return pd.read_csv(csv_path) def load_final_data(): general_data = load_data("general_data_full") employee_survey_data = load_data("employee_survey_data_full") manager_survey_data = load_data("manager_survey_data_full") temp_result = pd.merge(general_data,employee_survey_data,on='EmployeeID') result = pd.merge(temp_result,manager_survey_data,on='EmployeeID') return result columns_to_fill = ['NumCompaniesWorked','TotalWorkingYears','EnvironmentSatisfaction','JobSatisfaction','WorkLifeBalance',] binary_columns = ['Attrition','Gender'] nominal_columns = ['Department','EducationField','JobRole','MaritalStatus'] ordinal_columns = [ { "label":'BusinessTravel', "order":['Non-Travel','Travel_Rarely','Travel_Frequently'] } ] data = load_final_data() # Supprime les colonnes dont l'écart-type vaut 0 (une donnée unique pour toutes les lignes de la colonne) def del_std_of_0(_data): _deleted_columns = [] for each in _data.describe().columns : if _data.describe()[str(each)]['std'] == 0: _data = _data.drop(columns=[each]) _deleted_columns.append([each]) print("Deleted Columns : ") print(_deleted_columns) return _data data = del_std_of_0(data.copy()) def check_contains_nan(_data): count_nan = 0 for each in columns_to_fill: temp_count_nan = count_nan + _data[each].isna().sum() count_nan= temp_count_nan return count_nan check_contains_nan(data) def fill_data(_data_to_process) : imputer = SimpleImputer(strategy="median") _data_to_process_index = data_num.index _data_to_process_labels = data_num.columns imputer.fit(_data_to_process) data_filled_temp = imputer.transform(_data_to_process) data_filled = pd.DataFrame(data_filled_temp,columns=_data_to_process_labels) return data_filled def delete_nan_row(_data_to_process): for each in columns_to_fill: _data_to_process.drop(_data_to_process[_data_to_process[each].isna()].index,inplace=True) return _data_to_process data_num = data.select_dtypes(include=[np.number]) data_num = fill_data(data_num) # we concat the data data_cat = data.select_dtypes(object) dataset = pd.concat([data_num, data_cat], axis=1) dataset.head() ###Output _____no_output_____ ###Markdown Répartition des données ###Code #Afficher le Dataset pour constater la répartition fig = plt.figure(figsize=(15,15)) for index,i in enumerate(dataset.columns.tolist()): ax=plt.subplot(8,4,index+1) sns.countplot(x=i,data=dataset,ax=ax) 
fig.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown Attrition proportions TODO: split this into several parts: first the ordinal features, then the nominal ones and finally the numeric ones. Drop the unneeded columns beforehand. ###Code # plot, for each listed feature, the attrition counts and the attrition proportion per category
def drawAttritionRepart(listColumns):
    fig = plt.figure(figsize=(15,(len(listColumns) / 2) * 3.75))
    for idx,i in enumerate(listColumns):
        index = idx * 2
        crosstab = pd.crosstab(index=dataset[i], columns=dataset["Attrition"])
        ax1=plt.subplot(len(listColumns) // 2,4,index+1)
        crosstab.plot(kind="bar",stacked=True,ax=ax1)
        plt.title(i + ' / Attrition | ∑')
        plt.xlabel(i)
        plt.ylabel('Number of Employees')
        ax2=plt.subplot(len(listColumns) // 2,4,index+2)
        table= pd.crosstab(dataset[i],dataset.Attrition)
        table.div(table.sum(1).astype(float), axis=0).plot(kind='bar', stacked=True,ax=ax2)
        plt.title(i + ' / Attrition | %')
        plt.xlabel(i)
        plt.ylabel('Proportion of Employees')
    fig.tight_layout()
    plt.show()

drawAttritionRepart(['Age', 'Education', 'JobLevel', 'NumCompaniesWorked'])

# plot every feature against attrition to look for trends
fig = plt.figure(figsize=(26,30))
for idx,i in enumerate(dataset.columns.tolist()[:len(dataset.columns) - 2]):
    ax =plt.subplot(7,4,idx+1)
    ax.scatter(dataset[i], dataset.Attrition)
    ax.title.set_text(str(i))
fig.tight_layout()
plt.show()

# pair plot of selected numeric features, coloured by attrition
num_col_eda=['Age','DistanceFromHome','PercentSalaryHike','MonthlyIncome','TotalWorkingYears','YearsAtCompany']
num_attrition=num_col_eda+['Attrition']
sns.pairplot(dataset[num_attrition], hue = 'Attrition') ###Output _____no_output_____
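###Markdown The plots above compare each feature with attrition visually. As an added quick check, not part of the original notebook, the numeric features can also be ranked by their linear correlation with attrition; this assumes the Attrition column holds the usual 'Yes'/'No' labels of this HR dataset. ###Code # encode Attrition as 0/1 and correlate every numeric column with it
attrition_flag = dataset['Attrition'].map({'No': 0, 'Yes': 1})
numeric_cols = dataset.select_dtypes(include=[np.number]).columns
correlations = dataset[numeric_cols].corrwith(attrition_flag).sort_values()
print(correlations) ###Output _____no_output_____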
analysis/PhenoGraph_clustering/01_PhenoGraph_clustering.ipynb
###Markdown Construct hierarchy from Seurat clustering for t3 dataset ###Code suppressMessages({ source('../../R/gene_signature.R') library(tidyverse) }) ###Output Warning message: “package ‘limma’ was built under R version 3.5.1” ###Markdown load the dataset and tidy the dataset ###Code t3_scaled_matrix.path = "../../data/scaledData/t3.scaledexpression.cutree.csv" #t4_scaled_matrix.path = "../../data/scaledData/t4.scaledexpression.cutree.csv" t3_clusters.path = "../../data/DESeq/t3.Seurat.cluster.csv" #t4_clusters.path = "../../data/DESeq/t4.Seurat.cluster.csv" t3_count_matrix.path = '../../data/DESeq/t_3k_genesymbol.csv' suppressMessages({ t3_clusters = read_csv(t3_clusters.path) }) colnames(t3_clusters)[1] = 'cell' head(t3_clusters, n=2) suppressMessages({ t3_scaled_matrix = read.table(t3_scaled_matrix.path, header=T, row.names = 1) }) head(t3_scaled_matrix_cleaned[1:4, 1:4]) t3_scaled_matrix_cleaned = t3_scaled_matrix[, -(ncol(t3_scaled_matrix) - (0:3))] rownames(t3_scaled_matrix_cleaned) = gsub('\\.', '-', rownames(t3_scaled_matrix_cleaned)) t3_count_matrix = read.table(t3_count_matrix.path, header=1, row.names=1) t3_count_matrix_cleaned = t3_count_matrix colnames(t3_count_matrix_cleaned) = gsub('\\.', '-', colnames(t3_count_matrix_cleaned)) t3_count_matrix_cleaned = t(t3_count_matrix_cleaned) ###Output _____no_output_____ ###Markdown Compute the average per gene within each cluster ###Code membership = select(t3_clusters, cell, res.0.6) colnames(membership) = c('cell', 'membership') cluster_numbers = unique(membership$membership) cluster_numbers clusters_average = c() for (cluster in cluster_numbers) { cells_id_in_cluster = membership %>% filter(membership == cluster) %>% select(cell) %>% unlist #print(cells_id_in_cluster[1:2]) #print(head(cells_id_in_cluster)) cluster_scaled_matrix = t3_scaled_matrix_cleaned[cells_id_in_cluster,] #print(cluster_scaled_matrix[1:5, 1:5]) cluster_average = apply(cluster_scaled_matrix, 2, mean) clusters_average = rbind(clusters_average, cluster_average) } rownames(clusters_average) = cluster_numbers clusters_average[,1:10] # cluster-wise gene average ###Output _____no_output_____ ###Markdown Create the hclust on the cluster average ###Code dist_clusters = dist(clusters_average) hclust_obj = hclust(dist_clusters, method='ward.D2') options(repr.plot.width = 5, repr.plot.height = 3) par(oma=c(0,0,0,0),mar=c(1,0,1,0)) plot(as.dendrogram(hclust_obj), type = "triangle", ylab = "Distance", axes=F) pdf('../../figures/dendrogram_seurat_res_0_6.pdf', width=5, height=3) par(oma=c(0,0,0,0),mar=c(1,1,1,1)) plot(as.dendrogram(hclust_obj), type = "triangle", ylab = "Distance", axes=F) dev.off() ###Output _____no_output_____ ###Markdown Create the hierarchy matrix to be used in running the differentially expressed genes ###Code K = seq(2, length(cluster_numbers)) memberships = c() for (k in K) { membership_k = cutree(hclust_obj, k=k) names(membership_k) = NULL #print(k) #print(membership_k) memberships = cbind(memberships, membership_k) } rownames(memberships) = cluster_numbers colnames(memberships) = K memberships # each row represent a cluster and each column represent a cut that produces the given number of clusters ###Output _____no_output_____ ###Markdown Expand the hierarchy matrix for the cells - For each cell (row names) in the original expression matrix (`t3_scaled_matrix_cleaned`), - find which cluster they belong to from the cluster identity matrix (`membership`) - append the corresponding rows from `membership` to create the ancestor matrix for all cells- 
save the cell ancestor matrix as a csv. ###Code cells_id = rownames(t3_scaled_matrix_cleaned) cell_ancestor_matrix = c() cell_memberships = c() for (cell in cells_id) { #print(cell) cell_membership = membership[membership[,1] == cell, 2] %>% as.character cell_memberships = c(cell_memberships, cell_membership) #print(cell_membership) cell_ancestor_matrix = rbind(cell_ancestor_matrix, memberships[cell_membership,]) } #print(cell_ancestor_matrix) rownames(cell_ancestor_matrix) = cells_id cell_ancestor_matrix[1:5, 1:5] write.csv(cell_ancestor_matrix, file='../../data/treeHierarchy/cell_ancestor_matrix.csv') ###Output _____no_output_____ ###Markdown Run the differentially expressed genes by traversing each level goal: return a data frame that defines the DE for clusters (cluster_df)- For each level of the hierarchy (where there will be exactly one cut occuring), find out the subset where the cut occurs - subset the set of split where it occurs - compute the DE for each node and get the genes - define the cluster_id and genes as a data frame - write the information into the `cluster_DE` ###Code q_thresh = 0.2 FC_thresh = 1.0 head(expression[1:4, 1:5]) cluster_DE = c() for (k in seq(ncol(cell_ancestor_matrix))[1:2]) { print(k) if (k == 1) { unique_clusters = unique(cell_ancestor_matrix[,k]) expression = t3_scaled_matrix_cleaned for (unique_cluster in unique_clusters) { unique_cluster_membership = (cell_ancestor_matrix[,k] == unique_cluster) over = OverExpressedGenes(unique_cluster_membership, t(expression), q_thresh, FC_thresh) subtree_id = paste(data.matrix(cell_ancestor_matrix[which(unique_cluster_membership)[1], seq(k)]), collapse='_') print(paste(subtree_id, length(over))) if (length(over) > 0) { cluster_DE = rbind(cluster_DE, data.frame(id = subtree_id, gene=over)) } else { cluster_DE = rbind(cluster_DE, data.frame(id = subtree_id, gene=NA)) } } } else { # find where the split occurs summary_matrix = table(cell_ancestor_matrix[,k-1], cell_ancestor_matrix[,k]) parent_split_idx = apply(summary_matrix, 1, function(x) sum(x > 0) == 2) parent_split = names(parent_split_idx)[parent_split_idx] # subset for the part expression matrix of the expression matrix at level k - 1 where it subset_idx = cell_ancestor_matrix[,k-1] == parent_split expression = t3_scaled_matrix_cleaned[subset_idx, ] cell_ancestor_matrix_subset = cell_ancestor_matrix[subset_idx,] unique_clusters = unique(cell_ancestor_matrix_subset[,k]) for (unique_cluster in unique_clusters) { unique_cluster_membership = (cell_ancestor_matrix_subset[,k] == unique_cluster) over = OverExpressedGenes(unique_cluster_membership, t(expression), q_thresh, FC_thresh) subtree_id = paste(data.matrix(cell_ancestor_matrix_subset[which(unique_cluster_membership)[1], seq(k)]), collapse='_') print(paste(subtree_id, length(over))) if (length(over) > 0) { cluster_DE = rbind(cluster_DE, data.frame(id = subtree_id, gene=over)) } else { cluster_DE = rbind(cluster_DE, data.frame(id = subtree_id, gene=NA)) } } } } summary(cluster_DE) head(cluster_DE) tail(cluster_DE) #' @param cell_ancestor_matrix n cells by k levels matrix #' @param expression_matrix n cells by m genes matrix #' @return return a list of the (cluster_id, genes) pairs that are overexpressed in the cluster against #' its sibling DE_hclust = function(cell_ancestor_matrix, expression_matrix, FC_thresh=1.0, q_thresh=0.2) { cluster_DE = c() for (k in seq(ncol(cell_ancestor_matrix))) { print(k) if (k == 1) { unique_clusters = unique(cell_ancestor_matrix[,k]) expression = expression_matrix for 
(unique_cluster in unique_clusters) {
                unique_cluster_membership = (cell_ancestor_matrix[,k] == unique_cluster)
                over = OverExpressedGenes(unique_cluster_membership, t(expression), q_thresh, FC_thresh)
                subtree_id = paste(data.matrix(cell_ancestor_matrix[which(unique_cluster_membership)[1], seq(k)]), collapse='_')
                print(paste(subtree_id, length(over)))
                if (length(over) > 0) {
                    cluster_DE = rbind(cluster_DE, data.frame(id = subtree_id, gene=over))
                } else {
                    cluster_DE = rbind(cluster_DE, data.frame(id = subtree_id, gene=NA))
                }
            }
        } else {
            # find the parent cluster where the split occurs
            summary_matrix = table(cell_ancestor_matrix[,k-1], cell_ancestor_matrix[,k])
            parent_split_idx = apply(summary_matrix, 1, function(x) sum(x > 0) == 2)
            parent_split = names(parent_split_idx)[parent_split_idx]
            # subset the expression matrix to the cells of the parent cluster (level k-1) that splits at level k
            subset_idx = cell_ancestor_matrix[,k-1] == parent_split
            expression = expression_matrix[subset_idx, ]
            cell_ancestor_matrix_subset = cell_ancestor_matrix[subset_idx,]
            unique_clusters = unique(cell_ancestor_matrix_subset[,k])
            for (unique_cluster in unique_clusters) {
                unique_cluster_membership = (cell_ancestor_matrix_subset[,k] == unique_cluster)
                over = OverExpressedGenes(unique_cluster_membership, t(expression), q_thresh, FC_thresh)
                subtree_id = paste(data.matrix(cell_ancestor_matrix_subset[which(unique_cluster_membership)[1], seq(k)]), collapse='_')
                print(paste(subtree_id, length(over)))
                if (length(over) > 0) {
                    cluster_DE = rbind(cluster_DE, data.frame(id = subtree_id, gene=over))
                } else {
                    cluster_DE = rbind(cluster_DE, data.frame(id = subtree_id, gene=NA))
                }
            }
        }
    }
    return(cluster_DE)
}
head(cell_ancestor_matrix[1:4, 1:4])
head(t3_scaled_matrix_cleaned[1:4, 1:4])
cluster_genes_DE = DE_hclust(cell_ancestor_matrix, t3_scaled_matrix_cleaned)
# sanity check: the count matrix and the ancestor matrix list the same cells in the same order (should be 0)
sum(rownames(t3_count_matrix_cleaned) != rownames(cell_ancestor_matrix))
t3_count_matrix_cleaned_subset = t3_count_matrix_cleaned[rownames(cell_ancestor_matrix),]
dim(t3_count_matrix_cleaned_subset)
dim(cell_ancestor_matrix)
FC_thresh = 1.1
q_thresh = 0.2
cluster_genes_DE = DE_hclust(cell_ancestor_matrix[,1:2], t3_count_matrix_cleaned_subset, FC_thresh=FC_thresh, q_thresh = q_thresh)
cluster_genes_DE
###Output _____no_output_____ ###Markdown Compute the differential expression on the lowest level ###Code
t3_count_matrix[1:2, 1:2]
# sanity-check the Wilcoxon test mechanics on a single cell's profile with random labels
expression_test = expression[1,] %>% unlist
label = sample(c(T, F), length(expression_test), replace=T)
res = wilcox.test(expression_test~label)
res$statistic
res$p.value
library(qvalue)
source('../../R/gene_signature.R')
# toy genes-by-cells matrix to exercise the sourced DE helper
X = matrix(runif(500), ncol=10, nrow=50)
colnames(X) = paste0('c', seq(ncol(X)))
rownames(X) = paste0('gene', seq(nrow(X)))
head(X)
child_node_1_idx = sample(c(T, F), ncol(X), replace=T)
sum(child_node_1_idx)
DE_childrenNodes.wilcox(X, child_node_1_idx, !child_node_1_idx, q_thresh = 0.99, FC_thresh = 1)
###Output _____no_output_____ ###Markdown Naive DE: compare one bottom-level cluster against all remaining cells (a sketch completing this comparison is appended below) ###Code
bottom_level_membership = cell_ancestor_matrix[, 5]  # cluster assignment at the cut stored in the 5th column of the hierarchy
head(bottom_level_membership)
cluster = 1
is_cluster_1 = (bottom_level_membership == cluster)
t3_count_matrix_cleaned_subset %>% dim
length(bottom_level_membership)
###Output _____no_output_____
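###Markdown The over-expression test itself is sourced from `../../R/gene_signature.R` and is not shown in this notebook. As a reference point, the cell below is a minimal sketch of what a function like `OverExpressedGenes` might do, assuming a one-vs-rest Wilcoxon rank-sum test with BH-adjusted p-values plus a simple fold-change filter on group means. The function name `over_expressed_genes_sketch`, its arguments, and the ratio-of-means "fold change" are illustrative assumptions; the real implementation may differ (for example, it may use `qvalue` and log fold changes). ###Code
# Minimal sketch (assumption, not the sourced implementation):
# one-vs-rest Wilcoxon test per gene + BH correction + fold-change filter.
over_expressed_genes_sketch = function(in_cluster, genes_by_cells, q_thresh = 0.2, FC_thresh = 1.0) {
    # in_cluster: logical vector over cells (TRUE = cell belongs to the cluster of interest)
    # genes_by_cells: numeric matrix with genes in rows and cells in columns
    stopifnot(length(in_cluster) == ncol(genes_by_cells))
    p_values = apply(genes_by_cells, 1, function(gene_expr) {
        wilcox.test(gene_expr[in_cluster], gene_expr[!in_cluster], alternative = "greater")$p.value
    })
    q_values = p.adjust(p_values, method = "BH")
    # ratio of group means as a crude fold change; only meaningful for non-negative (count-like) data
    fold_change = rowMeans(genes_by_cells[, in_cluster, drop = FALSE]) /
        rowMeans(genes_by_cells[, !in_cluster, drop = FALSE])
    rownames(genes_by_cells)[which(q_values < q_thresh & fold_change > FC_thresh)]
}
# illustrative call, mirroring how OverExpressedGenes is invoked above:
# over = over_expressed_genes_sketch(is_cluster_1, t(t3_count_matrix_cleaned_subset), q_thresh = 0.2, FC_thresh = 1.1)
###Output _____no_output_____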
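###Markdown The "Naive DE" cell above sets up the one-vs-rest comparison for a single bottom-level cluster but stops before computing anything. The cell below is a hedged sketch of how it could be completed: it loops over every bottom-level cluster and reuses the sketch function above (the real `OverExpressedGenes` from `gene_signature.R` can be substituted); the thresholds are illustrative. ###Code
# Sketch: one-vs-rest DE for every cluster at the bottom level of the hierarchy
# (assumes over_expressed_genes_sketch from the previous cell, or the sourced OverExpressedGenes).
naive_DE = c()
for (cluster in unique(bottom_level_membership)) {
    in_cluster = (bottom_level_membership == cluster)
    over = over_expressed_genes_sketch(in_cluster, t(t3_count_matrix_cleaned_subset),
                                       q_thresh = 0.2, FC_thresh = 1.1)
    if (length(over) > 0) {
        naive_DE = rbind(naive_DE, data.frame(id = cluster, gene = over))
    } else {
        naive_DE = rbind(naive_DE, data.frame(id = cluster, gene = NA))
    }
}
head(naive_DE)
###Output _____no_output_____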
notebooks/exploratory/Untitled Folder/DataUnderstanding.ipynb
###Markdown Analysis of the ICFHR 2020 Competition on Image Retrieval for Historical Handwritten Fragments (HisFrag20) Dataset Data Understanding for solving a Jigsaw Puzzle of Historical Fragments by Timo Bohnstedt ###Code
# import packages
from os import listdir, path
from os.path import isfile, join, splitext
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from PIL import Image

# declare global variables
train = "/Users/beantown/Projekte/Jigsaw-Puzzling/Data/hisfrag20"
test = "/Users/beantown/Projekte/Jigsaw-Puzzling/Data/hisfrag20_test"

# prepare environment
%matplotlib inline

# prepare plots
from helper_functions import set_size
sns.set(style="whitegrid")
plt.style.use('seaborn')
width = 496.85625

def get_info(data_path):
    # load the file names of the dataset; data_path can point to the train or the test directory
    file_names = [splitext(f)[0] for f in listdir(data_path) if isfile(join(data_path, f))]
    # split each image name into writer, page and fragment IDs
    file_names_parts = [i.split("_") for i in file_names]
    return pd.DataFrame.from_records(file_names_parts, columns=['writer_id', 'page_id', 'fragment_id'])

df = get_info(data_path=test)
df.head()
###Output _____no_output_____