Given the following text description, write Python code to implement the functionality described below step by step Description: Anna KaRNNa In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. Step3: And we can see the characters encoded as integers. Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. Step5: Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. Step7: If you implemented get_batches correctly, the above output should look something like ``` x [[55 63 69 22 6 76 45 5 16 35] [ 5 69 1 5 12 52 6 5 56 52] [48 29 12 61 35 35 8 64 76 78] [12 5 24 39 45 29 12 56 5 63] [ 5 29 6 5 29 78 28 5 78 29] [ 5 13 6 5 36 69 78 35 52 12] [63 76 12 5 18 52 1 76 5 58] [34 5 73 39 6 5 12 52 36 5] [ 6 5 29 78 12 79 6 61 5 59] [ 5 78 69 29 24 5 6 52 5 63]] y [[63 69 22 6 76 45 5 16 35 35] [69 1 5 12 52 6 5 56 52 29] [29 12 61 35 35 8 64 76 78 28] [ 5 24 39 45 29 12 56 5 63 29] [29 6 5 29 78 28 5 78 29 45] [13 6 5 36 69 78 35 52 12 43] [76 12 5 18 52 1 76 5 58 52] [ 5 73 39 6 5 12 52 36 5 78] [ 5 29 78 12 79 6 61 5 59 63] [78 69 29 24 5 6 52 5 63 76]] `` although the exact numbers will be different. Check to make sure the data is shifted over one step fory`. Building the model Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network. <img src="assets/charRNN.png" width=500px> Inputs First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size. Exercise Step8: LSTM Cell Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer. We first create a basic LSTM cell with python lstm = tf.contrib.rnn.BasicLSTMCell(num_units) where num_units is the number of units in the hidden layers in the cell. 
Then we can add dropout by wrapping it with python tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this python tf.contrib.rnn.MultiRNNCell([cell]*num_layers) This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like ```python def build_cell(num_units, keep_prob) Step9: RNN Output Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text. If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$. We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$. One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names. Exercise Step10: Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$. Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss. Exercise Step11: Optimizer Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. 
This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step. Step12: Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN. Exercise Step13: Hyperparameters Here are the hyperparameters for the network. batch_size - Number of sequences running through the network in one pass. num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. lstm_size - The number of units in the hidden layers. num_layers - Number of hidden LSTM layers to use learning_rate - Learning rate for training keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from. Tips and Tricks Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular Step14: Time for training This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint. Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}.ckpt Exercise Step15: Saved checkpoints Read up on saving and loading checkpoints here Step16: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. Step17: Here, pass in the path to a checkpoint and sample from the network.
Python Code: import time from collections import namedtuple import numpy as np import tensorflow as tf Explanation: Anna KaRNNa In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> End of explanation with open('anna.txt', 'r') as f: text=f.read() vocab = sorted(set(text)) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32) Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. End of explanation text[:100] Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever. End of explanation encoded[:100] Explanation: And we can see the characters encoded as integers. End of explanation len(vocab) Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. End of explanation def get_batches(arr, n_seqs, n_steps): '''Create a generator that returns batches of size n_seqs x n_steps from arr. Arguments --------- arr: Array you want to make batches from n_seqs: Batch size, the number of sequences per batch n_steps: Number of sequence steps per batch ''' # Get the number of characters per batch and number of batches we can make characters_per_batch = n_seqs * n_steps n_batches = len(arr) // characters_per_batch # Keep only enough characters to make full batches arr = arr[:characters_per_batch * n_batches] # Reshape into n_seqs rows arr = arr.reshape((n_seqs, -1)) for n in range(0, arr.shape[1], n_steps): # The features x = arr[:, n: n+n_steps] # The targets, shifted by one y = np.zeros_like(x) y[:, 0:-1] = x[:, 1:] y[:, -1] = x[:, 0] yield x, y Explanation: Making training mini-batches Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this: <img src="assets/[email protected]" width=500px> <br> We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator. The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep. After that, we need to split arr into $N$ sequences. 
You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches. Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this: python y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0] where x is the input batch and y is the target batch. The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide. Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself. End of explanation batches = get_batches(encoded, 10, 50) x, y = next(batches) b = get_batches(encoded, 2, 32) x, y = next(b) print("x: \n", x) print("y: \n", y) print("x's shape: \n", x.shape) print("y's shape: \n", y.shape) print('x\n', x[:10, :10]) print('\ny\n', y[:10, :10]) Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps. End of explanation def build_inputs(batch_size, num_steps): ''' Define placeholders for inputs, targets, and dropout Arguments --------- batch_size: Batch size, number of sequences per batch num_steps: Number of sequence steps in a batch ''' # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, shape=(batch_size, num_steps), name="inputs") targets = tf.placeholder(tf.int32, shape=(batch_size, num_steps), name="targets") # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, keep_prob Explanation: If you implemented get_batches correctly, the above output should look something like ``` x [[55 63 69 22 6 76 45 5 16 35] [ 5 69 1 5 12 52 6 5 56 52] [48 29 12 61 35 35 8 64 76 78] [12 5 24 39 45 29 12 56 5 63] [ 5 29 6 5 29 78 28 5 78 29] [ 5 13 6 5 36 69 78 35 52 12] [63 76 12 5 18 52 1 76 5 58] [34 5 73 39 6 5 12 52 36 5] [ 6 5 29 78 12 79 6 61 5 59] [ 5 78 69 29 24 5 6 52 5 63]] y [[63 69 22 6 76 45 5 16 35 35] [69 1 5 12 52 6 5 56 52 29] [29 12 61 35 35 8 64 76 78 28] [ 5 24 39 45 29 12 56 5 63 29] [29 6 5 29 78 28 5 78 29 45] [13 6 5 36 69 78 35 52 12 43] [76 12 5 18 52 1 76 5 58 52] [ 5 73 39 6 5 12 52 36 5 78] [ 5 29 78 12 79 6 61 5 59 63] [78 69 29 24 5 6 52 5 63 76]] `` although the exact numbers will be different. Check to make sure the data is shifted over one step fory`. Building the model Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. 
Then we can connect them up into the whole network. <img src="assets/charRNN.png" width=500px> Inputs First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size. Exercise: Create the input placeholders in the function below. End of explanation def build_lstm(lstm_size, num_layers, batch_size, keep_prob): ''' Build LSTM cell. Arguments --------- keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability lstm_size: Size of the hidden layers in the LSTM cells num_layers: Number of LSTM layers batch_size: Batch size ''' ### Build the LSTM Cell # Use a basic LSTM cell lstm_cells = [tf.contrib.rnn.BasicLSTMCell(lstm_size) for _ in range(num_layers)] # Add dropout to the cell outputs lstm_cells = [tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) for lstm in lstm_cells] # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell(lstm_cells) initial_state = cell.zero_state(batch_size, tf.float32) return cell, initial_state Explanation: LSTM Cell Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer. We first create a basic LSTM cell with python lstm = tf.contrib.rnn.BasicLSTMCell(num_units) where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with python tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this python tf.contrib.rnn.MultiRNNCell([cell]*num_layers) This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like ```python def build_cell(num_units, keep_prob): lstm = tf.contrib.rnn.BasicLSTMCell(num_units) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)]) ``` Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell. We also need to create an initial cell state of all zeros. This can be done like so python initial_state = cell.zero_state(batch_size, tf.float32) Below, we implement the build_lstm function to create these LSTM cells and the initial state. End of explanation def build_output(lstm_output, in_size, out_size): ''' Build a softmax layer, return the softmax output and logits. Arguments --------- lstm_output: List of output tensors from the LSTM layer in_size: Size of the input tensor, for example, size of the LSTM cells out_size: Size of this softmax layer ''' # Reshape output so it's a bunch of rows, one row for each step for each sequence. 
# Concatenate lstm_output over axis 1 (the columns) seq_output = tf.concat((lstm_output), axis=1) # Reshape seq_output to a 2D tensor with lstm_size columns x = tf.reshape(seq_output, shape=(-1, in_size)) # Connect the RNN outputs to a softmax layer with tf.variable_scope('softmax'): # Create the weight and bias variables here softmax_w = tf.Variable( tf.truncated_normal((in_size, out_size), stddev=0.1), name="softmax_w" ) softmax_b = tf.Variable( tf.zeros(out_size), name="softmax_b" ) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and sequence logits = tf.matmul(x, softmax_w) + softmax_b # Use softmax to get the probabilities for predicted characters out = tf.nn.softmax(logits, name="predications") return out, logits Explanation: RNN Output Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text. If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$. We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$. One we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be default. To avoid this, we wrap the variables in a variable scope so we can give them unique names. Exercise: Implement the output layer in the function below. End of explanation def build_loss(logits, targets, lstm_size, num_classes): ''' Calculate the loss from the logits and the targets. Arguments --------- logits: Logits from final fully connected layer targets: Targets for supervised learning lstm_size: Number of LSTM hidden units num_classes: Number of classes in targets ''' # One-hot encode targets and reshape to match logits, one row per sequence per step y_one_hot = tf.one_hot(targets, depth=num_classes) y_reshaped = tf.reshape(y_one_hot, shape=logits.get_shape()) # Softmax cross entropy loss loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped) loss = tf.reduce_mean(loss) return loss Explanation: Training loss Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. 
Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$. Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss. Exercise: Implement the loss calculation in the function below. End of explanation def build_optimizer(loss, learning_rate, grad_clip): ''' Build optmizer for training, using gradient clipping. Arguments: loss: Network loss learning_rate: Learning rate for optimizer ''' # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) return optimizer Explanation: Optimizer Here we build the optimizer. Normal RNNs have have issues gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step. End of explanation class CharRNN: def __init__(self, num_classes, batch_size=64, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 else: batch_size, num_steps = batch_size, num_steps tf.reset_default_graph() # Build the input placeholder tensors self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps) # Build the LSTM cell cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob) ### Run the data through the RNN layers # First, one-hot encode the input tokens x_one_hot = tf.one_hot(self.inputs, depth=num_classes) # Run each sequence step through the RNN with tf.nn.dynamic_rnn lstm_outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state) self.final_state = state # Get softmax predictions and logits self.prediction, self.logits = build_output(lstm_outputs, lstm_size, num_classes) # Loss and optimizer (with gradient clipping) self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes) self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip) Explanation: Build the network Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN. Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network. 
End of explanation batch_size = 100 # Sequences per batch num_steps = 100 # Number of sequence steps per batch lstm_size = 256 # Size of hidden layers in LSTMs num_layers = 2 # Number of LSTM layers learning_rate = 0.005 # Learning rate keep_prob = 0.5 # Dropout keep probability Explanation: Hyperparameters Here are the hyperparameters for the network. batch_size - Number of sequences running through the network in one pass. num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. lstm_size - The number of units in the hidden layers. num_layers - Number of hidden LSTM layers to use learning_rate - Learning rate for training keep_prob - The dropout keep probability when training. If you're network is overfitting, try decreasing this. Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from. Tips and Tricks Monitoring Validation Loss vs. Training Loss If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular: If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on. If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer) Approximate number of parameters The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are: The number of parameters in your model. This is printed when you start training. The size of your dataset. 1MB file is approximately 1 million characters. These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples: I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger. I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss. Best models strategy The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end. 
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance. By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative. End of explanation epochs = 20 # Save every N iterations save_every_n = 200 model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps, lstm_size=lstm_size, num_layers=num_layers, learning_rate=learning_rate) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') counter = 0 for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 batches = get_batches(encoded, batch_size, num_steps) for x, y in batches: counter += 1 start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: keep_prob, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.loss, model.final_state, model.optimizer], feed_dict=feed) end = time.time() print('Epoch: {}/{}... '.format(e+1, epochs), 'Training Step: {}... '.format(counter), 'Training loss: {:.4f}... '.format(batch_loss), '{:.4f} sec/batch'.format((end-start))) if (counter % save_every_n == 0): saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) Explanation: Time for training This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint. Here I'm saving checkpoints with the format i{iteration number}_l{# hidden layer units}.ckpt Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU. 
End of explanation tf.train.get_checkpoint_state('checkpoints') Explanation: Saved checkpoints Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables End of explanation def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) Explanation: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. End of explanation tf.train.latest_checkpoint('checkpoints') checkpoint = tf.train.latest_checkpoint('checkpoints') samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i200_l256.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i600_l256.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i1200_l256.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) Explanation: Here, pass in the path to a checkpoint and sample from the network. End of explanation
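Looking back at the batching step, the text above asks you to check that the targets are the inputs shifted over one step. A quick added sanity check (an illustration using the get_batches and encoded defined earlier, not part of the original notebook) is:

```python
# Added check: targets should be the inputs shifted left by one step,
# with the first input character wrapped around to the last target position.
x, y = next(get_batches(encoded, 10, 50))
assert (y[:, :-1] == x[:, 1:]).all()
assert (y[:, -1] == x[:, 0]).all()
print('get_batches produces correctly shifted targets')
```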
Given the following text description, write Python code to implement the functionality described below step by step Description: Dynamical systems A (discrete time) dynamical system describes the evolution of the state of a system and the observations that can be obtained from the state. The general form is \begin{eqnarray} x_0 & \sim & \pi(x_0) \ x_t & = & f(x_{t-1}, \epsilon_t) \ y_t & = & g(x_{t}, \nu_t) \end{eqnarray} Here, $f$ and $g$ are transition and observation functions. The variables $\epsilon_t$ and $\nu_t$ are assumed to be unknown random noise components with a known distribution. The initial state, $x_0$, can be either known exactly or at least and initial state distribution density $\pi$ is known. The model describes the relation between observations $y_t$ and states $x_t$. Frequency modulated sinusoidal signal \begin{eqnarray} \epsilon_t & \sim & \mathcal{N}(0, P) \ x_{1,t} & = & \mu + a x_{1,t-1} + \epsilon_t \ x_{2,t} & = & x_{2,t-1} + x_{1,t-1} \ \nu_t & \sim & \mathcal{N}(0, R) \ y_t & = & \cos(2\pi x_{2,t}) + \nu_t \end{eqnarray} Step1: Stochastic Kinetic Model Stochastic Kinetic Model is a general modelling technique to describe the interactions of a set of objects such as molecules, individuals or items. This class of models are particularly useful in modeling queuing systems, production plants, chemical, ecological, biological systems or biological cell cycles at a sufficiently detailed level. It is a good example of a a dynamical model that displays quite interesting and complex behaviour. The model is best motivated first with a specific example, known as the Lotka-Volterra predator-prey model Step2: State Space Representation Step3: In this model, the state space can be visualized as a 2-D lattice of nonnegative integers, where each point $(x_1, x_2)$ denotes the number of smileys versus the zombies. The model simulates a Markov chain on a directed graph where possible transitions are shown as edges where the edge color shade is proportional to the transition probability (darker means higher probability). The edges are directed, the arrow tips are not shown. There are three types of edges, each corresponding to one event type Step4: Generic code to simulate an SKM Step5: A simple ecosystem Suppose there are $x_1$ rabbits and $x_2$ clovers. Rabbits eat clovers with a rate of $k_1$ to reproduce. Similarly, rabbits die with rate $k_2$ and a clover grows. Pray (Clover) Step6: A simple ecological network Food (Clover) Step7: Alternative model Constant food supply for the prey. <h1><center> 🐰🍀 $\rightarrow$ 🐰🐰🍀 </center></h1> <h1><center> 🐰🐺 $\rightarrow$ 🐺 </center></h1> <h1><center> 🐺 $\rightarrow$ 🐺🐺 </center></h1> <h1><center> 🐺 $\rightarrow$ ☠️ </center></h1> This model is flawed as it allows predators to reproduce even when no prey is there. Step8: 🙀 Step9: From Diaconis and Freedman A random walk on the unit interval. Start with $x$, choose one of the two intervals $[0,x]$ and $[x,1]$ with equal probability $0.5$, then choose a new $x$ uniformly on the interval. 
Step10: A random switching system \begin{eqnarray} A(0) & = & \left(\begin{array}{cc} 0.444 & -0.3733 \\ 0.06 & 0.6000 \end{array}\right) \\ B(0) & = & \left(\begin{array}{c} 0.3533 \\ 0 \end{array}\right) \\ A(1) & = & \left(\begin{array}{cc} -0.8 & -0.1867 \\ 0.1371 & 0.8 \end{array}\right) \\ B(1) & = & \left(\begin{array}{c} 1.1 \\ 0.1 \end{array}\right) \\ w & = & 0.2993 \end{eqnarray} \begin{eqnarray} c_t & \sim & \mathcal{BE}(c; w) \\ x_t & = & A(c_t) x_{t-1} + B(c_t) \end{eqnarray} Step11: Polya Urn Models Many urn models can be represented as instances of the stochastic kinetic model Mahmoud Step12: Polya
Python Code: %matplotlib inline import numpy as np import matplotlib.pylab as plt N = 100 T = 100 a = 0.9 xm = 0.9 sP = np.sqrt(0.001) sR = np.sqrt(0.01) x1 = np.zeros(N) x2 = np.zeros(N) y = np.zeros(N) for i in range(N): if i==0: x1[0] = xm x2[0] = 0 else: x1[i] = xm + a*x1[i-1] + np.random.normal(0, sP) x2[i] = x2[i-1] + x1[i-1] y[i] = np.cos(2*np.pi*x2[i]/T) + np.random.normal(0, sR) plt.figure() plt.plot(x) plt.figure() plt.plot(y) plt.show() Explanation: Dynamical systems A (discrete time) dynamical system describes the evolution of the state of a system and the observations that can be obtained from the state. The general form is \begin{eqnarray} x_0 & \sim & \pi(x_0) \ x_t & = & f(x_{t-1}, \epsilon_t) \ y_t & = & g(x_{t}, \nu_t) \end{eqnarray} Here, $f$ and $g$ are transition and observation functions. The variables $\epsilon_t$ and $\nu_t$ are assumed to be unknown random noise components with a known distribution. The initial state, $x_0$, can be either known exactly or at least and initial state distribution density $\pi$ is known. The model describes the relation between observations $y_t$ and states $x_t$. Frequency modulated sinusoidal signal \begin{eqnarray} \epsilon_t & \sim & \mathcal{N}(0, P) \ x_{1,t} & = & \mu + a x_{1,t-1} + \epsilon_t \ x_{2,t} & = & x_{2,t-1} + x_{1,t-1} \ \nu_t & \sim & \mathcal{N}(0, R) \ y_t & = & \cos(2\pi x_{2,t}) + \nu_t \end{eqnarray} End of explanation %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,0],[1,1],[0,1]]) B = np.array([[2,0],[0,2],[0,0]]) S = B-A N = S.shape[1] M = S.shape[0] STEPS = 50000 k = np.array([0.8,0.005, 0.3]) X = np.zeros((N,STEPS)) x = np.array([100,100]) T = np.zeros(STEPS) t = 0 for i in range(STEPS-1): rho = k*np.array([x[0], x[0]*x[1], x[1]]) srho = np.sum(rho) if srho == 0: break idx = np.random.choice(M, p=rho/srho) dt = np.random.exponential(scale=1./srho) x = x + S[idx,:] t = t + dt X[:, i+1] = x T[i+1] = t plt.figure(figsize=(10,5)) plt.plot(T,X[0,:], '.b') plt.plot(T,X[1,:], '.r') plt.legend([u'Smiley',u'Zombie']) plt.show() Explanation: Stochastic Kinetic Model Stochastic Kinetic Model is a general modelling technique to describe the interactions of a set of objects such as molecules, individuals or items. This class of models are particularly useful in modeling queuing systems, production plants, chemical, ecological, biological systems or biological cell cycles at a sufficiently detailed level. It is a good example of a a dynamical model that displays quite interesting and complex behaviour. The model is best motivated first with a specific example, known as the Lotka-Volterra predator-prey model: A Predator Prey Model (Lotka-Volterra) Consider a population of two species, named as smiley 😊 and zombie 👹. Our dynamical model will describe the evolution of the number of individuals in this entire population. We define $3$ different event types: Event 1: Reproduction The smiley, denoted by $X_1$, reproduces by division so one smiley becomes two smileys after a reproduction event. <h1><center> 😊 $\rightarrow$ 🙂 😊 </center></h1> In mathematical notation, we denote this event as \begin{eqnarray} X_1 & \xrightarrow{k_1} 2 X_1 \end{eqnarray} Here, $k_1$ denotes the rate constant, the rate at which a single single smiley is reproducing according to the exponential distribution. 
When there are $x_1$ smileys, each reproducing with rate $k_1$, the rate at which a reproduction event occurs is simply \begin{eqnarray} h_1(x, k_1) & = & k_1 x_1 \end{eqnarray} The rate $h_1$ is the rate of a reproduction event, increasing proportionally to the number of smileys. Event 2: Consumption The predatory species, the zombies, denoted as $X_2$, transform the smileys into zombies. So one zombie 'consumes' one smiley to create a new zombie. <h1><center> 😥 👹 $\rightarrow$ 👹 👹 </center></h1> The consumption event is denoted as \begin{eqnarray} X_1 + X_2 & \xrightarrow{k_2} 2 X_2 \end{eqnarray} Here, $k_2$ denotes the rate constant, the rate at which a zombie and a smiley meet, and the zombie transforms the smiley into a new zombie. When there are $x_1$ smileys and $x_2$ zombies, there are in total $x_1 x_2$ possible meeting events, With each meeting event occurring at rate $k_2$, the rate at which a consumption event occurs is simply \begin{eqnarray} h_2(x, k_2) & = & k_2 x_1 x_2 \end{eqnarray} The rate $h_2$ is the rate of a consumption event. There are more consumptions if there are more zombies or smileys. Event 3: Death Finally, in this story, unlike Hollywood blockbusters, the zombies are mortal and they decease after a certain random time. <h1><center> 👹 $\rightarrow $ ☠️ </center></h1> This is denoted as $X_2$ disappearing from the scene. \begin{eqnarray} X_2 & \xrightarrow{k_3} \emptyset \end{eqnarray} A zombie death event occurs, by a similar argument as reproduction, at rate \begin{eqnarray} h_3(x, k_3) & = & k_3 x_2 \end{eqnarray} Model All equations can be written \begin{eqnarray} X_1 & \xrightarrow{k_1} 2 X_1 & \hspace{3cm}\text{Reproduction}\ X_1 + X_2 & \xrightarrow{k_2} 2 X_2 & \hspace{3cm}\text{Consumption} \ X_2 & \xrightarrow{k_3} \emptyset & \hspace{3cm} \text{Death} \end{eqnarray} More compactly, in matrix form we can write: \begin{eqnarray} \left( \begin{array}{cc} 1 & 0 \ 1 & 1 \ 0 & 1 \end{array} \right) \left( \begin{array}{cc} X_1 \ X_2 \end{array} \right) \rightarrow \left( \begin{array}{cc} 2 & 0 \ 0 & 2 \ 0 & 0 \end{array} \right) \left( \begin{array}{cc} X_1 \ X_2 \end{array} \right) \end{eqnarray} The rate constants $k_1, k_2$ and $k_3$ denote the rate at which a single event is occurring according to the exponential distribution. 
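To make these rates concrete, here is a small added example (not in the original notebook) evaluating the hazards at the initial state of the simulation above, x = (100, 100) with k = (0.8, 0.005, 0.3):

```python
import numpy as np

# Hazard rates at the initial state of the predator-prey simulation above.
k = np.array([0.8, 0.005, 0.3])
x = np.array([100, 100])
h = np.array([k[0] * x[0],          # reproduction: 0.8 * 100 = 80
              k[1] * x[0] * x[1],   # consumption:  0.005 * 100 * 100 = 50
              k[2] * x[1]])         # death:        0.3 * 100 = 30
Z = h.sum()                         # total event rate: 160
print(h / Z)                        # event probabilities: 0.5, 0.3125, 0.1875
print(1 / Z)                        # mean waiting time to the next event: 0.00625
```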
All objects of type $X_1$ trigger the next event \begin{eqnarray} h_1(x, k_1) & = & k_1 x_1 \ h_2(x, k_2) & = & k_2 x_1 x_2 \ h_3(x, k_2) & = & k_3 x_2 \end{eqnarray} The dynamical model is conditioned on the type of the next event, denoted by $r(j)$ \begin{eqnarray} Z(j) & = & \sum_i h_i(x(j-1), k_i) \ \pi_i(j) & = & \frac{h_i(x(j-1), k_i) }{Z(j)} \ r(j) & \sim & \mathcal{C}(r; \pi(j)) \ \Delta(j) & \sim & \mathcal{E}(1/Z(j)) \ t(j) & = & t(j-1) + \Delta(j) \ x(j) & = & x(j-1) + S(r(j)) \end{eqnarray} End of explanation plt.figure(figsize=(10,5)) plt.plot(X[0,:],X[1,:], '.') plt.xlabel('# of Smileys') plt.ylabel('# of Zombies') plt.axis('square') plt.show() Explanation: State Space Representation End of explanation %matplotlib inline import networkx as nx import numpy as np import matplotlib.pylab as plt from itertools import product # Maximum number of smileys or zombies N = 20 #A = np.array([[1,0],[1,1],[0,1]]) #B = np.array([[2,0],[0,2],[0,0]]) #S = B-A k = np.array([0.6,0.05, 0.3]) G = nx.DiGraph() pos = [u for u in product(range(N),range(N))] idx = [u[0]*N+u[1] for u in pos] G.add_nodes_from(idx) edge_colors = [] edges = [] for y,x in product(range(N),range(N)): source = (x,y) rho = k*np.array([source[0], source[0]*source[1], source[1]]) srho = np.sum(rho) if srho==0: srho = 1. if x<N-1: # Birth target = (x+1,y) edges.append((source[0]*N+source[1], target[0]*N+target[1])) edge_colors.append(rho[0]/srho) if y<N-1 and x>0: # Consumption target = (x-1,y+1) edges.append((source[0]*N+source[1], target[0]*N+target[1])) edge_colors.append(rho[1]/srho) if y>0: # Death target = (x,y-1) edges.append((source[0]*N+source[1], target[0]*N+target[1])) edge_colors.append(rho[2]/srho) G.add_edges_from(edges) col_dict = {u: c for u,c in zip(edges, edge_colors)} cols = [col_dict[u] for u in G.edges() ] plt.figure(figsize=(9,9)) nx.draw(G, pos, arrows=False, width=2, node_size=20, node_color="white", edge_vmin=0,edge_vmax=0.7, edge_color=cols, edge_cmap=plt.cm.gray_r ) plt.xlabel('# of smileys') plt.ylabel('# of zombies') #plt.gca().set_visible('on') plt.show() Explanation: In this model, the state space can be visualized as a 2-D lattice of nonnegative integers, where each point $(x_1, x_2)$ denotes the number of smileys versus the zombies. The model simulates a Markov chain on a directed graph where possible transitions are shown as edges where the edge color shade is proportional to the transition probability (darker means higher probability). The edges are directed, the arrow tips are not shown. 
There are three types of edges, each corresponding to one event type: $\rightarrow$ Birth $\nwarrow$ Consumption $\downarrow$ Death End of explanation def simulate_skm(A, B, k, x0, STEPS=1000): S = B-A N = S.shape[1] M = S.shape[0] X = np.zeros((N,STEPS)) x = x0 T = np.zeros(STEPS) t = 0 X[:,0] = x for i in range(STEPS-1): # rho = k*np.array([x[0]*x[2], x[0], x[0]*x[1], x[1]]) rho = [k[j]*np.prod(x**A[j,:]) for j in range(M)] srho = np.sum(rho) if srho == 0: break idx = np.random.choice(M, p=rho/srho) dt = np.random.exponential(scale=1./srho) x = x + S[idx,:] t = t + dt X[:, i+1] = x T[i+1] = t return X,T Explanation: Generic code to simulate an SKM End of explanation #%matplotlib nbagg %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,1],[1,0]]) B = np.array([[2,0],[0,1]]) k = np.array([0.02,0.3]) x0 = np.array([10,40]) X,T = simulate_skm(A,B,k,x0,STEPS=10000) plt.figure(figsize=(10,5)) plt.plot(T,X[0,:], '.b',ms=2) plt.plot(T,X[1,:], '.g',ms=2) plt.legend([u'Rabbit', u'Clover']) plt.show() Explanation: A simple ecosystem Suppose there are $x_1$ rabbits and $x_2$ clovers. Rabbits eat clovers with a rate of $k_1$ to reproduce. Similarly, rabbits die with rate $k_2$ and a clover grows. Pray (Clover): 🍀 Predator (Rabbit): 🐰 <h1><center> 🐰🍀 $\rightarrow$ 🐰🐰 </center></h1> <h1><center> 🐰 $\rightarrow$ 🍀 </center></h1> In this system, clearly the total number of objects $x_1+x_2 = N$ is constant. Probabilistic question What is the distribution of the number of rabbits at time $t$ Statistical questions What are the parameters $k_1$ and $k_2$ of the system given observations of rabbit counts at specific times $t_1, t_2, \dots, t_K$ Given rabbit counts at time $t$, predict counts at time $t + \Delta$ End of explanation #%matplotlib nbagg %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,0,1],[1,0,0],[1,1,0],[0,1,0]]) B = np.array([[2,0,0],[0,0,1],[0,2,0],[0,0,1]]) #k = np.array([0.02,0.09, 0.001, 0.3]) #x0 = np.array([1000,1000,10000]) k = np.array([0.02,0.19, 0.001, 2.8]) x0 = np.array([1000,1,10000]) X,T = simulate_skm(A,B,k,x0,STEPS=50000) plt.figure(figsize=(10,5)) plt.plot(T,X[0,:], '.y',ms=2) plt.plot(T,X[1,:], '.r',ms=2) plt.plot(T,X[2,:], '.g',ms=2) plt.legend([u'Rabbit',u'Wolf',u'Clover']) plt.show() sm = int(sum(X[:,0]))+1 Hist = np.zeros((sm,sm)) STEPS = X.shape[1] for i in range(STEPS): Hist[int(X[1,i]),int(X[0,i])] = Hist[int(X[1,i]),int(X[0,i])] + 1 plt.figure(figsize=(10,5)) #plt.plot(X[0,:],X[1,:], '.',ms=1) plt.imshow(Hist,interpolation='nearest') plt.xlabel('# of Rabbits') plt.ylabel('# of Wolfs') plt.gca().invert_yaxis() #plt.axis('square') plt.show() %matplotlib inline import networkx as nx import numpy as np import matplotlib.pylab as plt # Maximum number of rabbits or wolves N = 30 k = np.array([0.005,0.06, 0.001, 0.1]) G = nx.DiGraph() pos = [u for u in product(range(N),range(N))] idx = [u[0]*N+u[1] for u in pos] G.add_nodes_from(idx) edge_colors = [] edges = [] for y,x in product(range(N),range(N)): clover = N - (x+y) source = (x,y) rho = k*np.array([source[0]*clover, source[0], source[0]*source[1], source[1]]) srho = np.sum(rho) if srho==0: srho = 1. 
if x<N-1: # Rabbit Birth target = (x+1,y) edges.append((source[0]*N+source[1], target[0]*N+target[1])) edge_colors.append(rho[0]/srho) if y<N-1 and x>0: # Consumption target = (x-1,y+1) edges.append((source[0]*N+source[1], target[0]*N+target[1])) edge_colors.append(rho[2]/srho) # if y>0: # Wolf Death # target = (x,y-1) # edges.append((source[0]*N+source[1], target[0]*N+target[1])) # edge_colors.append(rho[3]/srho) # if x>0: # Rabbit Death # target = (x-1,y) # edges.append((source[0]*N+source[1], target[0]*N+target[1])) # edge_colors.append(rho[1]/srho) G.add_edges_from(edges) col_dict = {u: c for u,c in zip(edges, edge_colors)} cols = [col_dict[u] for u in G.edges() ] plt.figure(figsize=(5,5)) nx.draw(G, pos, arrows=False, width=2, node_size=20, node_color="white", edge_vmin=0,edge_vmax=0.4, edge_color=cols, edge_cmap=plt.cm.gray_r ) plt.xlabel('# of smileys') plt.ylabel('# of zombies') #plt.gca().set_visible('on') plt.show() Explanation: A simple ecological network Food (Clover): 🍀 Prey (Rabbit): 🐰 Predator (Wolf): 🐺 <h1><center> 🐰🍀 $\rightarrow$ 🐰🐰 </center></h1> <h1><center> 🐰 $\rightarrow$ 🍀 </center></h1> <h1><center> 🐰🐺 $\rightarrow$ 🐺🐺 </center></h1> <h1><center> 🐺 $\rightarrow$ 🍀 </center></h1> The number of objects in this system are constant End of explanation #%matplotlib nbagg %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,0,1],[1,1,0],[0,1,0],[0,1,0]]) B = np.array([[2,0,1],[0,1,0],[0,2,0],[0,0,0]]) k = np.array([4.0,0.038, 0.02, 0.01]) x0 = np.array([50,100,1]) X,T = simulate_skm(A,B,k,x0,STEPS=10000) plt.figure(figsize=(10,5)) plt.plot(T,X[0,:], '.b',ms=2) plt.plot(T,X[1,:], '.r',ms=2) plt.plot(T,X[2,:], '.g',ms=2) plt.legend([u'Rabbit',u'Wolf',u'Clover']) plt.show() Explanation: Alternative model Constant food supply for the prey. <h1><center> 🐰🍀 $\rightarrow$ 🐰🐰🍀 </center></h1> <h1><center> 🐰🐺 $\rightarrow$ 🐺 </center></h1> <h1><center> 🐺 $\rightarrow$ 🐺🐺 </center></h1> <h1><center> 🐺 $\rightarrow$ ☠️ </center></h1> This model is flawed as it allows predators to reproduce even when no prey is there. End of explanation #%matplotlib nbagg %matplotlib inline import numpy as np import matplotlib.pylab as plt death_rate = 1.8 A = np.array([[1,0,0,1],[1,1,0,0],[0,0,1,0],[0,0,1,0],[0,0,1,0],[0,1,0,0]]) B = np.array([[2,0,0,1],[0,0,1,0],[0,1,0,0],[0,2,0,0],[0,0,0,0],[0,0,0,0]]) k = np.array([9.7, 9.5, 30, 3.5, death_rate, death_rate]) x0 = np.array([150,20,10,1]) X,T = simulate_skm(A,B,k,x0,STEPS=5000) plt.figure(figsize=(10,5)) plt.plot(X[0,:], '.b',ms=2) plt.plot(X[1,:], 'or',ms=2) plt.plot(X[2,:], '.r',ms=3) plt.legend([u'Mouse',u'Hungry Cat',u'Happy Cat']) plt.show() Explanation: 🙀 : Hungry cat 😻 : Happy cat <h1><center> 🐭🧀 $\rightarrow$ 🐭🐭🧀 </center></h1> <h1><center> 🐭🙀 $\rightarrow$ 😻 </center></h1> <h1><center> 😻 $\rightarrow$ 🙀 </center></h1> <h1><center> 😻 $\rightarrow$ 🙀🙀 </center></h1> <h1><center> 😻 $\rightarrow$ ☠️ </center></h1> <h1><center> 🙀 $\rightarrow$ ☠️ </center></h1> End of explanation %matplotlib inline import numpy as np Explanation: From Diaconis and Freedman A random walk on the unit interval. Start with $x$, choose one of the two intervals $[0,x]$ and $[x,1]$ with equal probability $0.5$, then choose a new $x$ uniformly on the interval. 
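The accompanying code cell above contains only the matplotlib magic and the numpy import, so here is a minimal added sketch (an illustration, not the author's implementation) of simulating this walk and inspecting its histogram:

```python
import numpy as np
import matplotlib.pylab as plt

# Diaconis-Freedman random walk on [0, 1]: from x, pick [0, x] or [x, 1]
# with probability 1/2 each, then draw the new x uniformly from that interval.
T = 10000
x = np.zeros(T)
x[0] = 0.5
for t in range(T - 1):
    if np.random.rand() < 0.5:
        x[t + 1] = np.random.uniform(0, x[t])    # left interval [0, x]
    else:
        x[t + 1] = np.random.uniform(x[t], 1)    # right interval [x, 1]

plt.hist(x, bins=50, density=True)
plt.xlabel('x')
plt.show()
```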
End of explanation #Diaconis and Freedman fern %matplotlib inline import numpy as np import matplotlib.pylab as plt T = 3000; x = np.matrix(np.zeros((2,T))); x[:,0] = np.matrix('[0.3533; 0]'); A = [np.matrix('[0.444 -0.3733;0.06 0.6000]'), np.matrix('[-0.8 -0.1867;0.1371 0.8]')]; B = [np.matrix('[0.3533;0]'), np.matrix('[1.1;0.1]')]; w = 0.27; for i in range(T-1): if np.random.rand()<w: c = 0; else: c = 1; x[:,i+1] = A[c]*x[:,i] + B[c] plt.figure(figsize=(5,5)) plt.plot(x[0,:],x[1,:], 'k.',ms=1) plt.plot(x[0,0:40].T,x[1,0:40].T, 'k:') plt.axis('equal') plt.show() plt.plot(x[0,0:200].T,x[1,0:200].T, 'k-') plt.axis('equal') plt.show() Explanation: A random switching system \begin{eqnarray} A(0) & = & \left(\begin{array}{cc} 0.444 & -0.3733 \ 0.06 & 0.6000 \end{array}\right) \ B(0) & = & \left(\begin{array}{c} 0.3533 \ 0 \end{array}\right) \ A(1) & = & \left(\begin{array}{cc} -0.8 & -0.1867 \ 0.1371 & 0.8 \end{array}\right) \ B(1) & = & \left(\begin{array}{c} 1.1 \ 0.1 \end{array}\right) \ w & = & 0.2993 \end{eqnarray} \begin{eqnarray} c_t & \sim & \mathcal{BE}(c; w) \ x_t & = & A(c_t) x_{t-1} + B(c_t) \end{eqnarray} End of explanation #%matplotlib nbagg %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,0],[0,1]]) B = np.array([[0,1],[1,0]]) k = np.array([0.5,0.5]) x0 = np.array([0,50]) X,T = simulate_skm(A,B,k,x0,STEPS=10000) plt.figure(figsize=(10,5)) plt.plot(T,X[0,:], '.b',ms=2) plt.plot(T,X[1,:], '.g',ms=2) plt.legend([u'A', u'B']) plt.show() plt.hist(X[0,:],range=(0,np.sum(x0)),bins=np.sum(x0)) plt.show() Explanation: Polya Urn Models Many urn models can be represented as instances of the stochastic kinetic model Mahmoud: Ballot Problem \begin{eqnarray} 2X_1 & \rightarrow & X_1 \ 2X_2 & \rightarrow & X_2 \end{eqnarray} Polya-Eggenberger Urn \begin{eqnarray} X_1 & \rightarrow & s X_1 \ X_2 & \rightarrow & s X_2 \end{eqnarray} Bernard-Friedman Urn \begin{eqnarray} X_1 & \rightarrow & s X_1 + a X_2 \ X_2 & \rightarrow & a X_1 + s X_2 \end{eqnarray} Bagchi-Pal Urn \begin{eqnarray} X_1 & \rightarrow & a X_1 + b X_2 \ X_2 & \rightarrow & c X_1 + d X_2 \end{eqnarray} Ehrenfest \begin{eqnarray} X_1 & \rightarrow & X_2 \ X_2 & \rightarrow & X_1 \ \end{eqnarray} Extended Ehrenfest? \begin{eqnarray} 2 X_1 & \rightarrow & X_1 + X_2 \ 2 X_2 & \rightarrow & X_1 + X_2 \ \end{eqnarray} Ehrenfest End of explanation %matplotlib inline import numpy as np import matplotlib.pylab as plt A = np.array([[1,0],[0,1]]) B = np.array([[2,0],[0,2]]) k = np.array([0.05,0.05]) x0 = np.array([3,1]) X,T = simulate_skm(A,B,k,x0,STEPS=2000) plt.figure(figsize=(10,5)) plt.plot(T,X[0,:]/(X[0,:]+X[1,:]), '.-',ms=2) plt.ylim([0,1]) plt.show() Explanation: Polya End of explanation
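The other urn schemes listed above drop into the same framework. As an added sketch (the rate constants and the values of s and a below are arbitrary illustrations, not taken from the original), the Bernard-Friedman urn can be run through simulate_skm like this:

```python
import numpy as np
import matplotlib.pylab as plt

# Sketch: Bernard-Friedman urn as an SKM, reusing simulate_skm from above.
# Reactions: X1 -> s X1 + a X2 and X2 -> a X1 + s X2.
s, a = 2, 1                        # illustrative replacement numbers
A = np.array([[1, 0], [0, 1]])
B = np.array([[s, a], [a, s]])
k = np.array([1.0, 1.0])           # illustrative rate constants
x0 = np.array([1, 1])              # start with one ball of each color
X, T = simulate_skm(A, B, k, x0, STEPS=500)
plt.plot(T, X[0, :] / (X[0, :] + X[1, :]), '.-', ms=2)
plt.ylim([0, 1])
plt.ylabel('fraction of color-1 balls')
plt.show()
```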
12,802
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting Introduction This tutorial describes skrf's plotting features. If you would like to use skrf's matplotlib interface with skrf styling, start with this Step1: Plotting Methods Plotting functions are implemented as methods of the Network class. Network.plot_s_re Network.plot_s_im Network.plot_s_mag Network.plot_s_db ... Similar methods exist for Impedance (Network.z) and Admittance Parameters (Network.y), Network.plot_z_re Network.plot_z_im ... Network.plot_y_re Network.plot_y_im ... Smith Chart As a first example, load a Network and plot all four s-parameters on the Smith chart. Step2: Another common option is to draw addmitance contours, instead of impedance. This is controled through the chart_type argument. Step3: See skrf.plotting.smith() for more info on customizing the Smith Chart. Complex Plane Network parameters can also be plotted in the complex plane without a Smith Chart through Network.plot_s_complex. Step4: Log-Magnitude Scalar components of the complex network parameters can be plotted vs frequency as well. To plot the log-magnitude of the s-parameters vs. frequency, Step5: When no arguments are passed to the plotting methods, all parameters are plotted. Single parameters can be plotted by passing indices m and n to the plotting commands (indexing start from 0). Comparing the simulated reflection coefficient off the ring slot to a measurement, Step6: Phase Plot phase, Step7: Or unwrapped phase, Step8: Phase is radian (rad) is also available Group Delay A Network has a plot() method which creates a rectangular plot of the argument vs frequency. This can be used to make plots are arent 'canned'. For example group delay Step9: Impedance, Admittance The components the Impendance and Admittance parameters can be plotted similarly, Step10: Customizing Plots The legend entries are automatically filled in with the Network's Network.name. The entry can be overidden by passing the label argument to the plot method. Step11: The frequency unit used on the x-axis is automatically filled in from the Networks Network.frequency.unit attribute. To change the label, change the frequency's unit. Step12: Other key word arguments given to the plotting methods are passed through to the matplotlib matplotlib.pyplot.plot function. Step13: All components of the plots can be customized through matplotlib functions, and styles can be used with a context manager. Step14: Saving Plots Plots can be saved in various file formats using the GUI provided by the matplotlib. However, skrf provides a convenience function, called skrf.plotting.save_all_figs, that allows all open figures to be saved to disk in multiple file formats, with filenames pulled from each figure's title, from skrf.plotting import save_all_figs save_all_figs('data/', format=['png','eps','pdf']) Adding Markers Post Plot A common need is to make a color plot, interpretable in greyscale print. The skrf.plotting.add_markers_to_lines adds different markers each line in a plots after the plot has been made, which is usually when you remember to add them.
Python Code: %matplotlib inline import skrf as rf rf.stylely() Explanation: Plotting Introduction This tutorial describes skrf's plotting features. If you would like to use skrf's matplotlib interface with skrf styling, start with this End of explanation from skrf import Network ring_slot = Network('data/ring slot.s2p') ring_slot.plot_s_smith() ring_slot.plot_s_smith(draw_labels=True) Explanation: Plotting Methods Plotting functions are implemented as methods of the Network class. Network.plot_s_re Network.plot_s_im Network.plot_s_mag Network.plot_s_db ... Similar methods exist for Impedance (Network.z) and Admittance Parameters (Network.y), Network.plot_z_re Network.plot_z_im ... Network.plot_y_re Network.plot_y_im ... Smith Chart As a first example, load a Network and plot all four s-parameters on the Smith chart. End of explanation ring_slot.plot_s_smith(chart_type='y') Explanation: Another common option is to draw addmitance contours, instead of impedance. This is controled through the chart_type argument. End of explanation ring_slot.plot_s_complex() from matplotlib import pyplot as plt plt.axis('equal') # otherwise circles wont be circles Explanation: See skrf.plotting.smith() for more info on customizing the Smith Chart. Complex Plane Network parameters can also be plotted in the complex plane without a Smith Chart through Network.plot_s_complex. End of explanation ring_slot.plot_s_db() Explanation: Log-Magnitude Scalar components of the complex network parameters can be plotted vs frequency as well. To plot the log-magnitude of the s-parameters vs. frequency, End of explanation from skrf.data import ring_slot_meas ring_slot.plot_s_db(m=0,n=0, label='Theory') ring_slot_meas.plot_s_db(m=0,n=0, label='Measurement') Explanation: When no arguments are passed to the plotting methods, all parameters are plotted. Single parameters can be plotted by passing indices m and n to the plotting commands (indexing start from 0). Comparing the simulated reflection coefficient off the ring slot to a measurement, End of explanation ring_slot.plot_s_deg() Explanation: Phase Plot phase, End of explanation ring_slot.plot_s_deg_unwrap() Explanation: Or unwrapped phase, End of explanation gd = abs(ring_slot.s21.group_delay) *1e9 # in ns ring_slot.plot(gd) plt.ylabel('Group Delay (ns)') plt.title('Group Delay of Ring Slot S21') Explanation: Phase is radian (rad) is also available Group Delay A Network has a plot() method which creates a rectangular plot of the argument vs frequency. This can be used to make plots are arent 'canned'. For example group delay End of explanation ring_slot.plot_z_im() ring_slot.plot_y_im() Explanation: Impedance, Admittance The components the Impendance and Admittance parameters can be plotted similarly, End of explanation ring_slot.plot_s_db(m=0,n=0, label = 'Simulation') Explanation: Customizing Plots The legend entries are automatically filled in with the Network's Network.name. The entry can be overidden by passing the label argument to the plot method. End of explanation ring_slot.frequency.unit = 'mhz' ring_slot.plot_s_db(0,0) Explanation: The frequency unit used on the x-axis is automatically filled in from the Networks Network.frequency.unit attribute. To change the label, change the frequency's unit. 
End of explanation ring_slot.frequency.unit='ghz' ring_slot.plot_s_db(m=0,n=0, linewidth = 3, linestyle = '--', label = 'Simulation') ring_slot_meas.plot_s_db(m=0,n=0, marker = 'o', markevery = 10,label = 'Measured') Explanation: Other key word arguments given to the plotting methods are passed through to the matplotlib matplotlib.pyplot.plot function. End of explanation from matplotlib import pyplot as plt from matplotlib import style with style.context('seaborn-ticks'): ring_slot.plot_s_smith() plt.xlabel('Real Part'); plt.ylabel('Imaginary Part'); plt.title('Smith Chart With Legend Room'); plt.axis([-1.1,2.1,-1.1,1.1]) plt.legend(loc=5) Explanation: All components of the plots can be customized through matplotlib functions, and styles can be used with a context manager. End of explanation from skrf import plotting with style.context('printable'): ring_slot.plot_s_deg() plotting.add_markers_to_lines() plt.legend() # have to re-generate legend Explanation: Saving Plots Plots can be saved in various file formats using the GUI provided by the matplotlib. However, skrf provides a convenience function, called skrf.plotting.save_all_figs, that allows all open figures to be saved to disk in multiple file formats, with filenames pulled from each figure's title, from skrf.plotting import save_all_figs save_all_figs('data/', format=['png','eps','pdf']) Adding Markers Post Plot A common need is to make a color plot, interpretable in greyscale print. The skrf.plotting.add_markers_to_lines adds different markers each line in a plots after the plot has been made, which is usually when you remember to add them. End of explanation
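The tutorial mentions `skrf.plotting.save_all_figs` only as an inline snippet. As a small follow-on sketch (not part of the original tutorial), the block below puts two of the plot types shown above side by side in one figure and then saves every open figure; it assumes the same `data/ring slot.s2p` touchstone file is available and relies on skrf drawing onto the current matplotlib axes when no axes object is passed.

```python
import matplotlib.pyplot as plt
import skrf as rf
from skrf.plotting import save_all_figs

ring_slot = rf.Network('data/ring slot.s2p')

plt.figure(figsize=(8, 3.5))
plt.subplot(1, 2, 1)
ring_slot.plot_s_db(m=0, n=0)    # log-magnitude of S11 on the left axes
plt.subplot(1, 2, 2)
ring_slot.plot_s_deg(m=0, n=0)   # phase of S11 on the right axes
plt.tight_layout()

# save every open figure; the directory and format list are just examples
save_all_figs('.', format=['png'])
```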
12,803
Given the following text description, write Python code to implement the functionality described below step by step Description: Time-frequency representations on topographies for MEG sensors Both average power and intertrial coherence are displayed. Step1: Set parameters Step2: Calculate power and intertrial coherence
Python Code: # Authors: Alexandre Gramfort <[email protected]> # Denis Engemann <[email protected]> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne import io from mne.time_frequency import tfr_morlet from mne.datasets import somato print(__doc__) Explanation: Time-frequency representations on topographies for MEG sensors Both average power and intertrial coherence are displayed. End of explanation data_path = somato.data_path() raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif' event_id, tmin, tmax = 1, -1., 3. # Setup for reading the raw data raw = io.Raw(raw_fname) baseline = (None, 0) events = mne.find_events(raw, stim_channel='STI 014') # picks MEG gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6)) Explanation: Set parameters End of explanation freqs = np.arange(6, 30, 3) # define frequencies of interest n_cycles = freqs / 2. # different number of cycle per frequency power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, use_fft=True, return_itc=True, decim=3, n_jobs=1) # Baseline correction can be applied to power or done in plots # To illustrate the baseline correction in plots the next line is commented # power.apply_baseline(baseline=(-0.5, 0), mode='logratio') # Inspect power power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power') power.plot([82], baseline=(-0.5, 0), mode='logratio') fig, axis = plt.subplots(1, 2, figsize=(7, 4)) power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12, baseline=(-0.5, 0), mode='logratio', axes=axis[0], title='Alpha', vmax=0.45) power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25, baseline=(-0.5, 0), mode='logratio', axes=axis[1], title='Beta', vmax=0.45) mne.viz.tight_layout() # Inspect ITC itc.plot_topo(title='Inter-Trial coherence', vmin=0., vmax=1., cmap='Reds') Explanation: Calculate power and intertrial coherence End of explanation
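As a short extension of the example above (not part of the original MNE script): the `power` object returned by `tfr_morlet` is an `AverageTFR`, which exposes the raw array as `power.data` with shape `(n_channels, n_freqs, n_times)` along with `power.freqs` and `power.times`, so a band-limited time course can be pulled out directly. The 8–12 Hz band edges below are only an illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# average power over the alpha band (8-12 Hz), per channel
alpha_mask = (power.freqs >= 8) & (power.freqs <= 12)
alpha_power = power.data[:, alpha_mask, :].mean(axis=1)   # (n_channels, n_times)

plt.figure()
plt.plot(power.times, alpha_power.mean(axis=0))           # grand mean over channels
plt.xlabel('Time (s)')
plt.ylabel('Alpha-band power (a.u.)')
plt.title('Grand-average alpha-band power over time')
plt.show()
```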
12,804
Given the following text description, write Python code to implement the functionality described below step by step Description: Sentiment Classification & How To "Frame Problems" for a Neural Network by Andrew Trask Twitter Step1: Note Step2: Lesson Step3: Project 1 Step4: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words. Step5: TODO Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used. Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews. TODO Step8: Examine the ratios you've calculated for a few words Step9: Looking closely at the values you just calculated, we see the following Step10: Examine the new ratios you've calculated for the same words from before Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments. Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with postive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.) The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).) You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios. Step12: End of Project 1. Watch the next video to see Andrew's solution, then continue on to the next lesson. Transforming Text into Numbers<a id='lesson_3'></a> The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything. Step13: Project 2 Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074 Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer. Step16: TODO Step17: Run the following cell. It should display (1, 74074) Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. 
We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word. Step20: TODO Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0. Step23: TODO Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively. Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively. Step29: End of Project 2. Watch the next video to see Andrew's solution, then continue on to the next lesson. Project 3 Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1. Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from. Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing. Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network. Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network. Step35: With a learning rate of 0.001, the network should finall have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to see Andrew's solution, then continue on to the next lesson. Understanding Neural Noise<a id='lesson_4'></a> The following cells include includes the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything. Step36: Project 4 Step37: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1. Step38: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions. Step39: End of Project 4. Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson. Analyzing Inefficiencies in our Network<a id='lesson_5'></a> The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything. Step40: Project 5 Step41: Run the following cell to recreate the network and train it once again. Step42: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions. Step43: End of Project 5. Watch the next video to see Andrew's solution, then continue on to the next lesson. Further Noise Reduction<a id='lesson_6'></a> Step44: Project 6 Step45: Run the following cell to train your network with a small polarity cutoff. 
Step46: And run the following cell to test its performance. It should be Step47: Run the following cell to train your network with a much larger polarity cutoff. Step48: And run the following cell to test its performance. Step49: End of Project 6. Watch the next video to see Andrew's solution, then continue on to the next lesson. Analysis
Python Code: def pretty_print_review_and_label(i): print(labels[i] + "\t:\t" + reviews[i][:80] + "...") g = open('reviews.txt','r') # What we know! reviews = list(map(lambda x:x[:-1],g.readlines())) g.close() g = open('labels.txt','r') # What we WANT to know! labels = list(map(lambda x:x[:-1].upper(),g.readlines())) g.close() Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network by Andrew Trask Twitter: @iamtrask Blog: http://iamtrask.github.io What You Should Already Know neural networks, forward and back-propagation stochastic gradient descent mean squared error and train/test splits Where to Get Help if You Need it Re-watch previous Udacity Lectures Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code) Shoot me a tweet @iamtrask Tutorial Outline: Intro: The Importance of "Framing a Problem" (this lesson) Curate a Dataset Developing a "Predictive Theory" PROJECT 1: Quick Theory Validation Transforming Text to Numbers PROJECT 2: Creating the Input/Output Data Putting it all together in a Neural Network (video only - nothing in notebook) PROJECT 3: Building our Neural Network Understanding Neural Noise PROJECT 4: Making Learning Faster by Reducing Noise Analyzing Inefficiencies in our Network PROJECT 5: Making our Network Train and Run Faster Further Noise Reduction PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary Analysis: What's going on in the weights? Lesson: Curate a Dataset<a id='lesson_1'></a> The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything. End of explanation len(reviews) reviews[0] labels[0] Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way. End of explanation print("labels.txt \t : \t reviews.txt\n") pretty_print_review_and_label(2137) pretty_print_review_and_label(12816) pretty_print_review_and_label(6267) pretty_print_review_and_label(21934) pretty_print_review_and_label(5297) pretty_print_review_and_label(4998) Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a> End of explanation from collections import Counter import numpy as np Explanation: Project 1: Quick Theory Validation<a id='project_1'></a> There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook. You'll find the Counter class to be useful in this exercise, as well as the numpy library. End of explanation # Create three Counter objects to store positive, negative and total counts positive_counts = Counter() negative_counts = Counter() total_counts = Counter() Explanation: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words. 
End of explanation # TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects #nLoop = 0 for i in range(len(reviews)): #nLoop += 1 #if( nLoop > 1 ): # break; #print(''+str(i)) #print(r) #print(reviews[i].split(' ')) for word in reviews[i].split(' '): total_counts[word] += 1 if( labels[i] == 'POSITIVE' ): positive_counts[word] += 1 else: negative_counts[word] += 1 Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter. Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show. End of explanation # Examine the counts of the most common words in positive reviews positive_counts.most_common() # Examine the counts of the most common words in negative reviews negative_counts.most_common() Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used. End of explanation # Create Counter object to store positive/negative ratios pos_neg_ratios = Counter() # TODO: Calculate the ratios of positive and negative uses of the most common words # Consider words to be "common" if they've been used at least 100 times #l = 0 for word in total_counts: #l += 1 #if l >= 10: # break #print(word, total_counts[word]) if( total_counts[word] >= 100 ): pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word]+1) Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews. TODO: Check all the words you've seen and calculate the ratio of postive to negative uses and store that ratio in pos_neg_ratios. Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews. End of explanation print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"])) print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"])) print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) Explanation: Examine the ratios you've calculated for a few words: End of explanation # TODO: Convert ratios to logs for word in pos_neg_ratios: pos_neg_ratios[word] = np.log(pos_neg_ratios[word]) Explanation: Looking closely at the values you just calculated, we see the following: Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward postive, the farther from 1 its positive-to-negative ratio will be. Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. 
The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be. Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway. Ok, the ratios tell us which words are used more often in postive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons: Right now, 1 is considered neutral, but the absolute value of the postive-to-negative rations of very postive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around netural so the absolute value fro neutral of the postive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys. When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms. TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio)) In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs. End of explanation print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"])) print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"])) print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) Explanation: Examine the new ratios you've calculated for the same words from before: End of explanation # words most frequently seen in a review with a "POSITIVE" label pos_neg_ratios.most_common() # words most frequently seen in a review with a "NEGATIVE" label list(reversed(pos_neg_ratios.most_common()))[0:30] # Note: Above is the code Andrew uses in his solution video, # so we've included it here to avoid confusion. # If you explore the documentation for the Counter class, # you will see you could also find the 30 least common # words like this: pos_neg_ratios.most_common()[:-31:-1] Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments. Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with postive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.) 
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).) You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios. End of explanation from IPython.display import Image review = "This was a horrible, terrible movie." Image(filename='sentiment_network.png') review = "The movie was excellent" Image(filename='sentiment_network_pos.png') Explanation: End of Project 1. Watch the next video to see Andrew's solution, then continue on to the next lesson. Transforming Text into Numbers<a id='lesson_3'></a> The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything. End of explanation # TODO: Create set named "vocab" containing all of the words from all of the reviews vocab = set(total_counts.elements()) len(total_counts) Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a> TODO: Create a set named vocab that contains every word in the vocabulary. End of explanation vocab_size = len(vocab) print(vocab_size) Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074 End of explanation from IPython.display import Image Image(filename='sentiment_network_2.png') Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer. End of explanation # TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros layer_0 = np.zeros([1, len(vocab)]) Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns. End of explanation layer_0.shape from IPython.display import Image Image(filename='sentiment_network.png') Explanation: Run the following cell. It should display (1, 74074) End of explanation # Create a dictionary of words in the vocabulary mapped to index positions # (to be used in layer_0) word2index = {} for i,word in enumerate(vocab): word2index[word] = i # display the map of words to indices word2index Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word. End of explanation def update_input_layer(review): Modify the global layer_0 to represent the vector form of review. The element at a given index of layer_0 should represent how many times the given word occurs in the review. 
Args: review(string) - the string of the review Returns: None global layer_0 # clear out previous state by resetting the layer to be all 0s layer_0 *= 0 # TODO: count how many times each word is used in the given review and store the results in layer_0 for word in list(review.split(' ')): layer_0[0,word2index[word]] += 1 Explanation: TODO: Complete the implementation of update_input_layer. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside layer_0. End of explanation update_input_layer(reviews[0]) layer_0 Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0. End of explanation def get_target_for_label(label): Convert a label to `0` or `1`. Args: label(string) - Either "POSITIVE" or "NEGATIVE". Returns: `0` or `1`. # TODO: Your code here if( label == 'POSITIVE' ): return 1 else: return 0 Explanation: TODO: Complete the implementation of get_target_for_labels. It should return 0 or 1, depending on whether the given label is NEGATIVE or POSITIVE, respectively. End of explanation labels[0] get_target_for_label(labels[0]) Explanation: Run the following two cells. They should print out'POSITIVE' and 1, respectively. End of explanation labels[1] get_target_for_label(labels[1]) Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively. End of explanation import time import sys import numpy as np # Encapsulate our neural network in a class class SentimentNetwork: def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1): Create a SentimenNetwork with the given settings Args: reviews(list) - List of reviews used for training labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews hidden_nodes(int) - Number of nodes to create in the hidden layer learning_rate(float) - Learning rate to use while training # Assign a seed to our random number generator to ensure we get # reproducable results during development np.random.seed(1) # process the reviews and their associated labels so that everything # is ready for training self.pre_process_data(reviews, labels) print('len(self.review_vocab)', len(self.review_vocab)) # Build the network to have the number of hidden nodes and the learning rate that # were passed into this initializer. Make the same number of input nodes as # there are vocabulary words and create a single output node. self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) def pre_process_data(self, reviews, labels): review_vocab = set() # TODO: populate review_vocab with all of the words in the given reviews # Remember to split reviews into individual words # using "split(' ')" instead of "split()". l = list() for r in reviews: l.extend(r.split(' ')) review_vocab = set(l) # Convert the vocabulary set to a list so we can access words via indices self.review_vocab = list(review_vocab) label_vocab = set() # TODO: populate label_vocab with all of the words in the given labels. # There is no need to split the labels because each one is a single word. label_vocab = {'NEGATIVE', 'POSITIVE'} # Convert the label vocabulary set to a list so we can access labels via indices self.label_vocab = list(label_vocab) # Store the sizes of the review and label vocabularies. 
self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) # Create a dictionary of words in the vocabulary mapped to index positions self.word2index = {} # TODO: populate self.word2index with indices for all the words in self.review_vocab # like you saw earlier in the notebook for i,word in enumerate(self.review_vocab): self.word2index[word] = i # Create a dictionary of labels mapped to index positions self.label2index = {} # TODO: do the same thing you did for self.word2index and self.review_vocab, # but for self.label2index and self.label_vocab instead for i,word in enumerate(self.label_vocab): self.label2index[word] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Store the number of nodes in input, hidden, and output layers. print('input_nodes, hidden_nodes, output_nodes', input_nodes, hidden_nodes, output_nodes) self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Store the learning rate self.learning_rate = learning_rate # Initialize weights # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between # the input layer and the hidden layer. self.weights_0_1 = np.zeros([input_nodes, hidden_nodes]) print('self.weights_0_1.shape', self.weights_0_1.shape) # TODO: initialize self.weights_1_2 as a matrix of random values. # These are the weights between the hidden layer and the output layer. self.weights_1_2 = np.random.normal(0.0, self.hidden_nodes**-0.5, (hidden_nodes, output_nodes)) print('self.weights_1_2', self.weights_1_2) # TODO: Create the input layer, a two-dimensional matrix with shape # 1 x input_nodes, with all values initialized to zero self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # TODO: You can copy most of the code you wrote for update_input_layer # earlier in this notebook. # # However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE # THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS. # For example, replace "layer_0 *= 0" with "self.layer_0 *= 0" # clear out previous state by resetting the layer to be all 0s self.layer_0 *= 0 # TODO: count how many times each word is used in the given review and store the results in layer_0 for word in list(review.split(' ')): layer_0[0,word2index[word]] += 1 def get_target_for_label(self,label): # TODO: Copy the code you wrote for get_target_for_label # earlier in this notebook. 
return self.label2index[label] def sigmoid(self,x): # TODO: Return the result of calculating the sigmoid activation function # shown in the lectures return 1 / (1 + np.exp(-x)) def sigmoid_output_2_derivative(self,output): # TODO: Return the derivative of the sigmoid activation function, # where "output" is the original output from the sigmoid fucntion return output * (1 - output) def train(self, training_reviews, training_labels): # make sure out we have a matching number of reviews and labels assert(len(training_reviews) == len(training_labels)) # Keep track of correct predictions to display accuracy during training correct_so_far = 0 # Remember when we started for printing time statistics start = time.time() # loop through all the given reviews and run a forward and backward pass, # updating weights for every item for i in range(len(training_reviews)): # TODO: Get the next review and its correct label self.update_input_layer(training_reviews[i]) label = self.get_target_for_label(training_labels[i]) if( i < 10 ): print('self.layer_0, label', self.layer_0, label) # TODO: Implement the forward pass through the network. # That means use the given review to update the input layer, # then calculate values for the hidden layer, # and finally calculate the output layer. # # Do not use an activation function for the hidden layer, # but use the sigmoid activation function for the output layer. h = np.matmul(self.layer_0, self.weights_0_1) o = self.sigmoid( np.matmul(h, self.weights_1_2) ) # TODO: Implement the back propagation pass here. # That means calculate the error for the forward pass's prediction # and update the weights in the network according to their # contributions toward the error, as calculated via the # gradient descent and back propagation algorithms you # learned in class. delta_o = (label - o) delta_h = np.matmul( delta_o, self.weights_1_2.T ) g_o = h.T * delta_o.T g_h = self.layer_0.T * delta_h # TODO: Keep track of correct predictions. To determine if the prediction was # correct, check that the absolute value of the output error # is less than 0.5. If so, add one to the correct_so_far count. #print('label, o', label, o) if( abs(label - o) < 0.5 ): correct_so_far += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the training process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): #print(i) pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. 
elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): Returns a POSITIVE or NEGATIVE prediction for the given review. # TODO: Run a forward pass through the network, like you did in the # "train" function. That means use the given review to # update the input layer, then calculate values for the hidden layer, # and finally calculate the output layer. # # Note: The review passed into this function for prediction # might come from anywhere, so you should convert it # to lower case prior to using it. # TODO: Get the next review and its correct label self.update_input_layer(review) # TODO: Implement the forward pass through the network. # That means use the given review to update the input layer, # then calculate values for the hidden layer, # and finally calculate the output layer. # # Do not use an activation function for the hidden layer, # but use the sigmoid activation function for the output layer. h = np.matmul(self.layer_0, self.weights_0_1) o = self.sigmoid( np.matmul(h, self.weights_1_2) ) # TODO: The output layer should now contain a prediction. # Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`, # and `NEGATIVE` otherwise. if( o >= 0.5 ): return 'POSITIVE' else: return 'NEGATIVE' Explanation: End of Project 2. Watch the next video to see Andrew's solution, then continue on to the next lesson. Project 3: Building a Neural Network<a id='project_3'></a> TODO: We've included the framework of a class called SentimentNetork. Implement all of the items marked TODO in the code. These include doing the following: - Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs. - Re-use the code from earlier in this notebook to create the training data (see TODOs in the code) - Implement the pre_process_data function to create the vocabulary for our training data generating functions - Ensure train trains over the entire corpus Where to Get Help if You Need it Re-watch earlier Udacity lectures Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code) End of explanation #mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp = SentimentNetwork(reviews[:],labels[:], learning_rate=0.1) Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1. End of explanation mlp.test(reviews[-1000:],labels[-1000:]) Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from. End of explanation mlp.train(reviews[:-1000],labels[:-1000]) Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing. 
End of explanation mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network. End of explanation mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001) mlp.train(reviews[:-1000],labels[:-1000]) Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network. End of explanation from IPython.display import Image Image(filename='sentiment_network.png') def update_input_layer(review): global layer_0 # clear out previous state, reset the layer to be all 0s layer_0 *= 0 for word in review.split(" "): layer_0[0][word2index[word]] += 1 update_input_layer(reviews[0]) layer_0 review_counter = Counter() for word in reviews[0].split(" "): review_counter[word] += 1 review_counter.most_common() Explanation: With a learning rate of 0.001, the network should finall have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to see Andrew's solution, then continue on to the next lesson. Understanding Neural Noise<a id='lesson_4'></a> The following cells include includes the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything. End of explanation # TODO: -Copy the SentimentNetwork class from Projet 3 lesson # -Modify it to reduce noise, like in the video Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a> TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following: * Copy the SentimentNetwork class you created earlier into the following cell. * Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used. End of explanation mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp.train(reviews[:-1000],labels[:-1000]) Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1. End of explanation mlp.test(reviews[-1000:],labels[-1000:]) Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions. End of explanation Image(filename='sentiment_network_sparse.png') layer_0 = np.zeros(10) layer_0 layer_0[4] = 1 layer_0[9] = 1 layer_0 weights_0_1 = np.random.randn(10,5) layer_0.dot(weights_0_1) indices = [4,9] layer_1 = np.zeros(5) for index in indices: layer_1 += (1 * weights_0_1[index]) layer_1 Image(filename='sentiment_network_sparse_2.png') layer_1 = np.zeros(5) for index in indices: layer_1 += (weights_0_1[index]) layer_1 Explanation: End of Project 4. Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson. Analyzing Inefficiencies in our Network<a id='lesson_5'></a> The following cells include the code Andrew shows in the next video. 
We've included it here so you can run the cells along with the video without having to type in everything. End of explanation # TODO: -Copy the SentimentNetwork class from Project 4 lesson # -Modify it according to the above instructions Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a> TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following: * Copy the SentimentNetwork class from the previous project into the following cell. * Remove the update_input_layer function - you will not need it in this version. * Modify init_network: You no longer need a separate input layer, so remove any mention of self.layer_0 You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero Modify train: Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step. At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review. Remove call to update_input_layer Use self's layer_1 instead of a local layer_1 object. In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review. When updating weights_0_1, only update the individual weights that were used in the forward pass. Modify run: Remove call to update_input_layer Use self's layer_1 instead of a local layer_1 object. Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review. End of explanation mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp.train(reviews[:-1000],labels[:-1000]) Explanation: Run the following cell to recreate the network and train it once again. End of explanation mlp.test(reviews[-1000:],labels[-1000:]) Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions. 
End of explanation Image(filename='sentiment_network_sparse_2.png') # words most frequently seen in a review with a "POSITIVE" label pos_neg_ratios.most_common() # words most frequently seen in a review with a "NEGATIVE" label list(reversed(pos_neg_ratios.most_common()))[0:30] from bokeh.models import ColumnDataSource, LabelSet from bokeh.plotting import figure, show, output_file from bokeh.io import output_notebook output_notebook() hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100, normed=True) p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above", title="Word Positive/Negative Affinity Distribution") p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555") show(p) frequency_frequency = Counter() for word, cnt in total_counts.most_common(): frequency_frequency[cnt] += 1 hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100, normed=True) p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above", title="The frequency distribution of the words in our corpus") p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555") show(p) Explanation: End of Project 5. Watch the next video to see Andrew's solution, then continue on to the next lesson. Further Noise Reduction<a id='lesson_6'></a> End of explanation # TODO: -Copy the SentimentNetwork class from Project 5 lesson # -Modify it according to the above instructions Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a> TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following: * Copy the SentimentNetwork class from the previous project into the following cell. * Modify pre_process_data: Add two additional parameters: min_count and polarity_cutoff Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.) Andrew's solution only calculates a postive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times. Change so words are only added to the vocabulary if the absolute value of their postive-to-negative ratio is at least polarity_cutoff Modify __init__: Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data End of explanation mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) Explanation: Run the following cell to train your network with a small polarity cutoff. End of explanation mlp.test(reviews[-1000:],labels[-1000:]) Explanation: And run the following cell to test it's performance. It should be End of explanation mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01) mlp.train(reviews[:-1000],labels[:-1000]) Explanation: Run the following cell to train your network with a much larger polarity cutoff. End of explanation mlp.test(reviews[-1000:],labels[-1000:]) Explanation: And run the following cell to test it's performance. 
End of explanation mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01) mlp_full.train(reviews[:-1000],labels[:-1000]) Image(filename='sentiment_network_sparse.png') def get_most_similar_words(focus = "horrible"): most_similar = Counter() for word in mlp_full.word2index.keys(): most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]]) return most_similar.most_common() get_most_similar_words("excellent") get_most_similar_words("terrible") import matplotlib.colors as colors words_to_visualize = list() for word, ratio in pos_neg_ratios.most_common(500): if(word in mlp_full.word2index.keys()): words_to_visualize.append(word) for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]: if(word in mlp_full.word2index.keys()): words_to_visualize.append(word) pos = 0 neg = 0 colors_list = list() vectors_list = list() for word in words_to_visualize: if word in pos_neg_ratios.keys(): vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]]) if(pos_neg_ratios[word] > 0): pos+=1 colors_list.append("#00ff00") else: neg+=1 colors_list.append("#000000") from sklearn.manifold import TSNE tsne = TSNE(n_components=2, random_state=0) words_top_ted_tsne = tsne.fit_transform(vectors_list) p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above", title="vector T-SNE for most polarized words") source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0], x2=words_top_ted_tsne[:,1], names=words_to_visualize, color=colors_list)) p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color") word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6, text_font_size="8pt", text_color="#555555", source=source, text_align='center') p.add_layout(word_labels) show(p) # green indicates positive words, black indicates negative words Explanation: End of Project 6. Watch the next video to see Andrew's solution, then continue on to the next lesson. Analysis: What's Going on in the Weights?<a id='lesson_7'></a> End of explanation
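The Project 5 cell in this record is left as a TODO ("copy the class and modify it"), so the efficient version of the network never appears here. Below is a minimal sketch of just the core change, reusing the attribute names from the class above (`word2index`, `weights_0_1`, `weights_1_2`): instead of multiplying a mostly-zero `layer_0` by the first weight matrix, sum the weight rows of the words that actually occur in the review — the same trick demonstrated in the "Analyzing Inefficiencies" cells.

```python
import numpy as np

def forward_pass_sparse(review, word2index, weights_0_1, weights_1_2):
    """Forward pass that skips the zero entries of the input layer (sketch)."""
    # unique indices of the known words that occur in this review
    indices = {word2index[w] for w in review.lower().split(' ') if w in word2index}

    # equivalent to layer_0.dot(weights_0_1) when layer_0 holds 0/1 values
    layer_1 = np.zeros((1, weights_0_1.shape[1]))
    for idx in indices:
        layer_1 += weights_0_1[idx]

    # sigmoid output layer, as in the class above
    layer_2 = 1.0 / (1.0 + np.exp(-layer_1.dot(weights_1_2)))
    return layer_1, layer_2
```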
12,805
Given the following text description, write Python code to implement the functionality described below step by step Description: Plot Type Selector Example showing how to construct a dropdown widget that can be used to select a plot type. Here a dictionary must be used for the plot type options. Step1: Load cube. Step2: Compose and sort a dictionary of plot-types and then construct a widget to present them, along with a default option. Display the widget using the IPython display call. Step3: Print the selected widget value for clarity.
Python Code: import ipywidgets import IPython.display import iris import numpy as np import iris.quickplot as iplt import matplotlib.pyplot as plt Explanation: Plot Type Selector Example showing how to construct a dropdown widget that can be used to select a plot type. Here a dictionary must be used for the plot type options. End of explanation cube = iris.load_cube(iris.sample_data_path('A1B.2098.pp')) print cube Explanation: Load cube. End of explanation plot_type_dict = {'contour': iplt.contour, 'contourf': iplt.contourf, 'pcolor': iplt.pcolor, 'outline': iplt.outline, 'pcolormesh': iplt.pcolormesh, 'plot': iplt.plot, 'points': iplt.points} plot_types = plot_type_dict.keys() sorted(plot_types) im = ipywidgets.Dropdown( description='Plot-type:', options=plot_types, value='contour') IPython.display.display(im) Explanation: Compose and sort a dictionary of plot-types and then construct widget to present them, along with a default option. Display the widget using the IPython display call. End of explanation print im.value Explanation: Print selected widget value for clarity. End of explanation
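A possible follow-on to the dropdown above (not part of the original example): wire the selection to an actual redraw with `ipywidgets.interact`, reusing `cube` and `plot_type_dict` from the cells above. Depending on the cube's dimensionality, some entries (for example `plot`, which expects 1-D data) may not apply, so treat this as a sketch rather than a drop-in cell.

```python
import matplotlib.pyplot as plt
import ipywidgets

def draw(plot_type):
    plt.figure(figsize=(6, 4))
    plot_type_dict[plot_type](cube)   # call the chosen iris.quickplot function
    plt.title(plot_type)
    plt.show()

ipywidgets.interact(draw, plot_type=sorted(plot_type_dict.keys()))
```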
12,806
Given the following text description, write Python code to implement the functionality described below step by step Description: Bonus Material Step1: Convert to list of words Step2: Slower version without translate Step3: Using a regular dictionary Step4: Using a default dictionary Step5: Using a Counter Step6: Using third party function Step7: Counting without dictionaries Step8: Vectorized version
Python Code: text = ''''Twas brillig, and the slithy toves Did gyre and gimble in the wabe; All mimsy were the borogoves, And the mome raths outgrabe. 'Beware the Jabberwock, my son! The jaws that bite, the claws that catch! Beware the Jubjub bird, and shun The frumious Bandersnatch!' He took his vorpal sword in hand: Long time the manxome foe he sought-- So rested he by the Tumtum tree, And stood awhile in thought. And as in uffish thought he stood, The Jabberwock, with eyes of flame, Came whiffling through the tulgey wood, And burbled as it came! One, two! One, two! And through and through The vorpal blade went snicker-snack! He left it dead, and with its head He went galumphing back. 'And hast thou slain the Jabberwock? Come to my arms, my beamish boy! O frabjous day! Callooh! Callay!' He chortled in his joy. 'Twas brillig, and the slithy toves Did gyre and gimble in the wabe; All mimsy were the borogoves, And the mome raths outgrabe.''' Explanation: Bonus Material: Word count The word count problem is the 'Hello world' equivalent of distributed programming. Word count is also the basic process by which text is converted into features for text mining and topic modeling. We show a variety of ways to solve the word count problem in Python to familiarize you with different coding approaches. End of explanation import string table = dict.fromkeys(map(ord, string.punctuation)) words = text.translate(table).strip().lower().split() words[:10] Explanation: Convert to list of words End of explanation for char in string.punctuation: text = text.replace(char, '') words2 = text.strip().lower().split() words2[:10] Explanation: Slower version without translate End of explanation c1 = {} for word in words: c1[word] = c1.get(word, 0) + 1 sorted(c1.items(), key=lambda x: x[1], reverse=True)[:3] Explanation: Using a regular dictionary End of explanation from collections import defaultdict c2 = defaultdict(int) for word in words: c2[word] += 1 sorted(c2.items(), key=lambda x: x[1], reverse=True)[:3] Explanation: Using a default dictionary End of explanation from collections import Counter c3 = Counter(words) c3.most_common(3) Explanation: Using a Counter End of explanation from toolz import frequencies c4 = frequencies(words) sorted(c4.items(), key=lambda x: x[1], reverse=True)[:3] Explanation: Using third party function End of explanation from itertools import groupby c5 = map(lambda x: (x[0], sum(1 for item in x[1])), groupby(sorted(words))) sorted(c5, key=lambda x: x[1], reverse=True)[:3] Explanation: Counting without dictionaries End of explanation import numpy as np values, counts = np.unique(words, return_counts=True) c6 = dict(zip(values, counts)) sorted(c6.items(), key=lambda x: x[1], reverse=True)[:3] Explanation: Vectorized version End of explanation
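Since the point of the section is that all of these approaches produce the same counts, a quick consistency check is a natural follow-up. The sketch below assumes the result objects built above (c1, c2, c3, c4 and c6; the generator behind c5 has already been consumed by sorted()) are still in scope, and the helper name counts_agree is ours, not part of any library.

def counts_agree(*counters):
    # Normalise every result (plain dict, defaultdict, Counter, toolz or
    # numpy-based dict) to a plain dict and compare against the first one.
    reference = dict(counters[0])
    return all(dict(c) == reference for c in counters[1:])

# e.g. counts_agree(c1, c2, c3, c4, c6)  # expected to return True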
12,807
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Solutions Problem 1 Implement the Min-Max scaling function ($X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}$) with the parameters Step2: Problem 2 Use tf.placeholder() for features and labels since they are the inputs to the model. Any math operations must have the same type on both sides of the operator. The weights are float32, so the features and labels must also be float32. Use tf.Variable() to allow weights and biases to be modified. The weights must be the dimensions of features by labels. The number of features is the size of the image, 28*28=784. The size of labels is 10. The biases must be the dimensions of the labels, which is 10.
Python Code: # Problem 1 - Implement Min-Max scaling for greyscale image data def normalize_greyscale(image_data): Normalize the image data with Min-Max scaling to a range of [0.1, 0.9] :param image_data: The image data to be normalized :return: Normalized image data a = 0.1 b = 0.9 greyscale_min = 0 greyscale_max = 255 return a + ( ( (image_data - greyscale_min)*(b - a) )/( greyscale_max - greyscale_min ) ) Explanation: Solutions Problem 1 Implement the Min-Max scaling function ($X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}$) with the parameters: $X_{\min }=0$ $X_{\max }=255$ $a=0.1$ $b=0.9$ End of explanation features_count = 784 labels_count = 10 # Problem 2 - Set the features and labels tensors features = tf.placeholder(tf.float32) labels = tf.placeholder(tf.float32) # Problem 2 - Set the weights and biases tensors weights = tf.Variable(tf.truncated_normal((features_count, labels_count))) biases = tf.Variable(tf.zeros(labels_count)) Explanation: Problem 2 Use tf.placeholder() for features and labels since they are the inputs to the model. Any math operations must have the same type on both sides of the operator. The weights are float32, so the features and labels must also be float32. Use tf.Variable() to allow weights and biases to be modified. The weights must be the dimensions of features by labels. The number of features is the size of the image, 28*28=784. The size of labels is 10. The biases must be the dimensions of the labels, which is 10. End of explanation
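Because the scaling formula is easy to get subtly wrong, a quick numerical sanity check is worth adding. The sketch below only assumes the normalize_greyscale() defined above and numpy; the endpoints 0 and 255 should land on 0.1 and 0.9, and a small tolerance absorbs floating-point rounding.

import numpy as np

test_data = np.array([0, 63, 127, 255], dtype=np.float32)
scaled = normalize_greyscale(test_data)

# the extreme greyscale values map to the target interval's endpoints ...
assert np.isclose(scaled[0], 0.1) and np.isclose(scaled[-1], 0.9)
# ... and everything stays inside [0.1, 0.9] up to rounding error
assert np.all((scaled >= 0.1 - 1e-6) & (scaled <= 0.9 + 1e-6))
print(scaled)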
12,808
Given the following text description, write Python code to implement the functionality described below step by step Description: Optimization of Degree Distributions on the BEC This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods. This code illustrates * Using linear programming to optimize degree distributions on the BEC Step1: We specify the check node degree distribution polynomial $\rho(Z)$ by fixing the average check node degree $d_{\mathtt{c},\text{avg}}$ and assuming that the code contains only check nodes with degrees $\tilde{d}{\mathtt{c}} Step2: The following function solves the optimization problem that returns the best $\lambda(Z)$ for a given BEC erasure probability $\epsilon$, for an average check node degree $d_{\mathtt{c},\text{avg}}$, and for a maximum variable node degree $d_{\mathtt{v},\max}$. This optimization problem is derived in the lecture as $$ \begin{aligned} & \underset{\lambda_1,\ldots,\lambda_{d_{\mathtt{v},\max}}}{\text{maximize}} & & \sum_{i=1}^{d_{\mathtt{v},\max}}\frac{\lambda_i}{i} \ & \text{subject to} & & \lambda_1 = 0 \ & & & \lambda_i \geq 0, \quad \forall i \in{2,3,\ldots,d_{\mathtt{v},\max}} \ & & & \sum_{i=2}^{d_{\mathtt{v},\max}}\lambda_i = 1 \ & & & \sum_{i=2}^{d_{\mathtt{v},\max}}\lambda_i\cdot \epsilon(1-\rho(1-\tilde{\xi}j))^{i-1}-\tilde{\xi}_j \leq 0,\quad \forall j \in {1,\ldots, D} \ & & & \lambda_2 \leq \frac{1}{\epsilon\rho^\prime(1)} = \frac{1}{\epsilon\sum{i=2}^{d_{\mathtt{c},\max}}(i-1)\rho_i} \end{aligned} $$ If this optimization problem is feasible, then the function returns the polynomial $\lambda(Z)$ as a coefficient array where the first entry corresponds to the largest exponent ($\lambda_{d_{\mathtt{v},\max}}$) and the last entry to the lowest exponent ($\lambda_1$). If the optimization problem has no solution (e.g., it is unfeasible), then the empty vector is returned. Step3: As an example, we consider the case of optimization carried out in the lecture after 9 iterations, where we have $\epsilon = 0.2949219$ and $d_{\mathtt{c},\text{avg}} = 12.98$ with $d_{\mathtt{v},\max}=16$ Step4: In the following, we provide an interactive widget that allows you to choose the parameters of the optimization yourself and get the best possible $\lambda(Z)$. Additionally, the EXIT chart is plotted to visualize the good fit of the obtained degree distribution. Step5: Now, we carry out the optimization over a wide range of $d_{\mathtt{c},\text{avg}}$ values for a given $\epsilon$ and find the largest possible rate. Step6: Run binary search to find best irregular code for a given target rate on the BEC.
Python Code: import cvxpy as cp import numpy as np import matplotlib.pyplot as plot from ipywidgets import interactive import ipywidgets as widgets import math %matplotlib inline Explanation: Optimization of Degree Distributions on the BEC This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods. This code illustrates * Using linear programming to optimize degree distributions on the BEC End of explanation # returns rho polynomial (highest exponents first) corresponding to average check node degree c_avg def c_avg_to_rho(c_avg): ct = math.floor(c_avg) r1 = ct*(ct+1-c_avg)/c_avg r2 = (c_avg - ct*(ct+1-c_avg))/c_avg rho_poly = np.concatenate(([r2,r1], np.zeros(ct-1))) return rho_poly Explanation: We specify the check node degree distribution polynomial $\rho(Z)$ by fixing the average check node degree $d_{\mathtt{c},\text{avg}}$ and assuming that the code contains only check nodes with degrees $\tilde{d}{\mathtt{c}} := \lfloor d{\mathtt{c},\text{avg}}\rfloor$ and $\tilde{d}{\mathtt{c}}+1$. This is the so-called check-concentrated degree distribution. As shown in the lecture, we have: $$ \rho(Z) = \frac{\tilde{d}{\mathtt{c}}(\tilde{d}{\mathtt{c}}+1-d{\mathtt{c},\text{avg}})}{d_{\mathtt{c},\text{avg}}}Z^{\tilde{d}{\mathtt{c}}-1} + \frac{d{\mathtt{c},\text{avg}}-\tilde{d}{\mathtt{c}}(\tilde{d}{\mathtt{c}}+1-d_{\mathtt{c},\text{avg}})}{d_{\mathtt{c},\text{avg}}}Z^{\tilde{d}{\mathtt{c}}} $$ The following function converts $d{\mathtt{c},\text{avg}}$ into a polynomial $\rho(Z)$ which is given as an array where the first entry corresponds to the largest exponents and the last entry corresponds to the constant part. End of explanation def find_best_lambda(epsilon, v_max, c_avg): rho = c_avg_to_rho(c_avg) # quantization of fixed-point condition D = 500 xi_range = np.arange(1.0, D+1, 1)/D # Variable to optimize is lambda with v_max entries v_lambda = cp.Variable(shape=v_max) # objective function cv = 1/np.arange(v_max,0,-1) objective = cp.Maximize(v_lambda @ cv) # constraints # constraint 1, v_lambda are fractions between 0 and 1 and sum up to 1 constraints = [cp.sum(v_lambda) == 1, v_lambda >= 0] # constraint 2, no variable nodes of degree 1 constraints += [v_lambda[v_max-1] == 0] # constraints 3, fixed point condition for all the descrete xi values (a total number of D, for each \xi) for xi in xi_range: constraints += [v_lambda @ [epsilon * (1-np.polyval(rho,1.0-xi))**(v_max-1-j) for j in range(v_max)] - xi <= 0] # constraint 4, stability condition constraints += [v_lambda[v_max-2] <= 1/epsilon/np.polyval(np.polyder(rho),1.0)] # set up the problem and solve problem = cp.Problem(objective, constraints) problem.solve() if problem.status == "optimal": r_lambda = v_lambda.value # remove entries close to zero and renormalize r_lambda[r_lambda <= 1e-7] = 0 r_lambda = r_lambda / sum(r_lambda) else: r_lambda = np.array([]) return r_lambda Explanation: The following function solves the optimization problem that returns the best $\lambda(Z)$ for a given BEC erasure probability $\epsilon$, for an average check node degree $d_{\mathtt{c},\text{avg}}$, and for a maximum variable node degree $d_{\mathtt{v},\max}$. 
This optimization problem is derived in the lecture as $$ \begin{aligned} & \underset{\lambda_1,\ldots,\lambda_{d_{\mathtt{v},\max}}}{\text{maximize}} & & \sum_{i=1}^{d_{\mathtt{v},\max}}\frac{\lambda_i}{i} \ & \text{subject to} & & \lambda_1 = 0 \ & & & \lambda_i \geq 0, \quad \forall i \in{2,3,\ldots,d_{\mathtt{v},\max}} \ & & & \sum_{i=2}^{d_{\mathtt{v},\max}}\lambda_i = 1 \ & & & \sum_{i=2}^{d_{\mathtt{v},\max}}\lambda_i\cdot \epsilon(1-\rho(1-\tilde{\xi}j))^{i-1}-\tilde{\xi}_j \leq 0,\quad \forall j \in {1,\ldots, D} \ & & & \lambda_2 \leq \frac{1}{\epsilon\rho^\prime(1)} = \frac{1}{\epsilon\sum{i=2}^{d_{\mathtt{c},\max}}(i-1)\rho_i} \end{aligned} $$ If this optimization problem is feasible, then the function returns the polynomial $\lambda(Z)$ as a coefficient array where the first entry corresponds to the largest exponent ($\lambda_{d_{\mathtt{v},\max}}$) and the last entry to the lowest exponent ($\lambda_1$). If the optimization problem has no solution (e.g., it is unfeasible), then the empty vector is returned. End of explanation best_lambda = find_best_lambda(0.2949219, 16, 12.98) print(np.poly1d(best_lambda, variable='Z')) Explanation: As an example, we consider the case of optimization carried out in the lecture after 9 iterations, where we have $\epsilon = 0.2949219$ and $d_{\mathtt{c},\text{avg}} = 12.98$ with $d_{\mathtt{v},\max}=16$ End of explanation def best_lambda_interactive(epsilon, c_avg, v_max): # get lambda and rho polynomial from optimization and from c_avg, respectively p_lambda = find_best_lambda(epsilon, v_max, c_avg) p_rho = c_avg_to_rho(c_avg) # if optimization successful, compute rate and show plot if p_lambda.size == 0: print('Optimization infeasible, no solution found') else: design_rate = 1 - np.polyval(np.polyint(p_rho),1)/np.polyval(np.polyint(p_lambda),1) if design_rate <= 0: print('Optimization feasible, but no code with positive rate found') else: print("Lambda polynomial:") print(np.poly1d(p_lambda, variable='Z')) print("Design rate r_d = %1.3f" % design_rate) # Plot EXIT-Chart print("EXIT Chart:") plot.figure(3) x = np.linspace(0, 1, num=100) y_v = [1 - epsilon*np.polyval(p_lambda, 1-xv) for xv in x] y_c = [np.polyval(p_rho,xv) for xv in x] plot.plot(x, y_v, '#7030A0') plot.plot(y_c, x, '#008000') plot.axis('equal') plot.gca().set_aspect('equal', adjustable='box') plot.xlim(0,1) plot.ylim(0,1) plot.xlabel('$I^{[A,V]}$, $I^{[E,C]}$') plot.ylabel('$I^{[E,V]}$, $I^{[A,C]}$') plot.grid() plot.show() interactive_plot = interactive(best_lambda_interactive, \ epsilon=widgets.FloatSlider(min=0.01,max=1,step=0.001,value=0.5, continuous_update=False, description=r'\(\epsilon\)',layout=widgets.Layout(width='50%')), \ c_avg = widgets.FloatSlider(min=3,max=20,step=0.1,value=4, continuous_update=False, description=r'\(d_{\mathtt{c},\text{avg}}\)'), \ v_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\(d_{\mathtt{v},\max}\)')) output = interactive_plot.children[-1] output.layout.height = '400px' interactive_plot Explanation: In the following, we provide an interactive widget that allows you to choose the parameters of the optimization yourself and get the best possible $\lambda(Z)$. Additionally, the EXIT chart is plotted to visualize the good fit of the obtained degree distribution. 
End of explanation def find_best_rate(epsilon, v_max, c_max): c_range = np.linspace(3, c_max, num=100) rates = np.zeros_like(c_range) # loop over all c_avg, add progress bar f = widgets.FloatProgress(min=0, max=np.size(c_range)) display(f) for index,c_avg in enumerate(c_range): f.value += 1 p_lambda = find_best_lambda(epsilon, v_max, c_avg) p_rho = c_avg_to_rho(c_avg) if np.array(p_lambda).size > 0: design_rate = 1 - np.polyval(np.polyint(p_rho),1)/np.polyval(np.polyint(p_lambda),1) if design_rate >= 0: rates[index] = design_rate # find largest rate largest_rate_index = np.argmax(rates) best_lambda = find_best_lambda(epsilon, v_max, c_range[largest_rate_index]) print("Found best code of rate %1.3f for average check node degree of %1.2f" % (rates[largest_rate_index], c_range[largest_rate_index])) print("Corresponding lambda polynomial") print(np.poly1d(best_lambda, variable='Z')) # Plot curve with all obtained results plot.figure(4, figsize=(10,3)) plot.plot(c_range, rates, 'b') plot.plot(c_range[largest_rate_index], rates[largest_rate_index], 'bs') plot.xlim(3, c_max) plot.ylim(0, (1.1*(1-epsilon))) plot.xlabel('$d_{c,avg}$') plot.ylabel('design rate $r_d$') plot.grid() plot.show() return rates[largest_rate_index] interactive_optim = interactive(find_best_rate, \ epsilon=widgets.FloatSlider(min=0.01,max=1,step=0.001,value=0.5, continuous_update=False, description=r'\(\epsilon\)',layout=widgets.Layout(width='50%')), \ v_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\(d_{\mathtt{v},\max}\)'), \ c_max = widgets.IntSlider(min=3, max=40, step=1, value=22, continuous_update=False, description=r'\(d_{\mathtt{c},\max}\)')) output = interactive_optim.children[-1] output.layout.height = '400px' interactive_optim Explanation: Now, we carry out the optimization over a wide range of $d_{\mathtt{c},\text{avg}}$ values for a given $\epsilon$ and find the largest possible rate. End of explanation target_rate = 0.7 dv_max = 16 dc_max = 22 T_Delta = 0.001 epsilon = 0.5 Delta_epsilon = 0.5 while Delta_epsilon >= T_Delta: print('Running optimization for epsilon = %1.5f' % epsilon) rate = find_best_rate(epsilon, dv_max, dc_max) if rate > target_rate: epsilon = epsilon + Delta_epsilon / 2 else: epsilon = epsilon - Delta_epsilon / 2 Delta_epsilon = Delta_epsilon / 2 Explanation: Run binary search to find best irregular code for a given target rate on the BEC. End of explanation
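One quantity the cells above recompute inline several times is the design rate. A small helper makes that calculation easier to reuse; the sketch below assumes the c_avg_to_rho() and find_best_lambda() functions defined above and reuses the epsilon = 0.2949219 example from the text.

import numpy as np

def design_rate(p_lambda, p_rho):
    # r_d = 1 - (integral of rho over [0,1]) / (integral of lambda over [0,1]),
    # with both polynomials stored highest-exponent-first as in the code above.
    return 1.0 - np.polyval(np.polyint(p_rho), 1.0) / np.polyval(np.polyint(p_lambda), 1.0)

p_rho = c_avg_to_rho(12.98)
p_lambda = find_best_lambda(0.2949219, 16, 12.98)
if p_lambda.size > 0:
    print('design rate r_d = %1.3f' % design_rate(p_lambda, p_rho))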
12,809
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Aerosol MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Meteorological Forcings 5. Key Properties --&gt; Resolution 6. Key Properties --&gt; Tuning Applied 7. Transport 8. Emissions 9. Concentrations 10. Optical Radiative Properties 11. Optical Radiative Properties --&gt; Absorption 12. Optical Radiative Properties --&gt; Mixtures 13. Optical Radiative Properties --&gt; Impact Of H2o 14. Optical Radiative Properties --&gt; Radiative Scheme 15. Optical Radiative Properties --&gt; Cloud Interactions 16. Model 1. Key Properties Key properties of the aerosol model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step12: 2.2. Code Version Is Required Step13: 2.3. Code Languages Is Required Step14: 3. Key Properties --&gt; Timestep Framework Physical properties of seawater in ocean 3.1. Method Is Required Step15: 3.2. Split Operator Advection Timestep Is Required Step16: 3.3. Split Operator Physical Timestep Is Required Step17: 3.4. Integrated Timestep Is Required Step18: 3.5. Integrated Scheme Type Is Required Step19: 4. Key Properties --&gt; Meteorological Forcings ** 4.1. Variables 3D Is Required Step20: 4.2. Variables 2D Is Required Step21: 4.3. Frequency Is Required Step22: 5. Key Properties --&gt; Resolution Resolution in the aersosol model grid 5.1. Name Is Required Step23: 5.2. Canonical Horizontal Resolution Is Required Step24: 5.3. Number Of Horizontal Gridpoints Is Required Step25: 5.4. Number Of Vertical Levels Is Required Step26: 5.5. Is Adaptive Grid Is Required Step27: 6. Key Properties --&gt; Tuning Applied Tuning methodology for aerosol model 6.1. Description Is Required Step28: 6.2. Global Mean Metrics Used Is Required Step29: 6.3. Regional Metrics Used Is Required Step30: 6.4. Trend Metrics Used Is Required Step31: 7. Transport Aerosol transport 7.1. Overview Is Required Step32: 7.2. Scheme Is Required Step33: 7.3. Mass Conservation Scheme Is Required Step34: 7.4. Convention Is Required Step35: 8. Emissions Atmospheric aerosol emissions 8.1. Overview Is Required Step36: 8.2. Method Is Required Step37: 8.3. Sources Is Required Step38: 8.4. Prescribed Climatology Is Required Step39: 8.5. Prescribed Climatology Emitted Species Is Required Step40: 8.6. Prescribed Spatially Uniform Emitted Species Is Required Step41: 8.7. Interactive Emitted Species Is Required Step42: 8.8. Other Emitted Species Is Required Step43: 8.9. Other Method Characteristics Is Required Step44: 9. Concentrations Atmospheric aerosol concentrations 9.1. Overview Is Required Step45: 9.2. Prescribed Lower Boundary Is Required Step46: 9.3. Prescribed Upper Boundary Is Required Step47: 9.4. Prescribed Fields Mmr Is Required Step48: 9.5. Prescribed Fields Mmr Is Required Step49: 10. 
Optical Radiative Properties Aerosol optical and radiative properties 10.1. Overview Is Required Step50: 11. Optical Radiative Properties --&gt; Absorption Absortion properties in aerosol scheme 11.1. Black Carbon Is Required Step51: 11.2. Dust Is Required Step52: 11.3. Organics Is Required Step53: 12. Optical Radiative Properties --&gt; Mixtures ** 12.1. External Is Required Step54: 12.2. Internal Is Required Step55: 12.3. Mixing Rule Is Required Step56: 13. Optical Radiative Properties --&gt; Impact Of H2o ** 13.1. Size Is Required Step57: 13.2. Internal Mixture Is Required Step58: 14. Optical Radiative Properties --&gt; Radiative Scheme Radiative scheme for aerosol 14.1. Overview Is Required Step59: 14.2. Shortwave Bands Is Required Step60: 14.3. Longwave Bands Is Required Step61: 15. Optical Radiative Properties --&gt; Cloud Interactions Aerosol-cloud interactions 15.1. Overview Is Required Step62: 15.2. Twomey Is Required Step63: 15.3. Twomey Minimum Ccn Is Required Step64: 15.4. Drizzle Is Required Step65: 15.5. Cloud Lifetime Is Required Step66: 15.6. Longwave Bands Is Required Step67: 16. Model Aerosol model 16.1. Overview Is Required Step68: 16.2. Processes Is Required Step69: 16.3. Coupling Is Required Step70: 16.4. Gas Phase Precursors Is Required Step71: 16.5. Scheme Type Is Required Step72: 16.6. Bulk Scheme Species Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-2', 'aerosol') Explanation: ES-DOC CMIP6 Model Properties - Aerosol MIP Era: CMIP6 Institute: NIMS-KMA Source ID: SANDBOX-2 Topic: Aerosol Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. Properties: 69 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:28 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Meteorological Forcings 5. Key Properties --&gt; Resolution 6. Key Properties --&gt; Tuning Applied 7. Transport 8. Emissions 9. Concentrations 10. Optical Radiative Properties 11. Optical Radiative Properties --&gt; Absorption 12. Optical Radiative Properties --&gt; Mixtures 13. Optical Radiative Properties --&gt; Impact Of H2o 14. Optical Radiative Properties --&gt; Radiative Scheme 15. Optical Radiative Properties --&gt; Cloud Interactions 16. Model 1. Key Properties Key properties of the aerosol model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of aerosol model code End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/volume ratio for aerosols" # "3D number concenttration for aerosols" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Prognostic variables in the aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of tracers in the aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are aerosol calculations generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses atmospheric chemistry time stepping" # "Specific timestepping (operator splitting)" # "Specific timestepping (integrated)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestep Framework Physical properties of seawater in ocean 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the time evolution of the prognostic variables End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. 
Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for aerosol physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the aerosol model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.5. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Meteorological Forcings ** 4.1. Variables 3D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Three dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Variables 2D Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Two dimensionsal forcing variables, e.g. land-sea mask definition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Frequency Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Frequency with which meteological forcings are applied (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Resolution Resolution in the aersosol model grid 5.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 5.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 5.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for aerosol model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Transport Aerosol transport 7.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of transport in atmosperic aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Specific transport scheme (eulerian)" # "Specific transport scheme (semi-lagrangian)" # "Specific transport scheme (eulerian and semi-lagrangian)" # "Specific transport scheme (lagrangian)" # TODO - please enter value(s) Explanation: 7.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for aerosol transport modeling End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Mass adjustment" # "Concentrations positivity" # "Gradients monotonicity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7.3. Mass Conservation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to ensure mass conservation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.convention') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Convective fluxes connected to tracers" # "Vertical velocities connected to tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7.4. Convention Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Transport by convention End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Emissions Atmospheric aerosol emissions 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of emissions in atmosperic aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Prescribed (climatology)" # "Prescribed CMIP6" # "Prescribed above surface" # "Interactive" # "Interactive above surface" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method used to define aerosol species (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Volcanos" # "Bare ground" # "Sea surface" # "Lightning" # "Fires" # "Aircraft" # "Anthropogenic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the aerosol species are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Interannual" # "Annual" # "Monthly" # "Daily" # TODO - please enter value(s) Explanation: 8.4. Prescribed Climatology Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify the climatology type for aerosol emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed via a climatology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of aerosol species emitted and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Other Method Characteristics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Characteristics of the &quot;other method&quot; used for aerosol emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Concentrations Atmospheric aerosol concentrations 9.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of concentrations in atmosperic aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as mass mixing ratios. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Prescribed Fields Mmr Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as AOD plus CCNs. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Optical Radiative Properties Aerosol optical and radiative properties 10.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of optical and radiative properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11. Optical Radiative Properties --&gt; Absorption Absortion properties in aerosol scheme 11.1. Black Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Dust Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.3. Organics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Optical Radiative Properties --&gt; Mixtures ** 12.1. External Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there external mixing with respect to chemical composition? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12.2. Internal Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there internal mixing with respect to chemical composition? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Mixing Rule Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If there is internal mixing with respect to chemical composition then indicate the mixinrg rule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13. Optical Radiative Properties --&gt; Impact Of H2o ** 13.1. Size Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact size? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.2. Internal Mixture Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact internal mixture? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Optical Radiative Properties --&gt; Radiative Scheme Radiative scheme for aerosol 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Shortwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of shortwave bands End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Optical Radiative Properties --&gt; Cloud Interactions Aerosol-cloud interactions 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of aerosol-cloud interactions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.2. Twomey Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the Twomey effect included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Twomey Minimum Ccn Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the Twomey effect is included, then what is the minimum CCN number? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.4. Drizzle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect drizzle? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Cloud Lifetime Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the scheme affect cloud lifetime? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.6. Longwave Bands Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of longwave bands End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Model Aerosol model 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosperic aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dry deposition" # "Sedimentation" # "Wet deposition (impaction scavenging)" # "Wet deposition (nucleation scavenging)" # "Coagulation" # "Oxidation (gas phase)" # "Oxidation (in cloud)" # "Condensation" # "Ageing" # "Advection (horizontal)" # "Advection (vertical)" # "Heterogeneous chemistry" # "Nucleation" # TODO - please enter value(s) Explanation: 16.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the Aerosol model. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Radiation" # "Land surface" # "Heterogeneous chemistry" # "Clouds" # "Ocean" # "Cryosphere" # "Gas phase chemistry" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.3. Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other model components coupled to the Aerosol model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.gas_phase_precursors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "DMS" # "SO2" # "Ammonia" # "Iodine" # "Terpene" # "Isoprene" # "VOC" # "NOx" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.4. Gas Phase Precursors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of gas phase aerosol precursors. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Bulk" # "Modal" # "Bin" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.5. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.bulk_scheme_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon / soot" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.6. Bulk Scheme Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of species covered by the bulk scheme. End of explanation
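The record above is almost entirely the same two-call pattern (DOC.set_id() followed by DOC.set_value()). Below is a hedged sketch of how that repetition could be driven from a dictionary; it assumes nothing about pyesdoc beyond the two methods already used above, and the values are illustrative placeholders rather than a real model description.

def apply_properties(doc, properties):
    # Apply a mapping of CMIP6 property ids to values using the same
    # set_id() / set_value() calls as the cells above.
    for property_id, value in properties.items():
        doc.set_id(property_id)
        doc.set_value(value)

# Placeholder values only -- replace with the actual model properties.
aerosol_properties = {
    'cmip6.aerosol.key_properties.model_name': 'Example aerosol scheme',
    'cmip6.aerosol.key_properties.number_of_tracers': 7,
    'cmip6.aerosol.key_properties.family_approach': False,
}
# e.g. apply_properties(DOC, aerosol_properties)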
12,810
Given the following text description, write Python code to implement the functionality described below step by step Description: eICU Collaborative Research Database Notebook 3 Step2: 2. Display a list of tables Step4: 3. Selecting a single patient stay 3.1. The patient table The patient table includes general information about the patient admissions (for example, demographics, admission and discharge details). See Step6: 3.2. The vitalperiodic table The vitalperiodic table comprises data that is consistently interfaced from bedside vital signs monitors into eCareManager. Data are generally interfaced as 1 minute averages, and archived into the vitalperiodic table as 5 minute median values. For more detail, see Step8: Questions Which variables are available for this patient? What is the peak heart rate during the period? 3.3. The vitalaperiodic table The vitalAperiodic table provides invasive vital sign data that is recorded at irregular intervals. See Step10: Questions What do the non-invasive variables measure? How do you think the mean is calculated? 3.4. The lab table
Python Code: # Import libraries import pandas as pd import matplotlib.pyplot as plt import sqlite3 import os # Plot settings %matplotlib inline plt.style.use('ggplot') fontsize = 20 # size for x and y ticks plt.rcParams['legend.fontsize'] = fontsize plt.rcParams.update({'font.size': fontsize}) # Connect to the database - which is assumed to be in the current directory fn = 'eicu_demo.sqlite3' con = sqlite3.connect(fn) cur = con.cursor() Explanation: eICU Collaborative Research Database Notebook 3: Plot timeseries data for a single patient stay The aim of this notebook is to create a series of plots using timeseries data available for a single patient stay, using the following tables: patient vitalperiodic vitalaperiodic lab Before starting, you will need to install a copy of the eICU Collaborative Research Database. For instructions on installing the database, see: . Documentation on the eICU Collaborative Research Database can be found at: http://eicu-crd.mit.edu/. 1. Getting set up End of explanation query = \ SELECT type, name FROM sqlite_master WHERE type='table' ORDER BY name; list_of_tables = pd.read_sql_query(query,con) list_of_tables Explanation: 2. Display a list of tables End of explanation # select a single ICU stay patientunitstayid = 141296 # query to load data from the patient table query = \ SELECT * FROM patient WHERE patientunitstayid = {} .format(patientunitstayid) print(query) # run the query and assign the output to a variable unitstay = pd.read_sql_query(query,con) # display the first few rows of the dataframe unitstay.head() Explanation: 3. Selecting a single patient stay 3.1. The patient table The patient table includes general information about the patient admissions (for example, demographics, admission and discharge details). See: http://eicu-crd.mit.edu/eicutables/patient/ End of explanation # query to load data from the vitalperiodic table query = \ SELECT * FROM vitalperiodic WHERE patientunitstayid = {} .format(patientunitstayid) print(query) # run the query and assign the output to a variable vitalperiodic = pd.read_sql_query(query,con) # display the first few rows of the dataframe vitalperiodic.head() # display a full list of columns vitalperiodic.columns # sort the values by the observationoffset (time in minutes from ICU admission) vitalperiodic = vitalperiodic.sort_values(by='observationoffset') vitalperiodic.head() # subselect the variable columns columns = ['observationoffset','temperature','sao2','heartrate','respiration', 'cvp','etco2','systemicsystolic','systemicdiastolic','systemicmean', 'pasystolic','padiastolic','pamean','st1','st2','st3','icp'] vitalperiodic = vitalperiodic[columns].set_index('observationoffset') vitalperiodic.head() # plot the data figsize = (18,8) title = 'Vital signs (periodic) for patientunitstayid = {} \n'.format(patientunitstayid) ax = vitalperiodic.plot(title=title, figsize=figsize, fontsize=fontsize, marker='o') ax.title.set_size(fontsize) ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5)) ax.set_xlabel("Minutes after admission to the ICU") ax.set_ylabel("Absolute value") Explanation: 3.2. The vitalperiodic table The vitalperiodic table comprises data that is consistently interfaced from bedside vital signs monitors into eCareManager. Data are generally interfaced as 1 minute averages, and archived into the vitalperiodic table as 5 minute median values. 
For more detail, see: http://eicu-crd.mit.edu/eicutables/vitalPeriodic/ End of explanation # query to load data from the patient table query = \ SELECT * FROM vitalaperiodic WHERE patientunitstayid = {} .format(patientunitstayid) print(query) # run the query and assign the output to a variable vitalaperiodic = pd.read_sql_query(query,con) # display the first few rows of the dataframe vitalaperiodic.head() vitalaperiodic.columns # sort the values by the observationoffset (time in minutes from ICU admission) vitalaperiodic = vitalaperiodic.sort_values(by='observationoffset') vitalaperiodic.head() # subselect the variable columns columns = ['observationoffset','noninvasivesystolic','noninvasivediastolic', 'noninvasivemean','paop','cardiacoutput','cardiacinput','svr', 'svri','pvr','pvri'] vitalaperiodic = vitalaperiodic[columns].set_index('observationoffset') vitalaperiodic.head() # plot the data figsize = (18,8) title = 'Vital signs (aperiodic) for patientunitstayid = {} \n'.format(patientunitstayid) ax = vitalaperiodic.plot(title=title, figsize=figsize, fontsize=fontsize, marker='o') ax.title.set_size(fontsize) ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5)) ax.set_xlabel("Minutes after admission to the ICU") ax.set_ylabel("Absolute value") Explanation: Questions Which variables are available for this patient? What is the peak heart rate during the period? 3.3. The vitalaperiodic table The vitalAperiodic table provides invasive vital sign data that is recorded at irregular intervals. See: http://eicu-crd.mit.edu/eicutables/vitalAperiodic/ End of explanation # query to load data from the patient table query = \ SELECT * FROM lab WHERE patientunitstayid = {} .format(patientunitstayid) print(query) # run the query and assign the output to a variable lab = pd.read_sql_query(query,con) # display the first few rows of the dataframe lab.head() # list columns in the table lab.columns # sort the values by the offset time (time in minutes from ICU admission) lab = lab.sort_values(by='labresultoffset') lab.head() # set the index to the offset time lab = lab.set_index('labresultoffset') lab.head() # subselect the variable columns columns = ['labname','labresult','labmeasurenamesystem'] lab = lab[columns] lab.head() # list the distinct labnames lab['labname'].unique() # pivot the lab table to put variables into columns lab = lab.pivot(columns='labname', values='labresult') lab.head() # plot laboratory tests of interest labs_to_plot = ['creatinine','pH','Hgb', 'total bilirubin', 'potassium', 'Tacrolimus-FK506', 'WBC x 1000'] lab[labs_to_plot].head() # plot the data figsize = (18,8) title = 'Laboratory test results for patientunitstayid = {} \n'.format(patientunitstayid) ax = lab[labs_to_plot].plot(title=title, figsize=figsize, fontsize=fontsize, marker='o',ms=10, lw=0) ax.title.set_size(fontsize) ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5)) ax.set_xlabel("Minutes after admission to the ICU") ax.set_ylabel("Absolute value") Explanation: Questions What do the non-invasive variables measure? How do you think the mean is calculated? 3.4. The lab table End of explanation
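To close out the "Questions" posed above, here is a small added sketch (not part of the original notebook) that answers them directly from the vitalperiodic dataframe built earlier; it assumes the dataframe is still indexed by observationoffset, as in the plotting cell.

# Added sketch: answer the "Questions" sections programmatically
# 1) which vital-sign columns actually contain data for this stay?
available = vitalperiodic.columns[vitalperiodic.notnull().any()].tolist()
print('Variables with at least one recorded value:', available)

# 2) peak heart rate during the period, and when it occurred
peak_hr = vitalperiodic['heartrate'].max()
peak_time = vitalperiodic['heartrate'].idxmax()
print('Peak heart rate: {:.0f} bpm at {:.0f} minutes after ICU admission'.format(peak_hr, peak_time))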
12,811
Given the following text description, write Python code to implement the functionality described below step by step Description: This Notebook illustrates the usage of the OpenMC Python API's generic eigenvalue search capability. In this Notebook, we will do a critical boron concentration search of a typical PWR pin cell. To use the search functionality, we must create a function which creates our model according to the input parameter we wish to search for (in this case, the boron concentration). This notebook will first create that function, and then, run the search. Step1: Create Parametrized Model To perform the search we will use the openmc.search_for_keff function. This function requires a different function be defined which creates an parametrized model to analyze. This model is required to be stored in an openmc.model.Model object. The first parameter of this function will be modified during the search process for our critical eigenvalue. Our model will be a pin-cell from the Multi-Group Mode Part II assembly, except this time the entire model building process will be contained within a function, and the Boron concentration will be parametrized. Step2: Search for the Critical Boron Concentration To perform the search we imply call the openmc.search_for_keff function and pass in the relvant arguments. For our purposes we will be passing in the model building function (build_model defined above), a bracketed range for the expected critical Boron concentration (1,000 to 2,500 ppm), the tolerance, and the method we wish to use. Instead of the bracketed range we could have used a single initial guess, but have elected not to in this example. Finally, due to the high noise inherent in using as few histories as are used in this example, our tolerance on the final keff value will be rather large (1.e-2) and a bisection method will be used for the search. Step3: Finally, the openmc.search_for_keff function also provided us with Lists of the guesses and corresponding keff values generated during the search process with OpenMC. Let's use that information to make a quick plot of the value of keff versus the boron concentration.
Python Code: # Initialize third-party libraries and the OpenMC Python API import matplotlib.pyplot as plt import numpy as np import openmc import openmc.model %matplotlib inline Explanation: This Notebook illustrates the usage of the OpenMC Python API's generic eigenvalue search capability. In this Notebook, we will do a critical boron concentration search of a typical PWR pin cell. To use the search functionality, we must create a function which creates our model according to the input parameter we wish to search for (in this case, the boron concentration). This notebook will first create that function, and then, run the search. End of explanation # Create the model. `ppm_Boron` will be the parametric variable. def build_model(ppm_Boron): # Create the pin materials fuel = openmc.Material(name='1.6% Fuel') fuel.set_density('g/cm3', 10.31341) fuel.add_element('U', 1., enrichment=1.6) fuel.add_element('O', 2.) zircaloy = openmc.Material(name='Zircaloy') zircaloy.set_density('g/cm3', 6.55) zircaloy.add_element('Zr', 1.) water = openmc.Material(name='Borated Water') water.set_density('g/cm3', 0.741) water.add_element('H', 2.) water.add_element('O', 1.) # Include the amount of boron in the water based on the ppm, # neglecting the other constituents of boric acid water.add_element('B', ppm_Boron * 1e-6) # Instantiate a Materials object materials = openmc.Materials([fuel, zircaloy, water]) # Create cylinders for the fuel and clad fuel_outer_radius = openmc.ZCylinder(r=0.39218) clad_outer_radius = openmc.ZCylinder(r=0.45720) # Create boundary planes to surround the geometry min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective') max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective') min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective') max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective') # Create fuel Cell fuel_cell = openmc.Cell(name='1.6% Fuel') fuel_cell.fill = fuel fuel_cell.region = -fuel_outer_radius # Create a clad Cell clad_cell = openmc.Cell(name='1.6% Clad') clad_cell.fill = zircaloy clad_cell.region = +fuel_outer_radius & -clad_outer_radius # Create a moderator Cell moderator_cell = openmc.Cell(name='1.6% Moderator') moderator_cell.fill = water moderator_cell.region = +clad_outer_radius & (+min_x & -max_x & +min_y & -max_y) # Create root Universe root_universe = openmc.Universe(name='root universe', universe_id=0) root_universe.add_cells([fuel_cell, clad_cell, moderator_cell]) # Create Geometry and set root universe geometry = openmc.Geometry(root_universe) # Finish with the settings file settings = openmc.Settings() settings.batches = 300 settings.inactive = 20 settings.particles = 1000 settings.run_mode = 'eigenvalue' # Create an initial uniform spatial source distribution over fissionable zones bounds = [-0.63, -0.63, -10, 0.63, 0.63, 10.] uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True) settings.source = openmc.source.Source(space=uniform_dist) # We dont need a tallies file so dont waste the disk input/output time settings.output = {'tallies': False} model = openmc.model.Model(geometry, materials, settings) return model Explanation: Create Parametrized Model To perform the search we will use the openmc.search_for_keff function. This function requires a different function be defined which creates an parametrized model to analyze. This model is required to be stored in an openmc.model.Model object. The first parameter of this function will be modified during the search process for our critical eigenvalue. 
Our model will be a pin-cell from the Multi-Group Mode Part II assembly, except this time the entire model building process will be contained within a function, and the Boron concentration will be parametrized. End of explanation # Perform the search crit_ppm, guesses, keffs = openmc.search_for_keff(build_model, bracket=[1000., 2500.], tol=1e-2, bracketed_method='bisect', print_iterations=True) print('Critical Boron Concentration: {:4.0f} ppm'.format(crit_ppm)) Explanation: Search for the Critical Boron Concentration To perform the search we imply call the openmc.search_for_keff function and pass in the relvant arguments. For our purposes we will be passing in the model building function (build_model defined above), a bracketed range for the expected critical Boron concentration (1,000 to 2,500 ppm), the tolerance, and the method we wish to use. Instead of the bracketed range we could have used a single initial guess, but have elected not to in this example. Finally, due to the high noise inherent in using as few histories as are used in this example, our tolerance on the final keff value will be rather large (1.e-2) and a bisection method will be used for the search. End of explanation plt.figure(figsize=(8, 4.5)) plt.title('Eigenvalue versus Boron Concentration') # Create a scatter plot using the mean value of keff plt.scatter(guesses, [keffs[i].nominal_value for i in range(len(keffs))]) plt.xlabel('Boron Concentration [ppm]') plt.ylabel('Eigenvalue') plt.show() Explanation: Finally, the openmc.search_for_keff function also provided us with Lists of the guesses and corresponding keff values generated during the search process with OpenMC. Let's use that information to make a quick plot of the value of keff versus the boron concentration. End of explanation
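As an optional cross-check (an addition, not part of the original notebook), the search history returned by openmc.search_for_keff can be fit with a straight line to estimate where keff crosses unity. This reuses guesses, keffs and crit_ppm from the cells above and assumes each keff exposes .nominal_value, as in the plotting cell.

# Added sketch: estimate the critical concentration from a linear fit of the search history
import numpy as np

ppm = np.array(guesses, dtype=float)
k = np.array([k_i.nominal_value for k_i in keffs])

m, b = np.polyfit(ppm, k, 1)          # keff ~= m * ppm + b
ppm_crit_fit = (1.0 - b) / m          # solve m * ppm + b = 1 for ppm
print('Critical Boron Concentration (linear fit): {:4.0f} ppm (bisection: {:4.0f} ppm)'.format(ppm_crit_fit, crit_ppm))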
12,812
Given the following text description, write Python code to implement the functionality described below step by step Description: Setting the EEG reference This tutorial describes how to set or change the EEG reference in MNE-Python. As usual we'll start by importing the modules we need, loading some example data &lt;sample-dataset&gt;, and cropping it to save memory. Since this tutorial deals specifically with EEG, we'll also restrict the dataset to just a few EEG channels so the plots are easier to see Step1: Background EEG measures a voltage (difference in electric potential) between each electrode and a reference electrode. This means that whatever signal is present at the reference electrode is effectively subtracted from all the measurement electrodes. Therefore, an ideal reference signal is one that captures none of the brain-specific fluctuations in electric potential, while capturing all of the environmental noise/interference that is being picked up by the measurement electrodes. In practice, this means that the reference electrode is often placed in a location on the subject's body and close to their head (so that any environmental interference affects the reference and measurement electrodes similarly) but as far away from the neural sources as possible (so that the reference signal doesn't pick up brain-based fluctuations). Typical reference locations are the subject's earlobe, nose, mastoid process, or collarbone. Each of these has advantages and disadvantages regarding how much brain signal it picks up (e.g., the mastoids pick up a fair amount compared to the others), and regarding the environmental noise it picks up (e.g., earlobe electrodes may shift easily, and have signals more similar to electrodes on the same side of the head). Even in cases where no electrode is specifically designated as the reference, EEG recording hardware will still treat one of the scalp electrodes as the reference, and the recording software may or may not display it to you (it might appear as a completely flat channel, or the software might subtract out the average of all signals before displaying, making it look like there is no reference). Setting or changing the reference channel If you want to recompute your data with a different reference than was used when the raw data were recorded and/or saved, MNE-Python provides the Step2: If a scalp electrode was used as reference but was not saved alongside the raw data (reference channels often aren't), you may wish to add it back to the dataset before re-referencing. For example, if your EEG system recorded with channel Fp1 as the reference but did not include Fp1 in the data file, using Step3: By default, Step4: .. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ Step5: Notice that the new reference (EEG 050) is now flat, while the original reference channel that we added back to the data (EEG 999) has a non-zero signal. Notice also that EEG 053 (which is marked as "bad" in raw.info['bads']) is not affected by the re-referencing. Setting average reference To set a "virtual reference" that is the average of all channels, you can use Step6: Creating the average reference as a projector If using an average reference, it is possible to create the reference as a Step7: Creating the average reference as a projector has a few advantages Step8: Using an infinite reference (REST) To use the "point at infinity" reference technique described in Step9: Using a bipolar reference To create a bipolar reference, you can use
Python Code: import os import mne sample_data_folder = mne.datasets.sample.data_path() sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_raw.fif') raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False) raw.crop(tmax=60).load_data() raw.pick(['EEG 0{:02}'.format(n) for n in range(41, 60)]) Explanation: Setting the EEG reference This tutorial describes how to set or change the EEG reference in MNE-Python. As usual we'll start by importing the modules we need, loading some example data &lt;sample-dataset&gt;, and cropping it to save memory. Since this tutorial deals specifically with EEG, we'll also restrict the dataset to just a few EEG channels so the plots are easier to see: End of explanation # code lines below are commented out because the sample data doesn't have # earlobe or mastoid channels, so this is just for demonstration purposes: # use a single channel reference (left earlobe) # raw.set_eeg_reference(ref_channels=['A1']) # use average of mastoid channels as reference # raw.set_eeg_reference(ref_channels=['M1', 'M2']) # use a bipolar reference (contralateral) # raw.set_bipolar_reference(anode='[F3'], cathode=['F4']) Explanation: Background EEG measures a voltage (difference in electric potential) between each electrode and a reference electrode. This means that whatever signal is present at the reference electrode is effectively subtracted from all the measurement electrodes. Therefore, an ideal reference signal is one that captures none of the brain-specific fluctuations in electric potential, while capturing all of the environmental noise/interference that is being picked up by the measurement electrodes. In practice, this means that the reference electrode is often placed in a location on the subject's body and close to their head (so that any environmental interference affects the reference and measurement electrodes similarly) but as far away from the neural sources as possible (so that the reference signal doesn't pick up brain-based fluctuations). Typical reference locations are the subject's earlobe, nose, mastoid process, or collarbone. Each of these has advantages and disadvantages regarding how much brain signal it picks up (e.g., the mastoids pick up a fair amount compared to the others), and regarding the environmental noise it picks up (e.g., earlobe electrodes may shift easily, and have signals more similar to electrodes on the same side of the head). Even in cases where no electrode is specifically designated as the reference, EEG recording hardware will still treat one of the scalp electrodes as the reference, and the recording software may or may not display it to you (it might appear as a completely flat channel, or the software might subtract out the average of all signals before displaying, making it look like there is no reference). Setting or changing the reference channel If you want to recompute your data with a different reference than was used when the raw data were recorded and/or saved, MNE-Python provides the :meth:~mne.io.Raw.set_eeg_reference method on :class:~mne.io.Raw objects as well as the :func:mne.add_reference_channels function. 
To use an existing channel as the new reference, use the :meth:~mne.io.Raw.set_eeg_reference method; you can also designate multiple existing electrodes as reference channels, as is sometimes done with mastoid references: End of explanation raw.plot() Explanation: If a scalp electrode was used as reference but was not saved alongside the raw data (reference channels often aren't), you may wish to add it back to the dataset before re-referencing. For example, if your EEG system recorded with channel Fp1 as the reference but did not include Fp1 in the data file, using :meth:~mne.io.Raw.set_eeg_reference to set (say) Cz as the new reference will then subtract out the signal at Cz without restoring the signal at Fp1. In this situation, you can add back Fp1 as a flat channel prior to re-referencing using :func:~mne.add_reference_channels. (Since our example data doesn't use the 10-20 electrode naming system_, the example below adds EEG 999 as the missing reference, then sets the reference to EEG 050.) Here's how the data looks in its original state: End of explanation # add new reference channel (all zero) raw_new_ref = mne.add_reference_channels(raw, ref_channels=['EEG 999']) raw_new_ref.plot() Explanation: By default, :func:~mne.add_reference_channels returns a copy, so we can go back to our original raw object later. If you wanted to alter the existing :class:~mne.io.Raw object in-place you could specify copy=False. End of explanation # set reference to `EEG 050` raw_new_ref.set_eeg_reference(ref_channels=['EEG 050']) raw_new_ref.plot() Explanation: .. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ End of explanation # use the average of all channels as reference raw_avg_ref = raw.copy().set_eeg_reference(ref_channels='average') raw_avg_ref.plot() Explanation: Notice that the new reference (EEG 050) is now flat, while the original reference channel that we added back to the data (EEG 999) has a non-zero signal. Notice also that EEG 053 (which is marked as "bad" in raw.info['bads']) is not affected by the re-referencing. Setting average reference To set a "virtual reference" that is the average of all channels, you can use :meth:~mne.io.Raw.set_eeg_reference with ref_channels='average'. Just as above, this will not affect any channels marked as "bad", nor will it include bad channels when computing the average. However, it does modify the :class:~mne.io.Raw object in-place, so we'll make a copy first so we can still go back to the unmodified :class:~mne.io.Raw object later: End of explanation raw.set_eeg_reference('average', projection=True) print(raw.info['projs']) Explanation: Creating the average reference as a projector If using an average reference, it is possible to create the reference as a :term:projector rather than subtracting the reference from the data immediately by specifying projection=True: End of explanation for title, proj in zip(['Original', 'Average'], [False, True]): fig = raw.plot(proj=proj, n_channels=len(raw)) # make room for title fig.subplots_adjust(top=0.9) fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold') Explanation: Creating the average reference as a projector has a few advantages: It is possible to turn projectors on or off when plotting, so it is easy to visualize the effect that the average reference has on the data. 
If additional channels are marked as "bad" or if a subset of channels are later selected, the projector will be re-computed to take these changes into account (thus guaranteeing that the signal is zero-mean). If there are other unapplied projectors affecting the EEG channels (such as SSP projectors for removing heartbeat or blink artifacts), EEG re-referencing cannot be performed until those projectors are either applied or removed; adding the EEG reference as a projector is not subject to that constraint. (The reason this wasn't a problem when we applied the non-projector average reference to raw_avg_ref above is that the empty-room projectors included in the sample data :file:.fif file were only computed for the magnetometers.) End of explanation raw.del_proj() # remove our average reference projector first sphere = mne.make_sphere_model('auto', 'auto', raw.info) src = mne.setup_volume_source_space(sphere=sphere, exclude=30., pos=15.) forward = mne.make_forward_solution(raw.info, trans=None, src=src, bem=sphere) raw_rest = raw.copy().set_eeg_reference('REST', forward=forward) for title, _raw in zip(['Original', 'REST (∞)'], [raw, raw_rest]): fig = _raw.plot(n_channels=len(raw), scalings=dict(eeg=5e-5)) # make room for title fig.subplots_adjust(top=0.9) fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold') Explanation: Using an infinite reference (REST) To use the "point at infinity" reference technique described in :footcite:Yao2001 requires a forward model, which we can create in a few steps. Here we use a fairly large spacing of vertices (pos = 15 mm) to reduce computation time; a 5 mm spacing is more typical for real data analysis: End of explanation raw_bip_ref = mne.set_bipolar_reference(raw, anode=['EEG 054'], cathode=['EEG 055']) raw_bip_ref.plot() Explanation: Using a bipolar reference To create a bipolar reference, you can use :meth:~mne.set_bipolar_reference along with the respective channel names for anode and cathode which creates a new virtual channel that takes the difference between two specified channels (anode and cathode) and drops the original channels by default. The new virtual channel will be annotated with the channel info of the anode with location set to (0, 0, 0) and coil type set to EEG_BIPOLAR by default. Here we use a contralateral/transverse bipolar reference between channels EEG 054 and EEG 055 as described in :footcite:YaoEtAl2019 which creates a new virtual channel named EEG 054-EEG 055. End of explanation
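One extra sanity check worth adding (it is not part of the MNE tutorial itself): after average referencing, the mean across the good EEG channels should be numerically zero at every sample. The sketch below reuses raw and raw_avg_ref from the cells above and relies only on mne.pick_types and Raw.get_data.

# Added sketch: confirm that the average reference removes the across-channel mean
import numpy as np

good_eeg = mne.pick_types(raw_avg_ref.info, meg=False, eeg=True, exclude='bads')

before = raw.get_data(picks=good_eeg).mean(axis=0)          # mean over channels, per sample
after = raw_avg_ref.get_data(picks=good_eeg).mean(axis=0)

print('max |across-channel mean| before re-referencing: {:.2e} V'.format(np.abs(before).max()))
print('max |across-channel mean| after average reference: {:.2e} V'.format(np.abs(after).max()))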
12,813
Given the following text description, write Python code to implement the functionality described below step by step Description: Sentiment Analysis of Movie Reviews This tutorial will guide you through the implementation of a recurrent neural network to analyze movie reviews on IMDB and decide if they are positive or negative reviews. The IMDB dataset consists of 25,000 reviews, each with a binary label (1 = positive, 0 = negative). Here is an example review Step1: The data dictionary contains four numpy arrays for the data Step2: Compute backend We first generate a backend to tell neon what hardware to run the model on. This is shared by all neon objects. Step3: To train the model, we use neon's ArrayIterator object which will iterate over these numpy arrays, returning a minibatch of data with each call to pass to the model. Step4: Model Specification For most of the layers, we randomly initialize the parameters either randomly uniform numbers or Xavier Glorot's initialization scheme. Step5: The network consists of sequential list of the following layers Step6: Cost, Optimizers, and Callbacks For training, we use the Adagrad optimizer and the Cross Entropy cost function. Step7: Callbacks allow the model to report its progress during the course of training. Here we tell neon to save the model every epoch . Step8: Train model To train the model, we call the fit() function and pass in the training set. Here we train for 2 epochs, meaning two complete passes through the dataset. Step9: Accuracy We can then measure the model's accuracy on both the training set but also the held-out validation set.
Python Code: import pickle as pkl data = pkl.load(open('data/imdb_data.pkl', 'r')) Explanation: Sentiment Analysis of Movie Reviews This tutorial will guide you through the implementation of a recurrent neural network to analyze movie reviews on IMDB and decide if they are positive or negative reviews. The IMDB dataset consists of 25,000 reviews, each with a binary label (1 = positive, 0 = negative). Here is an example review: “Okay, sorry, but I loved this movie. I just love the whole 80’s genre of these kind of movies, because you don’t see many like this...” -~CupidGrl~ The dataset contains a large vocabulary of words, and reviews have variable length ranging from tens to hundreds of words. We reduce the complexity of the dataset with several steps: 1. Limit the vocabulary size to vocab_size = 20000 words by replacing the less frequent words with a Out-Of-Vocab (OOV) character. 2. Truncate each example to max_len = 128 words. 3. For reviews with less than max_len words, pad the review with whitespace. This equalizes the review lengths across examples. We have already done this preprocessing and saved the data in a pickle file: imdb_data.pkl. The needed file can be downloaded from https://s3-us-west-1.amazonaws.com/nervana-course/imdb_data.pkl and placed in the data directory. End of explanation print data['X_train'].shape Explanation: The data dictionary contains four numpy arrays for the data: data['X_train'] is an array with shape (20009, 128) for 20009 example reviews, each with up to 128 words. data['Y_train'] is an array with shape (20009, 1) with a target label (positive=1, negative=0) for each review. data['X_valid'] is an array with shape (4991, 128) for the 4991 examples in the test set. data['Y_valid'] is an array with shape (4991, 1) for the 4991 examples in the test set. End of explanation from neon.backends import gen_backend be = gen_backend(backend='gpu', batch_size=128) Explanation: Compute backend We first generate a backend to tell neon what hardware to run the model on. This is shared by all neon objects. End of explanation from neon.data import ArrayIterator import numpy as np data['Y_train'] = np.array(data['Y_train'], dtype=np.int32) data['Y_valid'] = np.array(data['Y_valid'], dtype=np.int32) train_set = ArrayIterator(data['X_train'], data['Y_train'], nclass=2) valid_set = ArrayIterator(data['X_valid'], data['Y_valid'], nclass=2) Explanation: To train the model, we use neon's ArrayIterator object which will iterate over these numpy arrays, returning a minibatch of data with each call to pass to the model. End of explanation from neon.initializers import Uniform, GlorotUniform init_glorot = GlorotUniform() init_uniform = Uniform(-0.1/128, 0.1/128) Explanation: Model Specification For most of the layers, we randomly initialize the parameters either randomly uniform numbers or Xavier Glorot's initialization scheme. 
End of explanation from neon.layers import LSTM, Affine, Dropout, LookupTable, RecurrentSum from neon.transforms import Logistic, Tanh, Softmax from neon.models import Model layers = [ LookupTable(vocab_size=20000, embedding_dim=128, init=init_uniform), LSTM(output_size=128, init=init_glorot, activation=Tanh(), gate_activation=Logistic(), reset_cells=True), RecurrentSum(), Dropout(keep=0.5), Affine(nout=2, init=init_glorot, bias=init_glorot, activation=Softmax()) ] # create model object model = Model(layers=layers) Explanation: The network consists of sequential list of the following layers: LookupTable is a word embedding that maps from a sparse one-hot representation to dense word vectors. The embedding is learned from the data. LSTM is a recurrent layer with “long short-term memory” units. LSTM networks are good at learning temporal dependencies during training, and often perform better than standard RNN layers. RecurrentSum is a recurrent output layer that collapses over the time dimension of the LSTM by summing outputs from individual steps. Dropout performs regularization by silencing a random subset of the units during training. Affine is a fully connected layer for the binary classification of the outputs. End of explanation from neon.optimizers import Adagrad from neon.transforms import CrossEntropyMulti from neon.layers import GeneralizedCost cost = GeneralizedCost(costfunc=CrossEntropyMulti(usebits=True)) optimizer = Adagrad(learning_rate=0.01) Explanation: Cost, Optimizers, and Callbacks For training, we use the Adagrad optimizer and the Cross Entropy cost function. End of explanation from neon.callbacks import Callbacks model_file = 'imdb_lstm.pkl' callbacks = Callbacks(model, eval_set=valid_set, serialize=1, save_path=model_file) Explanation: Callbacks allow the model to report its progress during the course of training. Here we tell neon to save the model every epoch . End of explanation model.fit(train_set, optimizer=optimizer, num_epochs=2, cost=cost, callbacks=callbacks) Explanation: Train model To train the model, we call the fit() function and pass in the training set. Here we train for 2 epochs, meaning two complete passes through the dataset. End of explanation from neon.transforms import Accuracy print "Test Accuracy - {}".format(100 * model.eval(valid_set, metric=Accuracy())) print "Train Accuracy - {}".format(100 * model.eval(train_set, metric=Accuracy())) Explanation: Accuracy We can then measure the model's accuracy on both the training set but also the held-out validation set. End of explanation
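As an added cross-check (not from the original tutorial), the same accuracy can be recomputed by hand from the raw class probabilities. This assumes the installed neon version exposes Model.get_outputs(dataset) returning one probability row per example in iterator order; treat it as a sketch rather than canonical neon usage.

# Added sketch: recompute validation accuracy from per-review probabilities
import numpy as np

probs = model.get_outputs(valid_set)        # class probabilities, one row per review
preds = np.argmax(probs, axis=1)            # 1 = positive, 0 = negative
truth = data['Y_valid'].ravel()

n = min(len(preds), len(truth))             # guard against any minibatch padding
manual_acc = 100.0 * np.mean(preds[:n] == truth[:n])
print("Manual validation accuracy - {:.2f}".format(manual_acc))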
12,814
Given the following text description, write Python code to implement the functionality described below step by step Description: Análisis de los datos obtenidos Uso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 13 de Agosto del 2015 Los datos del experimento Step1: Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica Step2: Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento Step3: Filtrado de datos Las muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas. Step4: Representación de X/Y Step5: Analizamos datos del ratio Step6: Límites de calidad Calculamos el número de veces que traspasamos unos límites de calidad. $Th^+ = 1.85$ and $Th^- = 1.65$
Python Code: #Importamos las librerías utilizadas import numpy as np import pandas as pd import seaborn as sns #Mostramos las versiones usadas de cada librerías print ("Numpy v{}".format(np.__version__)) print ("Pandas v{}".format(pd.__version__)) print ("Seaborn v{}".format(sns.__version__)) #Abrimos el fichero csv con los datos de la muestra datos = pd.read_csv('ensayo2.CSV') %pylab inline #Almacenamos en una lista las columnas del fichero con las que vamos a trabajar columns = ['Diametro X','Diametro Y', 'RPM TRAC'] #Mostramos un resumen de los datos obtenidoss datos[columns].describe() #datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']] Explanation: Análisis de los datos obtenidos Uso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 13 de Agosto del 2015 Los datos del experimento: * Hora de inicio: 12:06 * Hora final : 12:26 * Filamento extruido: 314Ccm * $T: 150ºC$ * $V_{min} tractora: 1.5 mm/s$ * $V_{max} tractora: 5.3 mm/s$ * Los incrementos de velocidades en las reglas del sistema experto son distintas: * En los caso 3 y 5 se mantiene un incremento de +2. * En los casos 4 y 6 se reduce el incremento a -1. Este experimento dura 20min por que a simple vista se ve que no aporta ninguna mejora, de hecho, añade más inestabilidad al sitema. Se opta por añadir más reglas al sistema, e intentar hacer que la velocidad de tracción no llegue a los límites. End of explanation graf = datos.ix[:, "Diametro X"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r') graf.axhspan(1.65,1.85, alpha=0.2) graf.set_xlabel('Tiempo (s)') graf.set_ylabel('Diámetro (mm)') #datos['RPM TRAC'].plot(secondary_y='RPM TRAC') datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes') Explanation: Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica End of explanation plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.') Explanation: Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento End of explanation datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)] #datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes') Explanation: Filtrado de datos Las muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas. End of explanation plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.') Explanation: Representación de X/Y End of explanation ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y'] ratio.describe() rolling_mean = pd.rolling_mean(ratio, 50) rolling_std = pd.rolling_std(ratio, 50) rolling_mean.plot(figsize=(12,6)) # plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5) ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5)) Explanation: Analizamos datos del ratio End of explanation Th_u = 1.85 Th_d = 1.65 data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) | (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)] data_violations.describe() data_violations.plot(subplots=True, figsize=(12,12)) Explanation: Límites de calidad Calculamos el número de veces que traspasamos unos límites de calidad. $Th^+ = 1.85$ and $Th^- = 1.65$ End of explanation
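A short added summary (not in the original notebook) that turns the quality-limit filter above into a single percentage for the run, reusing the datos dataframe and the thresholds Th+ = 1.85 mm and Th- = 1.65 mm.

# Added sketch: fraction of samples outside the quality band
Th_u = 1.85
Th_d = 1.65

total = len(datos)
fuera = ((datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
         (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)).sum()

print('Samples outside the quality band: {} of {} ({:.1f}%)'.format(fuera, total, 100.0 * fuera / total))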
12,815
Given the following text description, write Python code to implement the functionality described below step by step Description: Remote Interactive Task Manager LSASS Dump Metadata | | | | Step1: Download & Process Mordor Dataset Step2: Analytic I Look for taskmgr creating files which name contains the string lsass and with extension .dmp. | Data source | Event Provider | Relationship | Event | | Step3: Analytic II Look for task manager access lsass and with functions from dbgcore.dll or dbghelp.dll libraries | Data source | Event Provider | Relationship | Event | | Step4: Analytic III Look for any process accessing lsass and with functions from dbgcore.dll or dbghelp.dll libraries | Data source | Event Provider | Relationship | Event | | Step5: Analytic IV Look for combinations of process access and process creation to get more context around potential lsass dump form task manager or other binaries | Data source | Event Provider | Relationship | Event | | Step6: Analytic V Look for binaries accessing lsass that are running under the same logon context of a user over an RDP session | Data source | Event Provider | Relationship | Event | |
Python Code: from openhunt.mordorutils import * spark = get_spark() Explanation: Remote Interactive Task Manager LSASS Dump Metadata | | | |:------------------|:---| | collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] | | creation date | 2019/10/30 | | modification date | 2020/09/20 | | playbook related | ['WIN-1904101010'] | Hypothesis Adversaries might be RDPing to computers in my environment and interactively dumping the memory contents of LSASS with task manager. Technical Context None Offensive Tradecraft The Windows Task Manager may be used to dump the memory space of lsass.exe to disk for processing with a credential access tool such as Mimikatz. This is performed by launching Task Manager as a privileged user, selecting lsass.exe, and clicking “Create dump file”. This saves a dump file to disk with a deterministic name that includes the name of the process being dumped. Mordor Test Data | | | |:----------|:----------| | metadata | https://mordordatasets.com/notebooks/small/windows/06_credential_access/SDWIN-191027055035.html | | link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/credential_access/host/rdp_interactive_taskmanager_lsass_dump.zip | Analytics Initialize Analytics Engine End of explanation mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/credential_access/host/rdp_interactive_taskmanager_lsass_dump.zip" registerMordorSQLTable(spark, mordor_file, "mordorTable") Explanation: Download & Process Mordor Dataset End of explanation df = spark.sql( ''' SELECT `@timestamp`, Hostname, Image, TargetFilename, ProcessGuid FROM mordorTable WHERE Channel = "Microsoft-Windows-Sysmon/Operational" AND EventID = 11 AND Image LIKE "%taskmgr.exe" AND lower(TargetFilename) RLIKE ".*lsass.*\.dmp" ''' ) df.show(10,False) Explanation: Analytic I Look for taskmgr creating files which name contains the string lsass and with extension .dmp. 
| Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 | End of explanation df = spark.sql( ''' SELECT `@timestamp`, Hostname, SourceImage, TargetImage, GrantedAccess FROM mordorTable WHERE Channel = "Microsoft-Windows-Sysmon/Operational" AND EventID = 10 AND lower(SourceImage) LIKE "%taskmgr.exe" AND lower(TargetImage) LIKE "%lsass.exe" AND (lower(CallTrace) RLIKE ".*dbgcore\.dll.*" OR lower(CallTrace) RLIKE ".*dbghelp\.dll.*") ''' ) df.show(10,False) Explanation: Analytic II Look for task manager access lsass and with functions from dbgcore.dll or dbghelp.dll libraries | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Sysmon/Operational | Process accessed Process | 10 | End of explanation df = spark.sql( ''' SELECT `@timestamp`, Hostname, SourceImage, TargetImage, GrantedAccess FROM mordorTable WHERE Channel = "Microsoft-Windows-Sysmon/Operational" AND EventID = 10 AND lower(TargetImage) LIKE "%lsass.exe" AND (lower(CallTrace) RLIKE ".*dbgcore\.dll.*" OR lower(CallTrace) RLIKE ".*dbghelp\.dll.*") ''' ) df.show(10,False) Explanation: Analytic III Look for any process accessing lsass and with functions from dbgcore.dll or dbghelp.dll libraries | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Sysmon/Operational | Process accessed Process | 10 | End of explanation df = spark.sql( ''' SELECT o.`@timestamp`, o.Hostname, o.Image, o.LogonId, o.ProcessGuid, a.SourceProcessGUID, o.CommandLine FROM mordorTable o INNER JOIN ( SELECT Hostname,SourceProcessGUID FROM mordorTable WHERE Channel = "Microsoft-Windows-Sysmon/Operational" AND EventID = 10 AND lower(TargetImage) LIKE "%lsass.exe" AND (lower(CallTrace) RLIKE ".*dbgcore\.dll.*" OR lower(CallTrace) RLIKE ".*dbghelp\.dll.*") ) a ON o.ProcessGuid = a.SourceProcessGUID WHERE o.Channel = "Microsoft-Windows-Sysmon/Operational" AND o.EventID = 1 ''' ) df.show(10,False) Explanation: Analytic IV Look for combinations of process access and process creation to get more context around potential lsass dump form task manager or other binaries | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Sysmon/Operational | Process accessed Process | 10 | | Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 | End of explanation df = spark.sql( ''' SELECT o.`@timestamp`, o.Hostname, o.SessionName, o.AccountName, o.ClientName, o.ClientAddress, a.Image, a.CommandLine FROM mordorTable o INNER JOIN ( SELECT LogonId, Image, CommandLine FROM ( SELECT o.Image, o.LogonId, o.CommandLine FROM mordorTable o INNER JOIN ( SELECT Hostname,SourceProcessGUID FROM mordorTable WHERE Channel = "Microsoft-Windows-Sysmon/Operational" AND EventID = 10 AND lower(TargetImage) LIKE "%lsass.exe" AND (lower(CallTrace) RLIKE ".*dbgcore\.dll.*" OR lower(CallTrace) RLIKE ".*dbghelp\.dll.*") ) a ON o.ProcessGuid = a.SourceProcessGUID WHERE o.Channel = "Microsoft-Windows-Sysmon/Operational" AND o.EventID = 1 ) ) a ON o.LogonID = a.LogonId WHERE lower(o.Channel) = "security" AND o.EventID = 4778 ''' ) df.show(10,False) Explanation: Analytic V Look for binaries accessing lsass that are running under the same logon context of a user over an RDP session | Data source | Event 
Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Process | Microsoft-Windows-Sysmon/Operational | Process accessed Process | 10 | | Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 | | Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4778 | End of explanation
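One optional add-on (not part of the original playbook): a quick aggregate over the same mordorTable shows which source images opened handles to lsass.exe at all, as context around Analytics I-V. It uses only fields already referenced above (Channel, EventID, SourceImage, TargetImage, GrantedAccess).

# Added sketch: summary of Sysmon process-access events targeting lsass.exe
df = spark.sql(
    '''
    SELECT SourceImage, GrantedAccess, COUNT(*) AS events
    FROM mordorTable
    WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
        AND EventID = 10
        AND lower(TargetImage) LIKE "%lsass.exe"
    GROUP BY SourceImage, GrantedAccess
    ORDER BY events DESC
    '''
)
df.show(10,False)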
12,816
Given the following text description, write Python code to implement the functionality described below step by step Description: Importing and Processing CO2 Respiration Data from GC-MS This script will convert output from the GC-MS from multiple sampling timepoints into a table. It can also calculate the mols C based on peak area and prep a graph in ggplot. Last Modified by R. Wilhelm on October 20th, 2017 Step 1 Step1: Define Import Function Step2: Step 2 Step3: Step 3 Step4: Define Functions Used in R Step5: Step 4 Step6: Step 5 Step7: Step 6 Step8: Step 7 Step9: Step 8 Step10: Step 9 Step11: Step 10
Python Code: ## Provide the directory the contains subdirectories with timepoints # example: '/home/roli/PROJECT/ which would contain sub-directories corresponding to timepoints T1, T2, T3 ... that containing the text output from the GC-MS directory = '/home/roli/scripts/gcms/example_data/' ## Name the output for GC-MS data refinement (still raw, but in tabular form) output_name = 'example' ## Provide an 'events' table in '.tsv' format which contains at least three columns: 'Timepoint', 'Sampling Date', and 'Sampling Time' # note: It is critical to include T0 (i.e. the start date and time) #Timepoint Date Time #T0 21/04/17 21:30:00 #T1 22/04/17 09:30:00 events = 'events.tsv' ## Provide information on the volume of microcosm sampled (in L) # note: you'll have to alter the code for calculating mol.C if you've used mixed container sizes. microcosm_size = 0.25 # ## Provide name and concentration of each standard (in ppm) import pandas as pd standards = pd.DataFrame({'ppm': [0, 855, 1701, 3422, 8510, 17227, 34488]}, index=['standard1','standard2','standard3','standard4','standard5','standard6','standard7']) Explanation: Importing and Processing CO2 Respiration Data from GC-MS This script will convert output from the GC-MS from multiple sampling timepoints into a table. It can also calculate the mols C based on peak area and prep a graph in ggplot. Last Modified by R. Wilhelm on October 20th, 2017 Step 1: User Input End of explanation import os, re, glob, sys from collections import defaultdict def import_me(directory): import_dict = defaultdict(list) for dir_path in os.walk(directory): if dir_path[0] != directory: dir_path = (str(dir_path[0])) dir_name = re.sub(directory,"",dir_path) for file in glob.glob(dir_path+"/*.txt"): name = re.sub(dir_path+"/","",file) name = re.sub(".txt","",name) import_dict[dir_name].append([name, file]) return import_dict Explanation: Define Import Function End of explanation output = open(directory+"/"+output_name+".raw.co2.table.tsv","w") output.write("timepoint\tsampleID\tion\tconcentration\tpeak area\trt\n") input_dictionary = import_me(directory) for timepoint, sample_files in input_dictionary.items(): for sample_file in sample_files: name = sample_file[0] file = sample_file[1] for line in open(file,"r"): if re.search("TIC|m/z 44|m/z 45",line): line = line.strip() line = line.split("\t") ion = line[1] retention = line[5] area = line[9] concentration = line[11] output.write(timepoint+"\t"+name+"\t"+ion+"\t"+concentration+"\t"+area+"\t"+retention+"\n") output.close() Explanation: Step 2: Convert GC-MS Raw Data to Table End of explanation ## Setup R-Magic for Jupyter Notebooks import rpy2 import pandas as pd %load_ext rpy2.ipython ## Description: use Pandas to create a dataframe and then pipe that to R # Import CO2 Data co2_data = pd.read_csv(directory+"/"+output_name+".raw.co2.table.tsv", sep="\t") %R -i co2_data # Segregate Standards %R raw_standards <- co2_data[grep("standard",co2_data$sampleID),] %R co2_data <- co2_data[-grep("standard",co2_data$sampleID),] # Import Standards %R -i standards # Import Microcosm Size %R -i microcosm_size # Import Analysis Directory %R -i directory %R -i output_name # Import Events try: events = pd.read_csv(directory+"/"+events, sep="\t") %R -i events except: pass Explanation: Step 3: Import and Work-up in R End of explanation %%R ######################## ## Convert ppm to mols C calc_mol_C = function(ppm,ion,volume.L){ # moles (n) = PV / RT # x = ppm temp.K=294.261 pressure.atm=1 R=0.08206 ppm = as.numeric(ppm) ion = 
as.numeric(ion) mol.volume = (pressure.atm * volume.L) / (R * temp.K) # mol gas in container mol.CO2 = mol.volume * (ppm / 1000000) # fraction of mol as CO2 if (ion == 44){ mol.C = mol.CO2 * 12/44 # fraction of CO2 that is C } else { mol.C = mol.CO2 * 13/45 # fraction of CO2 that is C } return(mol.C) } ######################### #Calculate Standard Error # Taken from http://www.cookbook-r.com/Graphs/Plotting_means_and_error_bars_(ggplot2)/ summarySE <- function(data=NULL, measurevar, groupvars=NULL, na.rm=FALSE, conf.interval=.95, .drop=TRUE) { library(plyr) # New version of length which can handle NA's: if na.rm==T, don't count them length2 <- function (x, na.rm=FALSE) { if (na.rm) sum(!is.na(x)) else length(x) } # This does the summary. For each group's data frame, return a vector with # N, mean, and sd datac <- ddply(data, groupvars, .drop=.drop, .fun = function(xx, col) { c(N = length2(xx[[col]], na.rm=na.rm), mean = mean (xx[[col]], na.rm=na.rm), sd = sd (xx[[col]], na.rm=na.rm) ) }, measurevar ) # Rename the "mean" column datac <- rename(datac, c("mean" = measurevar)) datac$se <- datac$sd / sqrt(datac$N) # Calculate standard error of the mean # Confidence interval multiplier for standard error # Calculate t-statistic for confidence interval: # e.g., if conf.interval is .95, use .975 (above/below), and use df=N-1 ciMult <- qt(conf.interval/2 + .5, datac$N-1) datac$ci <- datac$se * ciMult return(datac) } ############################################## ### Join Data and Time into Single POSIX stamp time_converter <- function(date, time){ x <- data.frame(posix = rep(NA,length(date))) x$posix <- as.POSIXct(x$posix) ## This is ugly becaue it is trying to catch various irregularities in time and date input for (n in 1:length(date)){ if (!is.na(as.POSIXct(strptime(paste(date[n],time[n],sep=" "), '%d/%m/%y %R'), tz="EST"))){ x$posix[n] = as.POSIXct(strptime(paste(date[n],time[n],sep=" "), '%d/%m/%y %R'), tz="EST") } else if (!is.na(as.POSIXct(strptime(paste(date[n],time[n],sep=" "), '%d/%m/%Y %R'), tz="EST"))) { x$posix[n] = as.POSIXct(strptime(paste(date[n],time[n],sep=" "), '%d/%m/%Y %R'), tz="EST") } else { x$posix[n] = as.POSIXct(strptime(paste(date[n],time[n],sep=" "), '%d-%m-%y %R'), tz="EST") } } return(x) } #################################################### ### Calculate Duration from Start for All Timepoints duration_calculator <- function(start, time_series){ return(as.numeric(difftime(time_series, start), units='hours')) } Explanation: Define Functions Used in R End of explanation %%R # Combine all dates and times into single 'POSIX' time-stamp posix <- time_converter(events$date, events$time) events <- cbind(events, posix) # Calculate Duration for Each Timepoint start <- subset(events, timepoint == "T0") events <- subset(events, timepoint != "T0") events$duration <- duration_calculator(start$posix, events$posix) # duration_calculator(start_time, all_time_points) # Merge CO2 Data with duration co2_data <- merge(co2_data, events, by = "timepoint") # print current work-up print(head(co2_data)) Explanation: Step 4: Calculate Durations End of explanation %%R ## Note: This script assumes very low inter-run variability ## From my experience, preparing the standards by hand introduces greater inter-run variability than the instrument. 
## Therefore, this script will average all standard data and calculate ppm based off of this average ## a bit of clean-up raw_standards$ion <- gsub("m/z ","",raw_standards$ion) raw_standards <- subset(raw_standards, ion != "TIC") ## Concentrations are calculated based on total CO2 (converting 13C to 12C-equivalent is negligible) # sum ion 44 and ion 45 raw_standards <- ddply(raw_standards, ~ timepoint + sampleID, summarise, total.peak.area = sum(peak.area)) raw_standards$combo <- paste(raw_standards$timepoint, raw_standards$sampleID,sep="_") ## regress standards standards$sampleID <- rownames(standards) raw_standards <- merge(raw_standards, standards, by = "sampleID") ## Plot Curve plot <- ggplot(raw_standards, aes(total.peak.area, ppm, label = combo)) + geom_point() + geom_smooth(method=lm, se=F) + ggtitle("Raw Standards") print(plot + geom_label(size = 4, hjust = -0.1)) ## Remove Outliers from Standard Curve remove_me <- c("T2_standard7","T2_standard6") if (length(remove_me) > 0){ refined_standards <- raw_standards[-which(raw_standards$combo %in% remove_me),] plot <- ggplot(refined_standards, aes(total.peak.area, ppm, label = combo)) + geom_point() + geom_smooth(method=lm, se=F) + ggtitle("Refined Standards") print(plot + geom_label(size = 4, hjust = -0.1)) } else { refined_standards <- raw_standards } ## Calculate regression coefficients (force through zero) m <- as.numeric(coef(lm(ppm ~ total.peak.area -1, refined_standards))[1]) #b <- as.numeric(coef(lm(ppm ~ total.peak.area, refined_standards))[1]) Explanation: Step 5: Calculate Standard Curve End of explanation %%R ## a bit of clean-up co2_data$ion <- gsub("m/z ","", co2_data$ion) co2_data <- subset(co2_data, ion != "TIC") ## Calculate ppm based on curve co2_data$adj.conc <- m*co2_data$peak.area print(head(co2_data)) Explanation: Step 6: Convert peak area to ppm End of explanation %%R # Calculate mol co2_data$mol.C <- apply(co2_data[,c("adj.conc","ion")], 1, function(x) calc_mol_C(x[1],x[2],microcosm_size)) print(head(co2_data)) Explanation: Step 7: Convert ppm to mols C End of explanation %%R ## Calculate Cumulative Respiration count = 1 for (sample in unique(co2_data$sampleID)){ for (i in c(44, 45)){ foo<-subset(co2_data, sampleID == sample & ion == i) foo<-foo[order(foo$duration),] foo$cum.mol.C <- cumsum(foo$mol.C) if (count == 1){ cumulative <- foo count = count + 1 } else { cumulative <- rbind(cumulative, foo) } } } print(head(cumulative)) Explanation: Step 8: Calculate Cumulative Respiration End of explanation %%R # Plot Curve 1 : Respiration over Sampling Intervals plot_me <- summarySE(co2_data, measurevar="mol.C", groupvars=c("sampleID","duration","ion")) print(ggplot(plot_me, aes(duration, mol.C, color = sampleID)) + geom_point() + geom_smooth(method = "lm", formula = y ~ splines::bs(x, 3), se = FALSE) + facet_grid(~ion) + ggtitle("Net CO2 Flux across Sampling Intervals")) # Plot Curve 2 : Cumulative Respiration print(ggplot(cumulative, aes(duration, cum.mol.C, color = sampleID)) + geom_point() + geom_smooth(method = "lm", formula = y ~ splines::bs(x, 3), se = FALSE) + facet_grid(~ion) + ggtitle("Cumulative CO2 Over Time")) Explanation: Step 9: Plot Curves End of explanation %%R ## Export as '.csv' for safe-keeping write.csv(co2_data, file = paste(directory,"/",output_name,".final.csv",sep="")) ## Export as '.rds' for analysis in R saveRDS(co2_data, file = paste(directory,"/",output_name,".final.rds",sep="")) Explanation: Step 10: Export and Save Dataset End of explanation
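For readers following the arithmetic in Python rather than R, here is an added sketch that mirrors the calc_mol_C() helper defined in the R block above; it assumes the same constants (294.261 K, 1 atm, R = 0.08206 L*atm/(mol*K)) and the 0.25 L microcosm volume set in Step 1.

# Added sketch: Python mirror of the R calc_mol_C() ideal-gas conversion
def calc_mol_c(ppm, ion, volume_l=0.25, temp_k=294.261, pressure_atm=1.0, r=0.08206):
    mol_gas = (pressure_atm * volume_l) / (r * temp_k)   # total mol of gas in the container (PV = nRT)
    mol_co2 = mol_gas * (ppm / 1e6)                      # fraction of that gas which is CO2
    if ion == 44:
        return mol_co2 * 12.0 / 44.0                     # C fraction of 12C-CO2 (m/z 44)
    return mol_co2 * 13.0 / 45.0                         # C fraction of 13C-CO2 (m/z 45)

# e.g. 1000 ppm of 12C-CO2 in the 0.25 L microcosms used here
print(calc_mol_c(1000, ion=44))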
12,817
Given the following text description, write Python code to implement the functionality described below step by step Description: Emojify! Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier. Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing "Congratulations on the promotion! Lets get coffee and talk. Love you!" the emojifier can automatically turn this into "Congratulations on the promotion! 👍 Lets get coffee and talk. ☕️ Love you! ❤️" You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. But using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji even if those words don't even appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set. In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM. Lets get started! Run the following cell to load the package you are going to use. Step1: 1 - Baseline model Step2: Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red. Step3: 1.2 - Overview of the Emojifier-V1 In this part, you are going to implement a baseline model called "Emojifier-v1". <center> <img src="images/image_1.png" style="width Step4: Let's see what convert_to_one_hot() did. Feel free to change index to print out different values. Step5: All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model! 1.3 - Implementing Emojifier-V1 As shown in Figure (2), the first step is to convert an input sentence into the word vector representation, which then get averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the word_to_vec_map, which contains all the vector representations. Step6: You've loaded Step8: Exercise Step10: Expected Output Step11: Run the next cell to train your model and learn the softmax parameters (W,b). Step12: Expected Output (on a subset of iterations) Step13: Expected Output Step14: Amazing! Because adore has a similar embedding as love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work? Note though that it doesn't get "not feeling happy" correct. This algorithm ignores word ordering, so is not good at understanding phrases like "not happy." Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class). 
Step15: <font color='blue'> What you should remember from this part Step17: 2.1 - Overview of the model Here is the Emojifier-v2 you will implement Step18: Run the following cell to check what sentences_to_indices() does, and check your results. Step20: Expected Output Step22: Expected Output Step23: Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture, it uses "20,223,927" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001*50 = 20,000,050 non-trainable parameters. Step24: As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics your are want to use. Compile your model using categorical_crossentropy loss, adam optimizer and ['accuracy'] metrics Step25: It's time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors). Step26: Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32. Step27: Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set. Step28: You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples. Step29: Now you can try it on your own example. Write your own sentence below.
Python Code: import numpy as np from emo_utils import * import emoji import matplotlib.pyplot as plt %matplotlib inline Explanation: Emojify! Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier. Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing "Congratulations on the promotion! Lets get coffee and talk. Love you!" the emojifier can automatically turn this into "Congratulations on the promotion! 👍 Lets get coffee and talk. ☕️ Love you! ❤️" You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. But using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji even if those words don't even appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set. In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM. Lets get started! Run the following cell to load the package you are going to use. End of explanation X_train, Y_train = read_csv('data/train_emoji.csv') X_test, Y_test = read_csv('data/tesss.csv') maxLen = len(max(X_train, key=len).split()) Explanation: 1 - Baseline model: Emojifier-V1 1.1 - Dataset EMOJISET Let's start by building a simple baseline classifier. You have a tiny dataset (X, Y) where: - X contains 127 sentences (strings) - Y contains a integer label between 0 and 4 corresponding to an emoji for each sentence <img src="images/data_set.png" style="width:700px;height:300px;"> <caption><center> Figure 1: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. </center></caption> Let's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples). End of explanation index = 59 print(X_train[index], label_to_emoji(Y_train[index])) Explanation: Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red. End of explanation Y_oh_train = convert_to_one_hot(Y_train, C = 5) Y_oh_test = convert_to_one_hot(Y_test, C = 5) Explanation: 1.2 - Overview of the Emojifier-V1 In this part, you are going to implement a baseline model called "Emojifier-v1". <center> <img src="images/image_1.png" style="width:900px;height:300px;"> <caption><center> Figure 2: Baseline model (Emojifier-V1).</center></caption> </center> The input of the model is a string corresponding to a sentence (e.g. "I love you). In the code, the output will be a probability vector of shape (1,5), that you then pass in an argmax layer to extract the index of the most likely emoji output. 
To get our labels into a format suitable for training a softmax classifier, let's convert $Y$ from its current shape $(m, 1)$ into a "one-hot representation" $(m, 5)$, where each row is a one-hot vector giving the label of one example. You can do so using this next code snippet. Here, Y_oh stands for "Y-one-hot" in the variable names Y_oh_train and Y_oh_test: End of explanation index = 59 print(Y_train[index], "is converted into one hot", Y_oh_train[index]) Explanation: Let's see what convert_to_one_hot() did. Feel free to change index to print out different values. End of explanation word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt') Explanation: All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model! 1.3 - Implementing Emojifier-V1 As shown in Figure (2), the first step is to convert an input sentence into its word vector representations, which then get averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the word_to_vec_map, which contains all the vector representations. End of explanation word = "cucumber" index = 289846 print("the index of", word, "in the vocabulary is", word_to_index[word]) print("the", str(index) + "th word in the vocabulary is", index_to_word[index]) Explanation: You've loaded: - word_to_index: dictionary mapping from words to their indices in the vocabulary (400,001 words, with the valid indices ranging from 0 to 400,000) - index_to_word: dictionary mapping from indices to their corresponding words in the vocabulary - word_to_vec_map: dictionary mapping words to their GloVe vector representation. Run the following cell to check if it works. End of explanation # GRADED FUNCTION: sentence_to_avg def sentence_to_avg(sentence, word_to_vec_map): Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word and averages its value into a single vector encoding the meaning of the sentence. Arguments: sentence -- string, one training example from X word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation Returns: avg -- average vector encoding information about the sentence, numpy-array of shape (50,) ### START CODE HERE ### # Step 1: Split sentence into list of lower case words (≈ 1 line) words = sentence.lower().split() # Initialize the average word vector, should have the same shape as your word vectors. avg = np.zeros((50,)) # Step 2: average the word vectors. You can loop over the words in the list "words". for w in words: avg += word_to_vec_map[w] avg = avg/len(words) ### END CODE HERE ### return avg avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map) print("avg = ", avg) Explanation: Exercise: Implement sentence_to_avg(). You will need to carry out two steps: 1. Convert every sentence to lower-case, then split the sentence into a list of words. X.lower() and X.split() might be useful. 2. For each word in the sentence, access its GloVe representation. Then, average all these values. End of explanation # GRADED FUNCTION: model def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400): Model to train word vector representations in numpy. 
Arguments: X -- input data, numpy array of sentences as strings, of shape (m, 1) Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1) word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation learning_rate -- learning_rate for the stochastic gradient descent algorithm num_iterations -- number of iterations Returns: pred -- vector of predictions, numpy-array of shape (m, 1) W -- weight matrix of the softmax layer, of shape (n_y, n_h) b -- bias of the softmax layer, of shape (n_y,) np.random.seed(1) # Define number of training examples m = Y.shape[0] # number of training examples n_y = 5 # number of classes n_h = 50 # dimensions of the GloVe vectors # Initialize parameters using Xavier initialization W = np.random.randn(n_y, n_h) / np.sqrt(n_h) b = np.zeros((n_y,)) # Convert Y to Y_onehot with n_y classes Y_oh = convert_to_one_hot(Y, C = n_y) # Optimization loop for t in range(num_iterations): # Loop over the number of iterations for i in range(m): # Loop over the training examples ### START CODE HERE ### (≈ 4 lines of code) # Average the word vectors of the words from the i'th training example avg = sentence_to_avg(X[i], word_to_vec_map) # Forward propagate the avg through the softmax layer z = np.dot(W, avg)+b a = softmax(z) # Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax) cost = -np.sum(np.multiply(Y_oh, np.log(a))) ### END CODE HERE ### # Compute gradients dz = a - Y_oh[i] dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h)) db = dz # Update parameters with Stochastic Gradient Descent W = W - learning_rate * dW b = b - learning_rate * db if t % 100 == 0: print("Epoch: " + str(t) + " --- cost = " + str(cost)) pred = predict(X, Y, W, b, word_to_vec_map) return pred, W, b print(X_train.shape) print(Y_train.shape) print(np.eye(5)[Y_train.reshape(-1)].shape) print(X_train[0]) print(type(X_train)) Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4]) print(Y.shape) X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear', 'Lets go party and drinks','Congrats on the new job','Congratulations', 'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you', 'You totally deserve this prize', 'Let us go play football', 'Are you down for football this afternoon', 'Work hard play harder', 'It is suprising how people can be dumb sometimes', 'I am very disappointed','It is the best day in my life', 'I think I will end up alone','My life is so boring','Good job', 'Great so awesome']) print(X.shape) print(np.eye(5)[Y_train.reshape(-1)].shape) print(type(X_train)) Explanation: Expected Output: <table> <tr> <td> **avg= ** </td> <td> [-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983 -0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867 0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767 0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061 0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265 1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925 -0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333 -0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433 0.1445417 0.09808667] </td> </tr> </table> Model You now have all the pieces to finish implementing the model() function. 
After using sentence_to_avg() you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax's parameters. Exercise: Implement the model() function described in Figure (2). Assuming here that $Yoh$ ("Y one hot") is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are: $$ z^{(i)} = W . avg^{(i)} + b$$ $$ a^{(i)} = softmax(z^{(i)})$$ $$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Yoh^{(i)}_k * log(a^{(i)}_k)$$ It is possible to come up with a more efficient vectorized implementation. But since we are using a for-loop to convert the sentences one at a time into the avg^{(i)} representation anyway, let's not bother this time. We provided you a function softmax(). End of explanation pred, W, b = model(X_train, Y_train, word_to_vec_map) print(pred) Explanation: Run the next cell to train your model and learn the softmax parameters (W,b). End of explanation print("Training set:") pred_train = predict(X_train, Y_train, W, b, word_to_vec_map) print('Test set:') pred_test = predict(X_test, Y_test, W, b, word_to_vec_map) Explanation: Expected Output (on a subset of iterations): <table> <tr> <td> **Epoch: 0** </td> <td> cost = 1.95204988128 </td> <td> Accuracy: 0.348484848485 </td> </tr> <tr> <td> **Epoch: 100** </td> <td> cost = 0.0797181872601 </td> <td> Accuracy: 0.931818181818 </td> </tr> <tr> <td> **Epoch: 200** </td> <td> cost = 0.0445636924368 </td> <td> Accuracy: 0.954545454545 </td> </tr> <tr> <td> **Epoch: 300** </td> <td> cost = 0.0343226737879 </td> <td> Accuracy: 0.969696969697 </td> </tr> </table> Great! Your model has pretty high accuracy on the training set. Lets now see how it does on the test set. 1.4 - Examining test set performance End of explanation X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"]) Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]]) pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map) print_predictions(X_my_sentences, pred) Explanation: Expected Output: <table> <tr> <td> **Train set accuracy** </td> <td> 97.7 </td> </tr> <tr> <td> **Test set accuracy** </td> <td> 85.7 </td> </tr> </table> Random guessing would have had 20% accuracy given that there are 5 classes. This is pretty good performance after training on only 127 examples. In the training set, the algorithm saw the sentence "I love you" with the label ❤️. You can check however that the word "adore" does not appear in the training set. Nonetheless, lets see what happens if you write "I adore you." End of explanation print(Y_test.shape) print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4)) print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True)) plot_confusion_matrix(Y_test, pred_test) Explanation: Amazing! Because adore has a similar embedding as love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work? Note though that it doesn't get "not feeling happy" correct. This algorithm ignores word ordering, so is not good at understanding phrases like "not happy." 
Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class). End of explanation import numpy as np np.random.seed(0) from keras.models import Model from keras.layers import Dense, Input, Dropout, LSTM, Activation from keras.layers.embeddings import Embedding from keras.preprocessing import sequence from keras.initializers import glorot_uniform np.random.seed(1) Explanation: <font color='blue'> What you should remember from this part: - Even with only 127 training examples, you can get a reasonably good model for Emojifying. This is due to the generalization power word vectors give you. - Emojify-V1 will perform poorly on sentences such as "This movie is not good and not enjoyable" because it doesn't understand combinations of words--it just averages all the words' embedding vectors together, without paying attention to the ordering of words. You will build a better algorithm in the next part. 2 - Emojifier-V2: Using LSTMs in Keras: Let's build an LSTM model that takes as input word sequences. This model will be able to take word ordering into account. Emojifier-V2 will continue to use pre-trained word embeddings to represent words, but will feed them into an LSTM, whose job it is to predict the most appropriate emoji. Run the following cell to load the Keras packages. End of explanation # GRADED FUNCTION: sentences_to_indices def sentences_to_indices(X, word_to_index, max_len): Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences. The output shape should be such that it can be given to `Embedding()` (described in Figure 4). Arguments: X -- array of sentences (strings), of shape (m, 1) word_to_index -- a dictionary containing each word mapped to its index max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this. Returns: X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len) m = X.shape[0] # number of training examples ### START CODE HERE ### # Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line) X_indices = np.zeros((m,max_len)) for i in range(m): # loop over training examples # Convert the ith training sentence to lower case and split it into words. You should get a list of words. sentence_words = [w.lower() for w in X[i].split()] # Initialize j to 0 j = 0 # Loop over the words of sentence_words for w in sentence_words: # Set the (i,j)th entry of X_indices to the index of the correct word. X_indices[i, j] = word_to_index[w] # Increment j to j + 1 j = j+1 ### END CODE HERE ### return X_indices Explanation: 2.1 - Overview of the model Here is the Emojifier-v2 you will implement: <img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br> <caption><center> Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier. </center></caption> 2.2 Keras and mini-batching In this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time. 
The common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, if the maximum sequence length is 20, we could pad every sentence with "0"s so that each input sentence is of length 20. Thus, a sentence "i love you" would be represented as $(e_{i}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. In this example, any sentences longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set. 2.3 - The Embedding layer In Keras, the embedding matrix is represented as a "layer", and maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an Embedding() layer in Keras and initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we'll show you how Keras allows you to either train this layer or leave it fixed. The Embedding() layer takes an integer matrix of size (batch size, max input length) as input. This corresponds to sentences converted into lists of indices (integers), as shown in the figure below. <img src="images/embedding1.png" style="width:700px;height:250px;"> <caption><center> Figure 4: Embedding layer. This example shows the propagation of two examples through the embedding layer. Both have been zero-padded to a length of max_len=5. The final dimension of the representation is (2,max_len,50) because the word embeddings we are using are 50 dimensional. </center></caption> The largest integer (i.e. word index) in the input should be no larger than the vocabulary size. The layer outputs an array of shape (batch size, max input length, dimension of word vectors). The first step is to convert all your training sentences into lists of indices, and then zero-pad all these lists so that their length is the length of the longest sentence. Exercise: Implement the function below to convert X (array of sentences as strings) into an array of indices corresponding to words in the sentences. The output shape should be such that it can be given to Embedding() (described in Figure 4). End of explanation X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"]) X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5) print("X1 =", X1) print("X1_indices =", X1_indices) Explanation: Run the following cell to check what sentences_to_indices() does, and check your results. End of explanation # GRADED FUNCTION: pretrained_embedding_layer def pretrained_embedding_layer(word_to_vec_map, word_to_index): Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors. Arguments: word_to_vec_map -- dictionary mapping words to their GloVe vector representation. 
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words) Returns: embedding_layer -- pretrained layer Keras instance vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement) emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50) ### START CODE HERE ### # Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim) emb_matrix = np.zeros((vocab_len, emb_dim)) # Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary for word, index in word_to_index.items(): emb_matrix[index, :] = word_to_vec_map[word] # Define Keras embedding layer with the correct output/input sizes, make it trainable. Use Embedding(...). Make sure to set trainable=False. embedding_layer = Embedding(vocab_len, emb_dim, trainable=False) ### END CODE HERE ### # Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the "None". embedding_layer.build((None,)) # Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained. embedding_layer.set_weights([emb_matrix]) return embedding_layer embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index) print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3]) Explanation: Expected Output: <table> <tr> <td> **X1 =** </td> <td> ['funny lol' 'lets play football' 'food is ready for you'] </td> </tr> <tr> <td> **X1_indices =** </td> <td> [[ 155345. 225122. 0. 0. 0.] <br> [ 220930. 286375. 151266. 0. 0.] <br> [ 151204. 192973. 302254. 151349. 394475.]] </td> </tr> </table> Let's build the Embedding() layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of sentences_to_indices() to it as an input, and the Embedding() layer will return the word embeddings for a sentence. Exercise: Implement pretrained_embedding_layer(). You will need to carry out the following steps: 1. Initialize the embedding matrix as a numpy array of zeroes with the correct shape. 2. Fill in the embedding matrix with all the word embeddings extracted from word_to_vec_map. 3. Define Keras embedding layer. Use Embedding(). Be sure to make this layer non-trainable, by setting trainable = False when calling Embedding(). If you were to set trainable = True, then it will allow the optimization algorithm to modify the values of the word embeddings. 4. Set the embedding weights to be equal to the embedding matrix End of explanation # GRADED FUNCTION: Emojify_V2 def Emojify_V2(input_shape, word_to_vec_map, word_to_index): Function creating the Emojify-v2 model's graph. Arguments: input_shape -- shape of the input, usually (max_len,) word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words) Returns: model -- a model instance in Keras ### START CODE HERE ### # Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices). 
sentence_indices = Input(input_shape, dtype='int32') # Create the embedding layer pretrained with GloVe Vectors (≈1 line) embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index) # Propagate sentence_indices through your embedding layer, you get back the embeddings embeddings = embedding_layer(sentence_indices) # Propagate the embeddings through an LSTM layer with 128-dimensional hidden state # Be careful, the returned output should be a batch of sequences. X = LSTM(128, return_sequences=True)(embeddings) # Add dropout with a probability of 0.5 X = Dropout(0.5)(X) # Propagate X trough another LSTM layer with 128-dimensional hidden state # Be careful, the returned output should be a single hidden state, not a batch of sequences. X = LSTM(128, return_sequences=False)(X) # Add dropout with a probability of 0.5 X = Dropout(0.5)(X) # Propagate X through a Dense layer with softmax activation to get back a batch of 5-dimensional vectors. X = Dense(5)(X) # Add a softmax activation X = Activation('softmax')(X) # Create Model instance which converts sentence_indices into X. model = Model(inputs=sentence_indices ,outputs=X) ### END CODE HERE ### return model Explanation: Expected Output: <table> <tr> <td> **weights[0][1][3] =** </td> <td> -0.3403 </td> </tr> </table> 2.3 Building the Emojifier-V2 Lets now build the Emojifier-V2 model. You will do so using the embedding layer you have built, and feed its output to an LSTM network. <img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br> <caption><center> Figure 3: Emojifier-v2. A 2-layer LSTM sequence classifier. </center></caption> Exercise: Implement Emojify_V2(), which builds a Keras graph of the architecture shown in Figure 3. The model takes as input an array of sentences of shape (m, max_len, ) defined by input_shape. It should output a softmax probability vector of shape (m, C = 5). You may need Input(shape = ..., dtype = '...'), LSTM(), Dropout(), Dense(), and Activation(). End of explanation model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index) model.summary() Explanation: Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture, it uses "20,223,927" parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are. Because our vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001*50 = 20,000,050 non-trainable parameters. End of explanation model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) Explanation: As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics your are want to use. Compile your model using categorical_crossentropy loss, adam optimizer and ['accuracy'] metrics: End of explanation X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen) Y_train_oh = convert_to_one_hot(Y_train, C = 5) Explanation: It's time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors). 
End of explanation model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True) Explanation: Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32. End of explanation X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen) Y_test_oh = convert_to_one_hot(Y_test, C = 5) loss, acc = model.evaluate(X_test_indices, Y_test_oh) print() print("Test accuracy = ", acc) Explanation: Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set. End of explanation # This code allows you to see the mislabelled examples C = 5 y_test_oh = np.eye(C)[Y_test.reshape(-1)] X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen) pred = model.predict(X_test_indices) for i in range(len(X_test)): x = X_test_indices num = np.argmax(pred[i]) if(num != Y_test[i]): print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip()) Explanation: You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples. End of explanation # Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings. x_test = np.array(['not feeling happy']) X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen) print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices)))) Explanation: Now you can try it on your own example. Write your own sentence below. End of explanation
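A side note on the padding step implemented above: Keras ships a helper, pad_sequences (the notebook already imports keras.preprocessing.sequence), that produces an equivalent zero-padded index matrix to the hand-written sentences_to_indices. A minimal sketch, assuming the word_to_index dictionary from the notebook; the function name sentences_to_indices_padded is only for illustration:

```python
from keras.preprocessing.sequence import pad_sequences

def sentences_to_indices_padded(X, word_to_index, max_len):
    # Look up each word's vocabulary index, then zero-pad on the right,
    # matching what sentences_to_indices builds by hand.
    seqs = [[word_to_index[w] for w in sentence.lower().split()] for sentence in X]
    return pad_sequences(seqs, maxlen=max_len, padding='post', truncating='post')
```

padding='post' and truncating='post' are needed to reproduce the notebook's behavior, since pad_sequences pads and truncates on the left by default.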
12,818
Given the following text description, write Python code to implement the functionality described below step by step Description: Access-lists and firewall rules This category of questions allows you to analyze the behavior of access control lists and firewall rules. It also allows you to comprehensively validate (aka verification) that some traffic is or is not allowed. Filter Line Reachability Search Filters Test Filters Find Matching Filter Lines Step1: Filter Line Reachability Returns unreachable lines in filters (ACLs and firewall rules). Finds all lines in the specified filters that will not match any packet, either because of being shadowed by prior lines or because of its match condition being empty. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Examine filters on nodes matching this specifier. | NodeSpec | True | filters | Specifier for filters to test. | FilterSpec | True | ignoreComposites | Whether to ignore filters that are composed of multiple filters defined in the configs. | bool | True | False Invocation Step2: Return Value Name | Description | Type --- | --- | --- Sources | Filter sources | List of str Unreachable_Line | Filter line that cannot be matched (i.e., unreachable) | str Unreachable_Line_Action | Action performed by the unreachable line (e.g., PERMIT or DENY) | str Blocking_Lines | Lines that, when combined, cover the unreachable line | List of str Different_Action | Whether unreachable line has an action different from the blocking line(s) | bool Reason | The reason a line is unreachable | str Additional_Info | Additional information | str Print the first 5 rows of the returned Dataframe Step3: Print the first row of the returned Dataframe Step4: Search Filters Finds flows for which a filter takes a particular behavior. This question searches for flows for which a filter (access control list) has a particular behavior. The behaviors can be Step5: Return Value Name | Description | Type --- | --- | --- Node | Node | str Filter_Name | Filter name | str Flow | Evaluated flow | Flow Action | Outcome | str Line_Content | Line content | str Trace | ACL trace | List of TraceTree Print the first 5 rows of the returned Dataframe Step6: Print the first row of the returned Dataframe Step7: Test Filters Returns how a flow is processed by a filter (ACLs, firewall rules). Shows how the specified flow is processed through the specified filters, returning its permit/deny status as well as the line(s) it matched. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Only examine filters on nodes matching this specifier. | NodeSpec | True | filters | Only consider filters that match this specifier. | FilterSpec | True | headers | Packet header constraints. | HeaderConstraints | False | startLocation | Location to start tracing from. | LocationSpec | True | Invocation Step8: Return Value Name | Description | Type --- | --- | --- Node | Node | str Filter_Name | Filter name | str Flow | Evaluated flow | Flow Action | Outcome | str Line_Content | Line content | str Trace | ACL trace | List of TraceTree Print the first 5 rows of the returned Dataframe Step9: Print the first row of the returned Dataframe Step10: Find Matching Filter Lines Returns lines in filters (ACLs and firewall rules) that match any packet within the specified header constraints. Finds all lines in the specified filters that match any packet within the specified header constraints. 
Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Examine filters on nodes matching this specifier. | NodeSpec | True | filters | Specifier for filters to check. | FilterSpec | True | headers | Packet header constraints for which to find matching filter lines. | HeaderConstraints | True | action | Show filter lines with this action. By default returns lines with either action. | str | True | ignoreComposites | Whether to ignore filters that are composed of multiple filters defined in the configs. | bool | True | False Invocation Step11: Return Value Name | Description | Type --- | --- | --- Node | Node | str Filter | Filter name | str Line | Line text | str Line_Index | Index of line | int Action | Action performed by the line (e.g., PERMIT or DENY) | str Print the first 5 rows of the returned Dataframe Step12: Print the first row of the returned Dataframe
Python Code: bf.set_network('generate_questions') bf.set_snapshot('generate_questions') Explanation: Access-lists and firewall rules This category of questions allows you to analyze the behavior of access control lists and firewall rules. It also allows you to comprehensively validate (aka verification) that some traffic is or is not allowed. Filter Line Reachability Search Filters Test Filters Find Matching Filter Lines End of explanation result = bf.q.filterLineReachability().answer().frame() Explanation: Filter Line Reachability Returns unreachable lines in filters (ACLs and firewall rules). Finds all lines in the specified filters that will not match any packet, either because of being shadowed by prior lines or because of its match condition being empty. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Examine filters on nodes matching this specifier. | NodeSpec | True | filters | Specifier for filters to test. | FilterSpec | True | ignoreComposites | Whether to ignore filters that are composed of multiple filters defined in the configs. | bool | True | False Invocation End of explanation result.head(5) Explanation: Return Value Name | Description | Type --- | --- | --- Sources | Filter sources | List of str Unreachable_Line | Filter line that cannot be matched (i.e., unreachable) | str Unreachable_Line_Action | Action performed by the unreachable line (e.g., PERMIT or DENY) | str Blocking_Lines | Lines that, when combined, cover the unreachable line | List of str Different_Action | Whether unreachable line has an action different from the blocking line(s) | bool Reason | The reason a line is unreachable | str Additional_Info | Additional information | str Print the first 5 rows of the returned Dataframe End of explanation result.iloc[0] bf.set_network('generate_questions') bf.set_snapshot('filters') Explanation: Print the first row of the returned Dataframe End of explanation result = bf.q.searchFilters(headers=HeaderConstraints(srcIps='10.10.10.0/24', dstIps='218.8.104.58', applications = ['dns']), action='deny', filters='acl_in').answer().frame() Explanation: Search Filters Finds flows for which a filter takes a particular behavior. This question searches for flows for which a filter (access control list) has a particular behavior. The behaviors can be: that the filter permits the flow (permit), that it denies the flow (deny), or that the flow is matched by a particular line (matchLine &lt;lineNumber&gt;). Filters are selected using node and filter specifiers, which might match multiple filters. In this case, a (possibly different) flow will be found for each filter. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Only evaluate filters present on nodes matching this specifier. | NodeSpec | True | filters | Only evaluate filters that match this specifier. | FilterSpec | True | headers | Packet header constraints on the flows being searched. | HeaderConstraints | True | action | The behavior that you want evaluated. Specify exactly one of permit, deny, or matchLine &lt;line number&gt;. | str | True | startLocation | Only consider specified locations as possible sources. | LocationSpec | True | invertSearch | Search for packet headers outside the specified headerspace, rather than inside the space. 
| bool | True | Invocation End of explanation result.head(5) Explanation: Return Value Name | Description | Type --- | --- | --- Node | Node | str Filter_Name | Filter name | str Flow | Evaluated flow | Flow Action | Outcome | str Line_Content | Line content | str Trace | ACL trace | List of TraceTree Print the first 5 rows of the returned Dataframe End of explanation result.iloc[0] bf.set_network('generate_questions') bf.set_snapshot('filters') Explanation: Print the first row of the returned Dataframe End of explanation result = bf.q.testFilters(headers=HeaderConstraints(srcIps='10.10.10.1', dstIps='218.8.104.58', applications = ['dns']), nodes='rtr-with-acl', filters='acl_in').answer().frame() Explanation: Test Filters Returns how a flow is processed by a filter (ACLs, firewall rules). Shows how the specified flow is processed through the specified filters, returning its permit/deny status as well as the line(s) it matched. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Only examine filters on nodes matching this specifier. | NodeSpec | True | filters | Only consider filters that match this specifier. | FilterSpec | True | headers | Packet header constraints. | HeaderConstraints | False | startLocation | Location to start tracing from. | LocationSpec | True | Invocation End of explanation result.head(5) Explanation: Return Value Name | Description | Type --- | --- | --- Node | Node | str Filter_Name | Filter name | str Flow | Evaluated flow | Flow Action | Outcome | str Line_Content | Line content | str Trace | ACL trace | List of TraceTree Print the first 5 rows of the returned Dataframe End of explanation result.iloc[0] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') Explanation: Print the first row of the returned Dataframe End of explanation result = bf.q.findMatchingFilterLines(headers=HeaderConstraints(applications='DNS')).answer().frame() Explanation: Find Matching Filter Lines Returns lines in filters (ACLs and firewall rules) that match any packet within the specified header constraints. Finds all lines in the specified filters that match any packet within the specified header constraints. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Examine filters on nodes matching this specifier. | NodeSpec | True | filters | Specifier for filters to check. | FilterSpec | True | headers | Packet header constraints for which to find matching filter lines. | HeaderConstraints | True | action | Show filter lines with this action. By default returns lines with either action. | str | True | ignoreComposites | Whether to ignore filters that are composed of multiple filters defined in the configs. | bool | True | False Invocation End of explanation result.head(5) Explanation: Return Value Name | Description | Type --- | --- | --- Node | Node | str Filter | Filter name | str Line | Line text | str Line_Index | Index of line | int Action | Action performed by the line (e.g., PERMIT or DENY) | str Print the first 5 rows of the returned Dataframe End of explanation result.iloc[0] Explanation: Print the first row of the returned Dataframe End of explanation
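A common way to use these answers in practice is as assertions in a network test suite: each question returns a pandas DataFrame, so checking a policy is just checking the frame. A small sketch that builds on the searchFilters call shown earlier (the subnet and filter name are the same illustrative values used in this notebook):

```python
# searchFilters returns one row per matched filter that can exhibit the behavior,
# so an empty frame means no DNS flow from 10.10.10.0/24 can be denied by acl_in.
denied = bf.q.searchFilters(
    headers=HeaderConstraints(srcIps='10.10.10.0/24', applications=['dns']),
    action='deny',
    filters='acl_in').answer().frame()
assert denied.empty, 'acl_in can deny DNS traffic from 10.10.10.0/24:\n{}'.format(denied)
```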
12,819
Given the following text description, write Python code to implement the functionality described below step by step Description: Publication ready figures with matplotlib and Jupyter notebook A very convenient workflow to analyze data and create figures that can be used in various ways for publication is to use the IPython Notebook or Jupyer notebook in combination with matplotlib. I faced the problem that one often needs different file formats for different kind of publications, such as on a webpage or in a paper. For instance, to put the figure on a webpage, most softwares support only png or jpg formats, so that a fixed resolution must be provided. On the other hand, a scalable figure format can be scaled as needed and when putting it into a pdf document, there won't be artifacts when zooming into the figure. In this blog post, I'll provide a small function that saves the matplotlib figure to various file formats, which can then be used where needed. Creating a simple plot A simple plot can be created within an ipython notebook with Step1: Creating a quatratic plot Step3: Save the figure The previous figure can be saved with calling matplotlib.pyplot.savefig and matplotlib will save the figure in the output format based on the extension of the filename. To save to various formats, one would need to call this function several times or instead define a new function that can be included as boilerplate in the first cell of a notebook such as Step4: And it can be easily saved with
Python Code: %matplotlib inline import seaborn as snb import numpy as np import matplotlib.pyplot as plt Explanation: Publication ready figures with matplotlib and Jupyter notebook A very convenient workflow to analyze data and create figures that can be used in various ways for publication is to use the IPython Notebook or Jupyter notebook in combination with matplotlib. I faced the problem that one often needs different file formats for different kinds of publications, such as on a webpage or in a paper. For instance, to put the figure on a webpage, most software supports only png or jpg formats, so that a fixed resolution must be provided. On the other hand, a scalable figure format can be scaled as needed and when putting it into a pdf document, there won't be artifacts when zooming into the figure. In this blog post, I'll provide a small function that saves the matplotlib figure to various file formats, which can then be used where needed. Creating a simple plot A simple plot can be created within an ipython notebook with: Loading matplotlib and setting up ipython notebook to display the graphics inline: End of explanation def create_plot(): x = np.arange(0.0, 10.0, 0.1) plt.plot(x, x**2) plt.xlabel("$x$") plt.ylabel("$y=x^2$") create_plot() plt.show() Explanation: Creating a quadratic plot: End of explanation def save_to_file(filename, fig=None): Save to @filename with a custom set of file formats. By default, this function takes the most recent figure, but a @fig can also be passed to this function as an argument. formats = [ "pdf", "eps", "png", "pgf", ] if fig is None: for form in formats: plt.savefig("%s.%s"%(filename, form)) else: for form in formats: fig.savefig("%s.%s"%(filename, form)) Explanation: Save the figure The previous figure can be saved by calling matplotlib.pyplot.savefig and matplotlib will save the figure in the output format based on the extension of the filename. To save to various formats, one would need to call this function several times or instead define a new function that can be included as boilerplate in the first cell of a notebook such as: End of explanation create_plot() save_to_file("simple_plot") Explanation: And it can be easily saved with: End of explanation
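If the raster outputs also need a specific resolution and tight margins, the same boilerplate can pass extra keyword arguments through to savefig. dpi and bbox_inches are standard matplotlib options; the defaults chosen below are only an example:

```python
import matplotlib.pyplot as plt

def save_to_file_hires(filename, fig=None, formats=("pdf", "eps", "png", "pgf"), dpi=300):
    # Same idea as save_to_file above, but pin the raster resolution and trim whitespace.
    target = plt if fig is None else fig
    for form in formats:
        target.savefig("%s.%s" % (filename, form), dpi=dpi, bbox_inches="tight")
```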
12,820
Given the following text description, write Python code to implement the functionality described below step by step Description: Kalman Filters By Evgenia "Jenny" Nitishinskaya, Dr. Aidan O'Mahony, and Delaney Granizo-Mackenzie. Algorithms by David Edwards. Kalman Filter Beta Estimation Example from Dr. Aidan O'Mahony's blog. Part of the Quantopian Lecture Series Step1: Toy example Step2: At each point in time we plot the state estimate <i>after</i> accounting for the most recent measurement, which is why we are not at position 30 at time 0. The filter's attentiveness to the measurements allows it to correct for the initial bogus state we gave it. Then, by weighing its model and knowledge of the physical laws against new measurements, it is able to filter out much of the noise in the camera data. Meanwhile the confidence in the estimate increases with time, as shown by the graph below Step3: The Kalman filter can also do <i>smoothing</i>, which takes in all of the input data at once and then constructs its best guess for the state of the system in each period post factum. That is, it does not provide online, running estimates, but instead uses all of the data to estimate the historical state, which is useful if we only want to use the data after we have collected all of it. Step4: Example Step5: This is a little hard to see, so we'll plot a subsection of the graph.
Python Code: %pylab inline # Import a Kalman filter and other useful libraries from pykalman import KalmanFilter import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy import poly1d Explanation: Kalman Filters By Evgenia "Jenny" Nitishinskaya, Dr. Aidan O'Mahony, and Delaney Granizo-Mackenzie. Algorithms by David Edwards. Kalman Filter Beta Estimation Example from Dr. Aidan O'Mahony's blog. Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public Notebook released under the Creative Commons Attribution 4.0 License. What is a Kalman Filter? The Kalman filter is an algorithm that uses noisy observations of a system over time to estimate the parameters of the system (some of which are unobservable) and predict future observations. At each time step, it makes a prediction, takes in a measurement, and updates itself based on how the prediction and measurement compare. The algorithm is as follows: 1. Take as input a mathematical model of the system, i.e. * the transition matrix, which tells us how the system evolves from one state to another. For instance, if we are modeling the movement of a car, then the next values of position and velocity can be computed from the previous ones using kinematic equations. Alternatively, if we have a system which is fairly stable, we might model its evolution as a random walk. If you want to read up on Kalman filters, note that this matrix is usually called $A$. * the observation matrix, which tells us the next measurement we should expect given the predicted next state. If we are measuring the position of the car, we just extract the position values stored in the state. For a more complex example, consider estimating a linear regression model for the data. Then our state is the coefficients of the model, and we can predict the next measurement from the linear equation. This is denoted $H$. * any control factors that affect the state transitions but are not part of the measurements. For instance, if our car were falling, gravity would be a control factor. If the noise does not have mean 0, it should be shifted over and the offset put into the control factors. The control factors are summarized in a matrix $B$ with time-varying control vector $u_t$, which give the offset $Bu_t$. * covariance matrices of the transition noise (i.e. noise in the evolution of the system) and measurement noise, denoted $Q$ and $R$, respectively. 2. Take as input an initial estimate of the state of the system and the error of the estimate, $\mu_0$ and $\sigma_0$. 3. At each timestep: * estimate the current state of the system $x_t$ using the transition matrix * take as input new measurements $z_t$ * use the conditional probability of the measurements given the state, taking into account the uncertainties of the measurement and the state estimate, to update the estimated current state of the system $x_t$ and the covariance matrix of the estimate $P_t$ This graphic illustrates the procedure followed by the algorithm. It's very important for the algorithm to keep track of the covariances of its estimates. This way, it can give us a more nuanced result than simply a point value when we ask for it, and it can use its confidence to decide how much to be influenced by new measurements during the update process. The more certain it is of its estimate of the state, the more skeptical it will be of measurements that disagree with the state. 
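To make the predict/update cycle described above concrete, here is a minimal numpy sketch of a single step of the filter, using the $A$, $B$, $H$, $Q$, $R$ notation from the text. pykalman performs an equivalent computation internally, so this is for illustration rather than something the notebook needs:

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R, B=None, u=None):
    # Predict the next state and its covariance from the model.
    x_pred = A @ x if B is None else A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update: weigh the new measurement z against the prediction.
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain: how much to trust z
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new
```

The gain K is where the "skepticism" lives: a confident state estimate (small P_pred) or a noisy sensor (large R) both shrink K, so the new measurement moves the estimate less.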
By default, the errors are assumed to be normally distributed, and this assumption allows the algorithm to calculate precise confidence intervals. It can, however, be implemented for non-normal errors. End of explanation tau = 0.1 # Set up the filter kf = KalmanFilter(n_dim_obs=1, n_dim_state=2, # position is 1-dimensional, (x,v) is 2-dimensional initial_state_mean=[30,10], initial_state_covariance=np.eye(2), transition_matrices=[[1,tau], [0,1]], observation_matrices=[[1,0]], observation_covariance=3, transition_covariance=np.zeros((2,2)), transition_offsets=[-4.9*tau**2, -9.8*tau]) # Create a simulation of a ball falling for 40 units of time (each of length tau) times = np.arange(40) actual = -4.9*tau**2*times**2 # Simulate the noisy camera data sim = actual + 3*np.random.randn(40) # Run filter on camera data state_means, state_covs = kf.filter(sim) plt.plot(times, state_means[:,0]) plt.plot(times, sim) plt.plot(times, actual) plt.legend(['Filter estimate', 'Camera data', 'Actual']) plt.xlabel('Time') plt.ylabel('Height'); print times print state_means[:,0] Explanation: Toy example: falling ball Imagine we have a falling ball whose motion we are tracking with a camera. The state of the ball consists of its position and velocity. We know that we have the relationship $x_t = x_{t-1} + v_{t-1}\tau - \frac{1}{2} g \tau^2$, where $\tau$ is the time (in seconds) elapsed between $t-1$ and $t$ and $g$ is gravitational acceleration. Meanwhile, our camera can tell us the position of the ball every second, but we know from the manufacturer that the camera accuracy, translated into the position of the ball, implies variance in the position estimate of about 3 meters. In order to use a Kalman filter, we need to give it transition and observation matrices, transition and observation covariance matrices, and the initial state. The state of the system is (position, velocity), so it follows the transition matrix $$ \left( \begin{array}{cc} 1 & \tau \ 0 & 1 \end{array} \right) $$ with offset $(-\tau^2 \cdot g/2, -\tau\cdot g)$. The observation matrix just extracts the position coordinate, (1 0), since we are measuring position. We know that the observation variance is 1, and transition covariance is 0 since we will be simulating the data the same way we specified our model. For the inital state, let's feed our model something bogus like (30, 10) and see how our system evolves. End of explanation # Plot variances of x and v, extracting the appropriate values from the covariance matrix plt.plot(times, state_covs[:,0,0]) plt.plot(times, state_covs[:,1,1]) plt.legend(['Var(x)', 'Var(v)']) plt.ylabel('Variance') plt.xlabel('Time'); Explanation: At each point in time we plot the state estimate <i>after</i> accounting for the most recent measurement, which is why we are not at position 30 at time 0. The filter's attentiveness to the measurements allows it to correct for the initial bogus state we gave it. Then, by weighing its model and knowledge of the physical laws against new measurements, it is able to filter out much of the noise in the camera data. 
Meanwhile the confidence in the estimate increases with time, as shown by the graph below: End of explanation # Use smoothing to estimate what the state of the system has been smoothed_state_means, _ = kf.smooth(sim) # Plot results plt.plot(times, smoothed_state_means[:,0]) plt.plot(times, sim) plt.plot(times, actual) plt.legend(['Smoothed estimate', 'Camera data', 'Actual']) plt.xlabel('Time') plt.ylabel('Height'); Explanation: The Kalman filter can also do <i>smoothing</i>, which takes in all of the input data at once and then constructs its best guess for the state of the system in each period post factum. That is, it does not provide online, running estimates, but instead uses all of the data to estimate the historical state, which is useful if we only want to use the data after we have collected all of it. End of explanation df = pd.read_csv("../data/ChungCheonDC/CompositeETCdata.csv") df_DC = pd.read_csv("../data/ChungCheonDC/CompositeDCdata.csv") df_DCstd = pd.read_csv("../data/ChungCheonDC/CompositeDCstddata.csv") ax1 = plt.subplot(111) ax1_1 = ax1.twinx() df.plot(figsize=(12,3), x='date', y='reservoirH', ax=ax1_1, color='k', linestyle='-', lw=2) print df.reservoirH[5:100] # Load pricing data for a security # start = '2013-01-01' # end = '2015-01-01' #x = get_pricing('reservoirH', fields='price', start_date=start, end_date=end) x= df.reservoirH # Construct a Kalman filter kf = KalmanFilter(transition_matrices = [1], observation_matrices = [1], initial_state_mean = 39.3, initial_state_covariance = 1, observation_covariance=1, transition_covariance=.01) # Use the observed values of the price to get a rolling mean state_means, _ = kf.filter(x.values) # Compute the rolling mean with various lookback windows mean10 = pd.rolling_mean(x, 10) mean20 = pd.rolling_mean(x, 20) mean30 = pd.rolling_mean(x, 30) # Plot original data and estimated mean plt.plot(state_means) plt.plot(x, 'k.', ms=2) plt.plot(mean10) plt.plot(mean20) plt.plot(mean30) plt.title('Kalman filter estimate of average') plt.legend(['Kalman Estimate', 'Reseroir H', '30-day Moving Average', '60-day Moving Average','90-day Moving Average']) plt.xlabel('Day') plt.ylabel('Reservoir Level'); plt.plot(state_means) plt.plot(x) plt.title('Kalman filter estimate of average') plt.legend(['Kalman Estimate', 'Reseroir H']) plt.xlabel('Day') plt.ylabel('Reservoir Level'); Explanation: Example: moving average Because the Kalman filter updates its estimates at every time step and tends to weigh recent observations more than older ones, a particularly useful application is estimation of rolling parameters of the data. When using a Kalman filter, there's no window length that we need to specify. This is useful for computing the moving average if that's what we are interested in, or for smoothing out estimates of other quantities. For instance, if we have already computed the moving Sharpe ratio, we can smooth it using a Kalman filter. Below, we'll use both a Kalman filter and an n-day moving average to estimate the rolling mean of a dataset. We hope that the mean describes our observations well, so it shouldn't change too much when we add an observation; therefore, we assume that it evolves as a random walk with a small error term. The mean is the model's guess for the mean of the distribution from which measurements are drawn, so our prediction of the next value is simply equal to our estimate of the mean. We assume that the observations have variance 1 around the rolling mean, for lack of a better estimate. 
Our initial guess for the mean is 0, but the filter quickly realizes that that is incorrect and adjusts. End of explanation plt.plot(state_means[-400:]) plt.plot(x[-400:]) plt.plot(mean10[-400:]) plt.title('Kalman filter estimate of average') plt.legend(['Kalman Estimate', 'Reseroir H', '10-day Moving Average']) plt.xlabel('Day') plt.ylabel('Reservoir Level'); # Load pricing data for a security # start = '2013-01-01' # end = '2015-01-01' #x = get_pricing('reservoirH', fields='price', start_date=start, end_date=end) xH= df.upperH_med # Construct a Kalman filter kf = KalmanFilter(transition_matrices = [1], observation_matrices = [1], initial_state_mean = 35.5, initial_state_covariance = 1, observation_covariance=1, transition_covariance=.01) # Use the observed values of the price to get a rolling mean state_means, _ = kf.filter(xH.values) # Compute the rolling mean with various lookback windows mean10 = pd.rolling_mean(xH, 10) mean20 = pd.rolling_mean(xH, 20) mean30 = pd.rolling_mean(xH, 30) # Plot original data and estimated mean plt.plot(state_means) plt.plot(xH) plt.plot(mean10) plt.plot(mean20) plt.plot(mean30) plt.title('Kalman filter estimate of average') plt.legend(['Kalman Estimate', 'upperH_med', '10-day Moving Average', '20-day Moving Average','30-day Moving Average']) plt.xlabel('Day') plt.ylabel('upperH_med'); txrxID = df_DC.keys()[1:-1] xmasking = lambda x: np.ma.masked_where(np.isnan(x.values), x.values) x= df_DC[txrxID[118]] # Masking array having NaN xm = xmasking(x) # Construct a Kalman filter kf = KalmanFilter(transition_matrices = [1], observation_matrices = [1], initial_state_mean = 75.3, initial_state_covariance = 1, observation_covariance=1, transition_covariance=.01) # Use the observed values of the price to get a rolling mean state_means, _ = kf.filter(xm) print x plt.plot(df_DC[txrxID[118]]) plt.plot(state_means) plt.legend(['Resistivity', 'state_means']) upperH_med = xmasking(df.upperH_med) state_means, _ = kf.filter(upperH_med) plt.plot(df.upperH_med) plt.plot(state_means) # plt.plot(xH) # plt.plot(mean10) plt.title('Kalman filter estimate of average') plt.legend(['Kalman Estimate', 'Reseroir H','10-day Moving Average']) plt.xlabel('Day') plt.ylabel('upperH_med'); Explanation: This is a little hard to see, so we'll plot a subsection of the graph. End of explanation
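One way to interpret the rolling-mean example above: with transition and observation matrices equal to 1 and fixed covariances, the Kalman gain converges to a constant, and from then on the estimate is an exponentially weighted average of the observations. A sketch of that steady-state calculation for the transition_covariance=.01, observation_covariance=1 values used above (scalar case only):

```python
import numpy as np

Q, R = 0.01, 1.0                              # transition and observation covariances from above
P = (-Q + np.sqrt(Q**2 + 4*Q*R)) / 2          # steady-state posterior variance (scalar Riccati equation)
K = (P + Q) / (P + Q + R)                     # steady-state Kalman gain

def steady_state_estimate(observations, gain, x0=0.0):
    # With a constant gain, each update is x <- x + gain * (z - x): exponential smoothing.
    x, out = x0, []
    for z in observations:
        x = x + gain * (z - x)
        out.append(x)
    return np.array(out)
```

With these numbers the gain comes out near 0.095, i.e. each new observation moves the estimate by roughly 10% of the innovation, which is why no window length has to be chosen by hand.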
12,821
Given the following text description, write Python code to implement the functionality described below step by step Description: Convolutional Cubic Splines $C^2$-continuous cubic splines through evenly spaced data points can be created by convolving the data points with a $C^2$-continuous piecewise cubic kernel, characterized as follows Step1: The Kernel The kernel decays quickly around $x=0$, which is why cubic splines suffer from very little "ringing" -- moving one point doesn't significantly affect the curve at points far away.
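To make the rapid decay concrete: the slope prescribed at integer offsets has magnitude $3|\sqrt3-2|^{|x|}$, so it shrinks by a factor of roughly 0.27 per data point (with alternating sign) and is below one percent of the x = 1 value within about five points. A quick illustrative check:

import math

decay = math.sqrt(3) - 2            # about -0.268, the per-step factor
for x in range(1, 7):
    print(x, 3 * decay ** x)        # prescribed slope at offsets 1..6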
Python Code: import math #given an array of Y values at consecutive integral x abscissas, #return array of corresponding derivatives to make a natural cubic spline def naturalSpline(ys): vs = [0.0] * len(ys) if (len(ys) < 2): return vs DECAY = math.sqrt(3)-2; endi = len(ys)-1 # make convolutional spline S = 0.0;E = 0.0 for i in range(len(Y)): vs[i]+=S;vs[endi-i]+=E; S=(S+3.0*ys[i])*DECAY; E=(E-3.0*ys[endi-i])*DECAY; #Natural Boundaries S2 = 6.0*(ys[1]-ys[0]) - 4.0*vs[0] - 2.0*vs[1] E2 = 6.0*(ys[endi-1]-ys[endi]) + 4.0*vs[endi] + 2.0*vs[endi-1] # A = dE2/dE = -dS2/dS, B = dE2/dS = -dS2/dS A = 4.0+2.0*DECAY B = (4.0*DECAY+2.0)*(DECAY**(len(ys)-2)) DEN = A*A - B*B S = (A*S2 + B*E2) / DEN E = (-A*E2 - B*S2) / DEN for i in range(len(ys)): vs[i]+=S;vs[endi-i]+=E S*=DECAY;E*=DECAY return vs # #Plot a different natural spline, along with its 1st and 2nd derivatives, each time you run this # %run plothelp.py %matplotlib inline import random import numpy Y = [random.random()*10.0+2 for _ in range(5)] V = naturalSpline(Y) xs = numpy.linspace(0,len(Y)-1, 1000) plt.figure(0, figsize=(12.0,4.0)) plt.plot(xs,[hermite_interp(Y,V,x) for x in xs]) plt.plot(range(0,len(Y)),[Y[x] for x in range(0,len(Y))], "bo") plt.figure(1, figsize=(12.0,4.0));plt.grid(True) plt.plot(xs,[hermite_interp1(Y,V,x) for x in xs]) plt.plot(xs,[hermite_interp2(Y,V,x) for x in xs]) Explanation: Convolutional Cubic Splines $C^2$-continuous cubic splines through evenly spaced data points can be created by convolving the data points with a $C^2$-continuous piecewise cubic kernel, characterized as follows: \begin{split} y(0) &= 1\ y(x) &= 0,\text{ for all integer }x \neq 0\ y'(x) &= sgn(x) * 3(\sqrt3-2)^{|x|},\text{ for all integer }x\ \end{split} which implies $$y''(x) = -6\sqrt 3(\sqrt3-2)^{|x|},\text{ for all integer }x \neq 0$$ The double-sided exponential allows the convolution to be performed extremely efficiently. Any of the standard boundary conditions can then be applied by adjusting the start and end derivatives appropritately, and propagating the change to the rest of the curve. The following function calculates "natural" cubic splines (with $y'' = 0$ at start and end) using this technique, which is much easiser than the way everyone is taught! Use it however you like. Cheers, Matt Timmermans End of explanation # # Plot the kernel # DECAY = math.sqrt(3)-2; vs = [3*(DECAY**x) for x in range(1,7)] ys = [0]*len(vs) + [1] + [0]*len(vs) vs = [-v for v in vs[::-1]] + [0.0] + vs xs = numpy.linspace(0,len(ys)-1, 1000) plt.figure(0, figsize=(12.0,4.0));plt.grid(True);plt.ylim([-0.2,1.1]);plt.xticks(range(-5,6)) plt.plot([x-6.0 for x in xs],[hermite_interp(ys,vs,x) for x in xs]) Explanation: The Kernel The kernel decays quickly around $x=0$, which is why cubic splines suffer from very little "ringing" -- moving one point doesn't significantly affect the curve at points far away. End of explanation
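The reason this convolution really produces a $C^2$ spline can be verified directly: at every interior node the prescribed slopes satisfy the classical continuity relation $v_{i-1} + 4v_i + v_{i+1} = 3(y_{i+1} - y_{i-1})$, which at the node next to the impulse reduces to $\lambda^2 + 4\lambda + 1 = 0$, the equation solved by $\sqrt3-2$. A small standalone check (illustrative only, using unit-impulse data):

import math

lam = math.sqrt(3) - 2
n = 6
xs = list(range(-n, n + 1))
ys = [1.0 if x == 0 else 0.0 for x in xs]                                        # unit impulse data
vs = [0.0 if x == 0 else math.copysign(1, x) * 3 * lam ** abs(x) for x in xs]    # kernel slopes

for i in range(1, len(xs) - 1):
    lhs = vs[i - 1] + 4 * vs[i] + vs[i + 1]
    rhs = 3 * (ys[i + 1] - ys[i - 1])
    assert abs(lhs - rhs) < 1e-12
print('C2 continuity relation holds at every interior node')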
12,822
Given the following text description, write Python code to implement the functionality described below step by step Description: Machine Learning Engineer Nanodegree Introduction and Foundations Project 0 Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i]. To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. Think Step5: Tip Step6: Question 1 Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived? Hint Step7: Answer Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction Step10: Question 2 How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive? Hint Step11: Answer Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction Step14: Question 3 How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived? Hint Step15: Answer Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. Hint Step18: Question 4 Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions? Hint
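Because survival_stats only draws bar charts, it can help to have the same breakdowns as numbers; a possible pandas sketch (it assumes the full_data frame loaded in the first code cell below, and the age bands are an arbitrary illustrative choice):

import pandas as pd

print(full_data.groupby('Sex')['Survived'].mean())          # survival rate by sex

males = full_data[full_data['Sex'] == 'male'].copy()
males['AgeBand'] = pd.cut(males['Age'], bins=[0, 10, 30, 60, 100])
print(males.groupby('AgeBand')['Survived'].mean())          # survival rate of males by age band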
Python Code: import numpy as np import pandas as pd # RMS Titanic data visualization code from titanic_visualizations import survival_stats from IPython.display import display %matplotlib inline # Load the dataset in_file = 'titanic_data.csv' full_data = pd.read_csv(in_file) # Print the first few entries of the RMS Titanic data display(full_data.head()) Explanation: Machine Learning Engineer Nanodegree Introduction and Foundations Project 0: Titanic Survival Exploration In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions. Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. Getting Started To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame. Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function. Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML. End of explanation # Store the 'Survived' feature in a new variable and remove it from the dataset outcomes = full_data['Survived'] data = full_data.drop('Survived', axis = 1) # Show the new dataset with 'Survived' removed display(data.head()) Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship: - Survived: Outcome of survival (0 = No; 1 = Yes) - Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class) - Name: Name of passenger - Sex: Sex of the passenger - Age: Age of the passenger (Some entries contain NaN) - SibSp: Number of siblings and spouses of the passenger aboard - Parch: Number of parents and children of the passenger aboard - Ticket: Ticket number of the passenger - Fare: Fare paid by the passenger - Cabin Cabin number of the passenger (Some entries contain NaN) - Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton) Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets. Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes. End of explanation def accuracy_score(truth, pred): Returns accuracy score for input truth and predictions. 
# Ensure that the number of predictions matches number of outcomes if len(truth) == len(pred): # Calculate and return the accuracy as a percent return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100) else: return "Number of predictions does not match number of outcomes!" # Test the 'accuracy_score' function predictions = pd.Series(np.ones(5, dtype = int)) print accuracy_score(outcomes[:5], predictions) Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i]. To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be? End of explanation def predictions_0(data): Model with no features. Always predicts a passenger did not survive. predictions = [] for _, passenger in data.iterrows(): # Predict the survival of 'passenger' predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_0(data) Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off. Making Predictions If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking. The predictions_0 function below will always predict that a passenger did not survive. End of explanation print accuracy_score(outcomes, predictions) Explanation: Question 1 Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived? Hint: Run the code cell below to see the accuracy of this prediction. End of explanation survival_stats(data, outcomes, 'Sex') Explanation: Answer: 61.62% Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across. Run the code cell below to plot the survival outcomes of passengers based on their sex. End of explanation def predictions_1(data): Model with one feature: - Predict a passenger survived if they are female. 
predictions = [] for _, passenger in data.iterrows(): res = 1 if passenger['Sex'] != 'female': res = 0 predictions.append(res) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_1(data) Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive. Fill in the missing code below so that the function will make this prediction. Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger. End of explanation print accuracy_score(outcomes, predictions) Explanation: Question 2 How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive? Hint: Run the code cell below to see the accuracy of this prediction. End of explanation survival_stats(data, outcomes, 'Age', ["Sex == 'male'"]) Explanation: Answer: 78.68% Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included. Run the code cell below to plot the survival outcomes of male passengers based on their age. End of explanation def predictions_2(data): Model with two features: - Predict a passenger survived if they are female. - Predict a passenger survived if they are male and younger than 10. predictions = [] for _, passenger in data.iterrows(): res = 0 if passenger['Sex'] == 'female': res = 1 elif passenger['Sex'] == 'male' and passenger['Age'] < 10: res = 1 predictions.append(res) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_2(data) Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive. Fill in the missing code below so that the function will make this prediction. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1. End of explanation print accuracy_score(outcomes, predictions) Explanation: Question 3 How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived? Hint: Run the code cell below to see the accuracy of this prediction. End of explanation survival_stats(data, outcomes, 'SibSp', ["Sex == 'female'", "Pclass > 2", "SibSp < 1"]) Explanation: Answer: Replace this text with the prediction accuracy you found above. 
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. Pclass, Sex, Age, SibSp, and Parch are some suggested features to try. Use the survival_stats function below to to examine various survival statistics. Hint: To use mulitple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age &lt; 18"] End of explanation def predictions_3(data): Model with multiple features. Makes a prediction with an accuracy of at least 80%. predictions = [] for _, passenger in data.iterrows(): ''' The 'Sex' variable has a main influence in survival rate. Into the 'Sex' variable, there is differentes influences to each value. To 'female' passengers, the 'Pclass' and 'SibSp' variable has better influence in survival. In case of 'male', the influence came from 'Fare' and 'Age', accordly the research. At least where I researched In resume, the prediction follow this flow: female -> (Pclass, SibSp) male -> (Fare, Age) ''' res = 0 if passenger['Sex'] == 'female': # For females, the Pclass has a lot of survival information if passenger['Pclass'] in (1,2): res = 1 # for female in Pclass 3, those with SibSp = 0, there is a large chance of survival elif passenger['SibSp'] == 0: res = 1 # For male, the variable fare has influence in survival if passenger['Sex'] == 'male': fare = passenger['Fare'] # Age if passenger["Age"] < 10: res = 1 # Fare variable has relation with survival rate in male elif fare >= 120 and fare < 200: res = 1 predictions.append(res) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_3(data) Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2. End of explanation print accuracy_score(outcomes, predictions) Explanation: Question 4 Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions? Hint: Run the code cell below to see the accuracy of your predictions. End of explanation
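The prediction functions above loop over rows for readability; the same decision rules can also be written as vectorized boolean masks, which is shorter and easier to check against the plots. A possible sketch of the two-feature model in that style (it assumes the data and outcomes objects from the cells above):

def predictions_2_vectorized(data):
    survived = (data['Sex'] == 'female') | ((data['Sex'] == 'male') & (data['Age'] < 10))
    return survived.astype(int)

pred = predictions_2_vectorized(data)
print((outcomes == pred).mean())   # same proportion-correct measure that accuracy_score reports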
12,823
Given the following text description, write Python code to implement the functionality described below step by step Description: Pandas Sometimes you want a spreadsheet. Starting point Step1: Write a piece of code that prints the number 5, taken from some_dict. Step2: A pandas dataframe is a dictionary of dictionaries on steroids. (Note, its said "pan-dis" not "pandas" like the cute bears). We import it in the cell below. It's almost always imported this way (import pandas as pd) just like numpy is almost always imported import numpy as np. Step3: Getting values A pandas dataframe organizes data with named columns and named rows Step4: Predict What will this cell print out when run? Step5: Summarize How does loc work? Predict What will this cell print out when run? Step6: Summarize How does iloc work? Modify Change the cell below so it prints out all of column "y" (hint Step7: Predict what this will return Step8: Modify Change this cell so it prints out column "x" and rows "b" and "c". Step9: Summarize How does loc slicing work for a dataframe? loc slicing works using specified lists of named rows and columns (or " Step10: Summarize How does iloc slicing work for a dataframe? IMPORTANT CONCEPT loc Step11: Notice that the row names are 0, 1, and 2. (We did not specify row names in our input dictionary, so pandas gave these row names 0-2). We can access rows using loc with these integer row labels Step12: Now, let's whack out the middle row. Step13: You might expect you can now access second row with loc[1,"col1"], but you can't. The cell below will fail Step14: REMEMBER Step15: iloc, though, points to the location in the dataframe. This has now changed Step16: The following line will now fail because there is no more 3rd row in the data frame Step17: After deleting the second row (labeld 1) Step18: Setting values Predict What does the data frame look like after this code is run? Step19: Summarize Step20: Creating and saving dataframes Summarize Step21: Predict Step22: Summary You can get a data frame by Step23: Accessing elements using True/False arrays (masks) (Note this only works for loc, not iloc) Step24: This lets you do some powerful stuff. Below, I am going to set all values in this data frame that are less than 4 to 0 Step25: Final aside
Python Code: some_dict = {"x":{"a":1,"b":2,"c":3}, "y":{"a":4,"b":5,"c":6}} Explanation: Pandas Sometimes you want a spreadsheet. Starting point End of explanation # Answer some_dict["y"]["b"] Explanation: Write a piece of code that prints the number 5, taken from some_dict. End of explanation import pandas as pd # Turn the dictionary into a data frame. # Note that jupyter renders the data frame in pretty form. df = pd.DataFrame(some_dict) df Explanation: A pandas dataframe is a dictionary of dictionaries on steroids. (Note, its said "pan-dis" not "pandas" like the cute bears). We import it in the cell below. It's almost always imported this way (import pandas as pd) just like numpy is almost always imported import numpy as np. End of explanation # column names print(df.columns) # row names print(df.index) Explanation: Getting values A pandas dataframe organizes data with named columns and named rows End of explanation print(df.loc["a","y"]) Explanation: Predict What will this cell print out when run? End of explanation print(df.iloc[0,1]) Explanation: Summarize How does loc work? Predict What will this cell print out when run? End of explanation print(df.loc["a","y"]) Explanation: Summarize How does iloc work? Modify Change the cell below so it prints out all of column "y" (hint: you can slice) End of explanation df.loc[["a","b"],:] Explanation: Predict what this will return End of explanation df.loc[["a","b"],:] Explanation: Modify Change this cell so it prints out column "x" and rows "b" and "c". End of explanation df.iloc[:,:] Explanation: Summarize How does loc slicing work for a dataframe? loc slicing works using specified lists of named rows and columns (or ":" for all rows/columns). Modify Change this cell so it prints out only the first column. End of explanation another_dict = {"col1":[1,2,3],"col2":[3,4,5]} another_df = pd.DataFrame(another_dict) another_df Explanation: Summarize How does iloc slicing work for a dataframe? IMPORTANT CONCEPT loc: loc['x','y'] will always refer to the same data. This is true even if you delete or reorder rows and columns. loc['x','y'] will not always refer to the same place in the data frame. iloc: iloc[i,j] will always refer to the same place in the data frame. iloc[0,0] is the top-left, iloc[n_row-1,n_col-1] is the bottom-right. iloc[i,j] will not always refer to the same data. If you delete or reorder rows and columns, different data could be in each cell. Confusing thing that happens with loc and iloc Data frames often have numbers as labels for each row. This means loc takes a number for the row. But a loc row number is different from an iloc row number. I'll demonstrate: End of explanation print(another_df.loc[1,"col1"]) Explanation: Notice that the row names are 0, 1, and 2. (We did not specify row names in our input dictionary, so pandas gave these row names 0-2). We can access rows using loc with these integer row labels: End of explanation another_df = another_df.loc[[0,2],:] another_df Explanation: Now, let's whack out the middle row. End of explanation another_df.loc[1,"col1"] Explanation: You might expect you can now access second row with loc[1,"col1"], but you can't. The cell below will fail: End of explanation print(another_df.loc[0,"col1"]) print(another_df.loc[2,"col1"]) Explanation: REMEMBER: loc[x,y] will always point to the same data. This means another_df.loc[1,"col"] can't point to anything because we deleted the data. The row labeled 1 is gone. 
The other two rows are still there: End of explanation print(another_df.iloc[0,0]) print(another_df.iloc[1,0]) Explanation: iloc, though, points to the location in the dataframe. This has now changed: End of explanation print(another_df.iloc[2,0]) Explanation: The following line will now fail because there is no more 3rd row in the data frame: End of explanation df = pd.DataFrame({"x":{"a":1,"b":2,"c":3}, "y":{"a":4,"b":5,"c":6}}) print(df) print("---") print(df.loc["a","y"]) print(df.iloc[0,1]) print(df["y"]["a"]) print(df["y"][0]) print(df.y[0]) Explanation: After deleting the second row (labeld 1): + The first row (labeled 0) is accessed by loc[0,:] or iloc[0,:]. + The second row (labeled 2) is accessed by loc[2,:] or iloc[1,:]. Confused yet? KEEP IN MIND:loc refers to data and iloc refers to location in the data frame. Final note on accessing data: there are lots of ways (too many ways?) to access data in pandas DataFrames The following calls all access the same values. I'm putting these here in case you run across them in the wild, but I strongly recommend using loc and iloc exclusively, both for your own sanity and for readability... End of explanation df = pd.DataFrame({"x":{"a":1,"b":2,"c":3}, "y":{"a":4,"b":5,"c":6}}) df.iloc[0,0] = 22 df.loc["a","y"] = 14 df Explanation: Setting values Predict What does the data frame look like after this code is run? End of explanation df = pd.DataFrame({"x":{"a":1,"b":2,"c":3}, "y":{"a":4,"b":5,"c":6}}) # Setting multiple locations to a single value df.loc[("a","b"),("x","y")] = 5 df # Setting a square of locations (rows a,b, columns x,y) to 1 using # a 2x2 array df.loc[("a","b"),("x","y")] = np.ones((2,2),dtype=np.int) df Explanation: Summarize: How can you set values in a data frame? Fancy setting You can do all kinds of interesting setting moves: End of explanation # You can write to a csv file df.to_csv("a-file.csv") # You can read the csv back in new_df = pd.read_csv("a-file.csv",index_col=0) new_df Explanation: Creating and saving dataframes Summarize: What does the following do? End of explanation names = {"harry":[],"jane":[],"sally":[]} for i in range(5): names["harry"].append(i) names["jane"].append(i*5) names["sally"].append(-i) df_names = pd.DataFrame(names) df_names Explanation: Predict: What does the data frame look like that comes out? (This is a very common way to generate a data frame.) End of explanation df = pd.DataFrame({"x":{"a":1,"b":2,"c":3}, "y":{"a":4,"b":5,"c":6}}) sorted_df = df.sort_values("y",ascending=False) sorted_df Explanation: Summary You can get a data frame by: + reading in a spreadsheet by pd.read_csv (or pd.read_excel or many other options) + constructing one by pd.DataFrame(some_dictionary) You can save out a data frame by: + df.to_csv (or df.to_excel or many other options) Some Really Useful Stuff Presented In Non-Inductive Fashion Sorting by a column End of explanation mask = np.array([True,False,True],dtype=np.bool) df.loc[mask,"x"] Explanation: Accessing elements using True/False arrays (masks) (Note this only works for loc, not iloc) End of explanation # Copy the df_names data frame we made above... new_df_names = df_names.copy() new_df_names mask = new_df_names < 4 new_df_names[mask] = 0 new_df_names Explanation: This lets you do some powerful stuff. 
Below, I am going to set all values in this data frame that are less than 4 to 0: End of explanation mask = np.arange(10) > 6 x = np.arange(10) x[mask] = 42 x Explanation: Final aside: you can do this sort of mask slicing on numpy arrays too: End of explanation
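One last pandas idiom worth noting here: the mask-then-assign pattern above is usually collapsed into a single indexing step, so the intermediate mask variable is optional (df_names is assumed to be the frame built earlier in this section):

new_df_names = df_names.copy()
new_df_names[new_df_names < 4] = 0                           # same effect as building the mask first
new_df_names.loc[new_df_names['harry'] < 4, 'harry'] = 0     # or restricted to a single column
new_df_names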
12,824
Given the following text description, write Python code to implement the functionality described below step by step Description: 20/10 Tipos de datos compuestos. Estructuras de control repetitivas. Índices y slices Diccionarios como acumuladores/contadores Listas Step1: Es fácil saber si un número está en la lista o no Step2: El operador in también funciona para strings (es case sensitive) Step3: O modificar un elemento de la lista Step4: En ningún momento dijimos que la lista era de enteros, por lo que tranquilamente podemos guardar elementos de distintos tipos de datos Step5: Para eliminar un elemento sólo tenemos que usar la función del e indicar la posición. Step6: Y con las listas también se pueden hacer slices Step7: Existe una función llamada range que crea permite crear listas de números Step8: Tuplas Las tuplas son listas inmutables, es decir, que no se pueden modificar. Si no se pueden modificar, ¿para qué existen?. Porque crearlas es mucho más eficiente que crear listas y en muchas ocasiones, como con las constantes, queremos crear variables que no se modifiquen. Step9: Diccionarios El equivalente a los registros de Pascal serían los diccionarios, pero éstos también ofrecen mayor flexibilidad Step10: Además, se pueden usar los campos de un registro para armar una forma más simple los strings Step11: Y si le queremos modificar la nota a un alumno, sólo tenemos que acceder a ese campo y asignarle un nuevo valor Step12: O incluso se le puede cambiar el tipo de dato a un campo y agregar uno nuevo Step13: Algo que hay que tener en cuenta es que el orden en que se asignan los campos a un registro no es el orden interno de esos campos. Estructuras de control Así como en Pascal se delimitan los bloques de código con las palabras reservadas begin y end, en Python se usan la indentación (espacios) para determinar qué se encuentra dentro de una estructura de control y qué no. for Si queremos imprimir los números del 0 al 14 podemos crear una lista con range y usar el for para imprimir cada valor Step14: Incluso, si queremos imprimir los valores de una lista que nosotros armamos, también podemos hacerlo Step15: Y si queremos imprimir cada elemento de la lista junto con su posición podemos usar la función enumerate Step16: También se puede usar la función zip para ir tomando los primeros elementos de una lista, después los segundos, y así sucesivamente Step17: Y en realidad, se puede iterar sobre cualquier elemento iterable, como por ejemplo los strings Step18: También se pueden iterar listas que tengan distintos tipos de elementos, pero hay que tener en cuenta qué se quiere hacer con ellos Step19: while El ciclo while también ejecuta un bloque de código mientras la condición sea verdadera Step20: Las listas tienen una función llamada pop que lo que hace es tomar el último elemento de ella y lo elimina Step21: Aunque también podría obtener el primero
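The outline above also mentions dictionaries used as accumulators/counters; a minimal illustration of that idea (the sentence being counted is made up for the example):

frase = 'el gato persigue al raton y el perro persigue al gato'
contador = {}
for palabra in frase.split():
    contador[palabra] = contador.get(palabra, 0) + 1
print(contador)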
Python Code: lista_de_numeros = [1, 6, 3, 9, 5, 2] print lista_de_numeros print type(lista_de_numeros) Explanation: 20/10 Tipos de datos compuestos. Estructuras de control repetitivas. Índices y slices Diccionarios como acumuladores/contadores Listas End of explanation print 'El %s esta en %s?: %s' % (5, lista_de_numeros, 5 in lista_de_numeros) print 'El %s esta en %s?: %s' % (7, lista_de_numeros, 7 in lista_de_numeros) Explanation: Es fácil saber si un número está en la lista o no: End of explanation print 'mundo in "Hola mundo": ', 'mundo' in "Hola mundo" print 'MUNDO in "Hola mundo": ', 'MUNDO' in "Hola mundo" Explanation: El operador in también funciona para strings (es case sensitive): End of explanation lista_de_numeros[3] = 152 print lista_de_numeros Explanation: O modificar un elemento de la lista: End of explanation lista_de_cosas = [2, 5.5, 'letras', [1, 2, 3], ('tupla', 'de', 'strings')] print lista_de_cosas Explanation: En ningún momento dijimos que la lista era de enteros, por lo que tranquilamente podemos guardar elementos de distintos tipos de datos End of explanation lista_de_cosas = [2, 5.5, 'letras', [1, 2, 3], ('tupla', 'de', 'strings')] print 'Lista de cosas:', lista_de_cosas del lista_de_cosas[3] print 'Después de eliminar la posición 3:', lista_de_cosas Explanation: Para eliminar un elemento sólo tenemos que usar la función del e indicar la posición. End of explanation print 'primer elemento:', lista_de_cosas[0] ultimo = lista_de_cosas[-1] print 'último:', ultimo print 'del_segundo_al_ultimo_sin_incluirlo:', lista_de_cosas[1:4] print 'del_segundo_al_ultimo_sin_incluirlo:', lista_de_cosas[1:-1] print 'del_segundo_al_ultimo_incluyendolo:', lista_de_cosas[1:] Explanation: Y con las listas también se pueden hacer slices: End of explanation print range.__doc__ print print 'Ejemplos:' print ' range(15):', range(15) print ' range(15)[2:9]:', range(15)[2:9] print ' range(15)[2:9:3]:', range(15)[2:9:3] print ' range(2,9):', range(2,9) print ' range(2,9,3):', range(2,9,3) Explanation: Existe una función llamada range que crea permite crear listas de números: End of explanation tupla = (1, 2, 3, 4) # Se usa paréntesis en lugar de corchetes print tupla tupla = tupla[2:4] print tupl print type(tupla) Explanation: Tuplas Las tuplas son listas inmutables, es decir, que no se pueden modificar. Si no se pueden modificar, ¿para qué existen?. Porque crearlas es mucho más eficiente que crear listas y en muchas ocasiones, como con las constantes, queremos crear variables que no se modifiquen. 
End of explanation registros_con_campos_variables = {'campo1': 12, 'campo2': 'valor campo2'} print registros_con_campos_variables print type(registros_con_campos_variables) print print 'Le agrego un campo al diccionario' registros_con_campos_variables['otro_campo'] = 432 print registros_con_campos_variables print print 'Y ahora otro, pero con un int como índice' registros_con_campos_variables[123] = 'también puede usarse los números como clave' print registros_con_campos_variables Explanation: Diccionarios El equivalente a los registros de Pascal serían los diccionarios, pero éstos también ofrecen mayor flexibilidad: End of explanation alumno = { 'nombre': 'Juan', 'apellido': 'Perez', 'nota': 2 } print 'El alumno %(nombre)s %(apellido)s se sacó un %(nota)s' % alumno print 'El alumno {nombre} {apellido} se sacó un {nota}'.format(**alumno) Explanation: Además, se pueden usar los campos de un registro para armar una forma más simple los strings: End of explanation alumno = { 'nombre': 'Juan', 'apellido': 'Perez', 'nota': 2 } print alumno alumno['nota'] = 5 print alumno Explanation: Y si le queremos modificar la nota a un alumno, sólo tenemos que acceder a ese campo y asignarle un nuevo valor: End of explanation alumno = { 'nombre': 'Juan', 'apellido': 'Perez', 'parcial': 2 } print 'Alumno:', alumno alumno['parcial'] = [2, 6] # Cambio el tipo de dato de int a list print 'Agrego la nota del recuperatorio:', alumno alumno['coloquio'] = 8 # Agrego un nuevo campo print 'Agrego la nota del coloquio:', alumno del alumno['parcial'] # Elimino el campo nota print 'Elimino las notas del parcial:', alumno Explanation: O incluso se le puede cambiar el tipo de dato a un campo y agregar uno nuevo: End of explanation for i in range(15): print i Explanation: Algo que hay que tener en cuenta es que el orden en que se asignan los campos a un registro no es el orden interno de esos campos. Estructuras de control Así como en Pascal se delimitan los bloques de código con las palabras reservadas begin y end, en Python se usan la indentación (espacios) para determinar qué se encuentra dentro de una estructura de control y qué no. 
for Si queremos imprimir los números del 0 al 14 podemos crear una lista con range y usar el for para imprimir cada valor: End of explanation for i in [1, 6, 3, 9, 5, 2]: print i Explanation: Incluso, si queremos imprimir los valores de una lista que nosotros armamos, también podemos hacerlo: End of explanation lista = range(15, 30, 3) print lista for idx, value in enumerate(lista): print '%s: %s' % (idx, value) Explanation: Y si queremos imprimir cada elemento de la lista junto con su posición podemos usar la función enumerate: End of explanation for par in zip([1, 2, 3], [4, 5, 6]): print par Explanation: También se puede usar la función zip para ir tomando los primeros elementos de una lista, después los segundos, y así sucesivamente End of explanation for caracter in "Hola mundo": print caracter Explanation: Y en realidad, se puede iterar sobre cualquier elemento iterable, como por ejemplo los strings: End of explanation lista = [1, 2, "12", "34", [5, 6]] print 'La lista tiene los elementos:', lista for elemento in lista: print '{0}*2: {1}:'.format(elemento, elemento*2) Explanation: También se pueden iterar listas que tengan distintos tipos de elementos, pero hay que tener en cuenta qué se quiere hacer con ellos: End of explanation numero = 5 while numero < 10: print numero numero += 1 Explanation: while El ciclo while también ejecuta un bloque de código mientras la condición sea verdadera: End of explanation lista = range(5) print 'La lista antes de entrar al while tiene:', lista while lista: # Si la lista no esta vacía, sigo sacando elementos print lista.pop() print 'La lista después de salir del while tiene:', lista Explanation: Las listas tienen una función llamada pop que lo que hace es tomar el último elemento de ella y lo elimina: End of explanation lista = range(5) print 'La lista antes de entrar al while tiene:', lista while lista: # Si la lista no esta vacía, sigo sacando elementos print lista.pop(0) print 'La lista después de salir del while tiene:', lista Explanation: Aunque también podría obtener el primero: End of explanation
12,825
Given the following text description, write Python code to implement the functionality described below step by step Description: Testing the back normalscore transformation Step1: Getting the data ready for work If the data is in GSLIB format you can use the function gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame. Step2: The nscore transformation table function Step3: Get the transformation table Step4: Get the normal score transformation Note that the declustering is applied on the transformation tables Step5: Doing the back transformation
Python Code: #general imports import matplotlib.pyplot as plt import pygslib from matplotlib.patches import Ellipse import numpy as np import pandas as pd #make the plots inline %matplotlib inline Explanation: Testing the back normalscore transformation End of explanation #get the data in gslib format into a pandas Dataframe mydata= pygslib.gslib.read_gslib_file('../data/cluster.dat') #view data in a 2D projection plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary']) plt.colorbar() plt.grid(True) plt.show() Explanation: Getting the data ready for work If the data is in GSLIB format you can use the function gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame. End of explanation print (pygslib.gslib.__dist_transf.backtr.__doc__) Explanation: The nscore transformation table function End of explanation transin,transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],mydata['Declustering Weight']) print ('there was any error?: ', error!=0) Explanation: Get the transformation table End of explanation mydata['NS_Primary'] = pygslib.gslib.__dist_transf.nscore(mydata['Primary'],transin,transout,getrank=False) mydata['NS_Primary'].hist(bins=30) Explanation: Get the normal score transformation Note that the declustering is applied on the transformation tables End of explanation mydata['NS_Primary_BT'],error = pygslib.gslib.__dist_transf.backtr(mydata['NS_Primary'], transin,transout, ltail=1,utail=1,ltpar=0,utpar=60, zmin=0,zmax=60,getrank=False) print ('there was any error?: ', error!=0, error) mydata[['Primary','NS_Primary_BT']].hist(bins=30) mydata[['Primary','NS_Primary_BT', 'NS_Primary']].head() Explanation: Doing the back transformation End of explanation
12,826
Given the following text description, write Python code to implement the functionality described below step by step Description: Cross-validation In the machine learning examples, we have already shown the importance of split training and splitting data. However, a rough trial is not enough. Because if we randomly assign a training and testing data, it could be bias. We could improve it by cross validation. First, let's review how we split training and spliting Step1: Question 1 Step2: From this example, we could see that if we just split training and testing data for just once, sometimes we could get a very "good model and sometimes we may have got a very ""bad" model. From the example, we could know that it is not about model itself. It is just because we use different set of training data and test data. Your exercise for cross-validation From the previous example, we may have question, how we choose the parameter for knn? (n_neighbors=?). A good way to do it is called tuning parameters. In this exercise, we could learn how to tune your parameter by taking the advantage of corss-validation. Goal Step3: A new Cross-validation task
Python Code: from sklearn.datasets import load_iris from sklearn.cross_validation import train_test_split from sklearn.neighbors import KNeighborsClassifier from sklearn import metrics # read in the iris data iris = load_iris() # create X (features) and y (response) X = iris.data y = iris.target # use train/test split with different random_state values X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4) # check classification accuracy of KNN with K=5 knn = KNeighborsClassifier(n_neighbors=5) knn.fit(X_train, y_train) y_pred = knn.predict(X_test) print(metrics.accuracy_score(y_test, y_pred)) Explanation: Cross-validation In the machine learning examples, we have already shown the importance of split training and splitting data. However, a rough trial is not enough. Because if we randomly assign a training and testing data, it could be bias. We could improve it by cross validation. First, let's review how we split training and spliting: End of explanation from sklearn.cross_validation import cross_val_score # 10-fold cross-validation with K=5 for KNN (the n_neighbors parameter) # First we initialize a knn model knn = KNeighborsClassifier(n_neighbors=5) # Secondly we use cross_val_scores to get all possible accuracies. # It works like this, first we make the data into 10 chunks. # Then we run KNN for 10 times and we make each chunk as testing data for each iteration. scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy') print(scores) # use average accuracy as an estimate of out-of-sample accuracy print(scores.mean()) Explanation: Question 1: If you haven't learn KNN, please first search it and please answer this simple question: Is it a supervised learning or an unsupervised learning? Also please answer, in the previous example, what does n_neighbors=5 mean? Answer: Double click this cell and input your answer here. Steps for K-fold cross-validation Split the dataset into K equal partitions (or "folds"). Use fold 1 as the testing set and the union of the other folds as the training set. Calculate testing accuracy. Cross-validation example: End of explanation # search for an optimal value of K for KNN # Suppose we set the range of K is from 1 to 31. k_range = list(range(1, 31)) # An list that stores different accuracy scores. k_scores = [] for k in k_range: # Your code: # First, initilize a knn model with number k # Second, use 10-fold cross validation to get 10 scores with that model. k_scores.append(scores.mean()) # Make a visuliazaion for it, and please check what is the best k for knn import matplotlib.pyplot as plt %matplotlib inline # plot the value of K for KNN (x-axis) versus the cross-validated accuracy (y-axis) plt.plot(k_range, k_scores) plt.xlabel('Value of K for KNN') plt.ylabel('Cross-Validated Accuracy') Explanation: From this example, we could see that if we just split training and testing data for just once, sometimes we could get a very "good model and sometimes we may have got a very ""bad" model. From the example, we could know that it is not about model itself. It is just because we use different set of training data and test data. Your exercise for cross-validation From the previous example, we may have question, how we choose the parameter for knn? (n_neighbors=?). A good way to do it is called tuning parameters. In this exercise, we could learn how to tune your parameter by taking the advantage of corss-validation. 
Goal: Select the best tuning parameters (aka "hyperparameters") for KNN on the iris dataset Your programming task: From the above example, we know that, if we set number of neighbors as K=5, we could get an average accuracy as 0.97. However, if we want to find a better number, what should we do? It is very straight forward, we could iteratively set different numbers for K and find what K could bring us the best accuaracy. End of explanation # 10-fold cross-validation with the best KNN model knn = KNeighborsClassifier(n_neighbors=20) print(cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean()) # How about logistic regression? Please finish the code below and make a comparison. # Hint, please check how we make it by knn. from sklearn.linear_model import LogisticRegression # initialize a logistic regression model here. # Then print the average score of logistic model. Explanation: A new Cross-validation task: model selection We already apply cross-validation to knn model. How about other models? Please continue to read the notes and do another exercise. End of explanation
12,827
Given the following text description, write Python code to implement the functionality described below step by step Description: Comparing SDO/AIA Response Functions Step1: Wavelength Response First, load the SSW results into some convenient data structure. Step2: Run the SunPy calculation. Step3: Plot the results against each other. Step4: Now, do a "residual plot" of the differences between the two results. Step5: Now, zooming in on the two spikes in the 335 and 304 $\mathrm{\mathring{A}}$ channels... Step6: It looks like there is contamination from the 94 $\mathrm{\mathring{A}}$ channel in the 304 $\mathrm{\mathring{A}}$ channel and contamination from 131 $\mathrm{\mathring{A}}$ in the 335 $\mathrm{\mathring{A}}$ channel. Why? Is this a mistake or just something we haven't accounted for in the SunPy calculation? Now what if we test using the pre-calculated effective area loaded from the SSW genx files?
Python Code: import numpy as np import matplotlib.pyplot as plt import sunpy.instr.aia %matplotlib inline Explanation: Comparing SDO/AIA Response Functions: SSW and SunPy This notebook runs comparisons between the results of SSW and SunPy in calculating the wavelength and temperature response functions of the AIA instrument on board SDO. End of explanation data = np.loadtxt('../aia_sample_data/aia_wresponse_raw.dat') channels = [94,131,171,193,211,304,335] ssw_results = {} for i in range(len(channels)): ssw_results[channels[i]] = {'wavelength':data[:,0], 'response':data[:,i+1]} Explanation: Wavelength Response First, load the SSW results into some convenient data structure. End of explanation response = sunpy.instr.aia.Response(path_to_genx_dir='../ssw_aia_response_data/') response.calculate_wavelength_response() Explanation: Run the SunPy calculation. End of explanation fig,axes = plt.subplots(3,3,figsize=(12,12)) for c,ax in zip(channels,axes.flatten()): #ssw ax.plot(ssw_results[c]['wavelength'],ssw_results[c]['response'], color=response.channel_colors[c],label='ssw') #sunpy ax.plot(response.wavelength_response[c]['wavelength'],response.wavelength_response[c]['response'], color=response.channel_colors[c],marker='.',ms=12,label='SunPy') if c!=335 and c!=304: ax.set_xlim([c-20,c+20]) ax.set_title('{} $\mathrm{{\mathring{{A}}}}$'.format(c),fontsize=20) ax.set_xlabel(r'$\lambda$ ({0:latex})'.format(response.wavelength_response[c]['wavelength'].unit),fontsize=20) ax.set_ylabel(r'$R_i(\lambda)$ ({0:latex})'.format(response.wavelength_response[c]['response'].unit),fontsize=20) axes[0,0].legend(loc='best') plt.tight_layout() Explanation: Plot the results against each other. End of explanation fig,axes = plt.subplots(3,3,figsize=(12,12),sharey=True,sharex=True) for c,ax in zip(channels,axes.flatten()): #ssw ax2 = ax.twinx() ssw_interp = ssw_results[c]['response']*response.wavelength_response[c]['response'].unit delta_response = np.fabs(response.wavelength_response[c]['response'] - ssw_interp)/(ssw_interp) ax.plot(response.wavelength_response[c]['wavelength'],delta_response,color=response.channel_colors[c]) ax2.plot(response.wavelength_response[c]['wavelength'],response.wavelength_response[c]['response'], color='k',linestyle='--') ax.set_title('{} $\mathrm{{\mathring{{A}}}}$'.format(c),fontsize=20) ax.set_xlabel(r'$\lambda$ ({0:latex})'.format(response.wavelength_response[c]['wavelength'].unit),fontsize=20) ax.set_ylabel(r'$\frac{|\mathrm{SSW}-\mathrm{SunPy}|}{\mathrm{SSW}}$',fontsize=20) ax2.set_ylabel(r'$R_i(\lambda)$ ({0:latex})'.format(response.wavelength_response[c]['response'].unit)) ax.set_ylim([-1.1,1.1]) plt.tight_layout() Explanation: Now, do a "residual plot" of the differences between the two results. 
End of explanation fig,axes = plt.subplots(1,2,figsize=(10,5)) for c,ax in zip([304,335],axes.flatten()): #ssw ax.plot(ssw_results[c]['wavelength'],ssw_results[c]['response'], color=response.channel_colors[c],label='ssw') #sunpy ax.plot(response.wavelength_response[c]['wavelength'],response.wavelength_response[c]['response'], color=response.channel_colors[c],marker='.',ms=12,label='SunPy') if c==304: ax.set_xlim([80,100]) if c==335: ax.set_xlim([120,140]) ax.set_title('{} $\mathrm{{\mathring{{A}}}}$'.format(c),fontsize=20) ax.set_xlabel(r'$\lambda$ ({0:latex})'.format(response.wavelength_response[c]['wavelength'].unit),fontsize=20) ax.set_ylabel(r'$R_i(\lambda)$ ({0:latex})'.format(response.wavelength_response[c]['response'].unit),fontsize=20) axes[0].legend(loc='best') plt.tight_layout() Explanation: Now, zooming in on the two spikes in the 335 and 304 $\mathrm{\mathring{A}}$ channels... End of explanation fig,axes = plt.subplots(1,2,figsize=(10,5)) for ax,c in zip(axes.flatten(),[304,335]): wvl_response = response._channel_info[c]['effective_area']*response._calculate_system_gain(c) ax.plot(response.wavelength_response[c]['wavelength'], wvl_response,'--',color=response.channel_colors[c],label=r'EA from SSW') ax.plot(response.wavelength_response[c]['wavelength'], response.wavelength_response[c]['response'],'.',ms=12,color=response.channel_colors[c], label=r'SunPy') if c==304: ax.set_xlim([80,100]) if c==335: ax.set_xlim([120,140]) ax.set_title('{} $\mathrm{{\mathring{{A}}}}$'.format(c),fontsize=20) ax.set_xlabel(r'$\lambda$ ({0:latex})'.format(response.wavelength_response[c]['wavelength'].unit),fontsize=20) ax.set_ylabel(r'$R_i(\lambda)$ ({0:latex})'.format(response.wavelength_response[c]['response'].unit),fontsize=20) axes[0].legend(loc='best') plt.tight_layout() Explanation: It looks like there is contamination from the 94 $\mathrm{\mathring{A}}$ channel in the 304 $\mathrm{\mathring{A}}$ channel and contamination from 131 $\mathrm{\mathring{A}}$ in the 335 $\mathrm{\mathring{A}}$ channel. Why? Is this a mistake or just something we haven't accounted for in the SunPy calculation? Now what if we test using the pre-calculated effective area loaded from the SSW genx files? End of explanation
12,828
Given the following text description, write Python code to implement the functionality described below step by step Description: Getting Started <a name='contents'></a> Contents <a href='#magic'>The <tt>matmodlab</tt> namespace</a> <a href='#model.def'>Defining a Model</a> <a href='#model.def.mat'>Material Model Definition</a> <a href='#model.def.step'>Step Definitions</a> <a href='#model.run'>Running a Model</a> <a href='#model.out'>Model Outputs</a> <a href='#model.view'>Viewing Model Results</a> <a name='magic'></a> The matmodlab Namespace A notebook should include the following statement to import the Matmodlab namespace from matmodlab2 import * Step1: <a name='model.def'></a> Defining a Model The purpose of a Matmodlab model is to predict the response of a material to deformation. A Matmodlab model requires two parts to be fully defined Step2: Other optional arguments to MaterialPointSimulator are output_format defines the output format of the simulation results. Valid choices are REC [default] and TXT. d specifies the directory to which simulation results are written. The default is the current directory. Note Step3: The ElasticMaterial is a linear elastic model implemented in Python. The source code is contained in matmodlab/materials/elastic.py. The parameters E and Nu represent the Young's modulus and Poisson's ratio, respectively. <a name='model.def.step'></a> Step Definitions Deformation steps define the components of deformation and/or stress to be seen by the material model. Deformation steps are defined by the MaterialPointSimulator.run_step method Step4: To reverse the step of uniaxial strain defined in the previous cell to a state of zero strain, simply define another step in which all components of strain are zero Step5: If 3$\leq$ len(components)&lt;6, the missing components are assumed to be zero (if len(components)=1, it is assumed to be volumetric strain). From elementary linear elasticity, the axial and lateral stresses associated with the step of uniaxial strain are Step6: where K and G are the bulk and shear modulus, respectively. Using a stress defined step, an equivalent deformation path is Step7: The optional frames keyword was passed to run_step which instructs the MaterialPointSimulator object to perform the step in frames increments (50 in this case). For stress controlled steps, it is a good idea to increase the number of frames since the solution procedure involves a nonlinear Newton solve. Mixed-mode deformations of stress and strain can also be defined. The previous deformation path could have been defined by Step8: The deformation path can be defined equivalently through the specification of stress and strain rate steps Step9: The keyword scale is a scale factor applied to each of the components of components. Components of the deformation gradient and displacement can also be prescribed with the F and U descriptors, respectively. A deformation gradient step requires the nine components of the deformation gradient, arranged in row-major fashion. A displacement step method requires the three components of the displacement. Step10: <a name='model.run'></a> Running the Model Steps are run as they are added. <a name='model.out'></a> Model Outputs Model outputs computed by the MaterialPointSimulator are stored in a pandas.DataFrame Step11: The output can also be written to a file with the MaterialPointSimulator.dump method Step12: The MaterialPointSimulator.dump method takes an optional filename. If not given, the jobid will be used as the base filename. 
The file extension must be one of .npz for output to be written to a compressed numpy file are .exo for output to be written to the ExodusII format. Model outputs can be retrieved from the MaterialPointSimulator via the get method. For example, the components of stress throughout the history of the simulation are Step13: Individual components are also accessed Step14: Equivalently, the MaterialPointSimulator.get method can retrieve components field outputs from the output database
Python Code: %pylab inline from matmodlab2 import * Explanation: Getting Started <a name='contents'></a> Contents <a href='#magic'>The <tt>matmodlab</tt> namespace</a> <a href='#model.def'>Defining a Model</a> <a href='#model.def.mat'>Material Model Definition</a> <a href='#model.def.step'>Step Definitions</a> <a href='#model.run'>Running a Model</a> <a href='#model.out'>Model Outputs</a> <a href='#model.view'>Viewing Model Results</a> <a name='magic'></a> The matmodlab Namespace A notebook should include the following statement to import the Matmodlab namespace from matmodlab2 import * End of explanation mps = MaterialPointSimulator('jobid') Explanation: <a name='model.def'></a> Defining a Model The purpose of a Matmodlab model is to predict the response of a material to deformation. A Matmodlab model requires two parts to be fully defined: Material model: the material type and associated parameters. Deformation step[s]: defines deformation paths through which the material model is exercised. The MaterialPointSimulator object manages and allocates memory for materials and analysis steps. Minimally, instantiating a MaterialPointSimulator object requires a simulation ID: End of explanation E = 10 Nu = .1 mat = ElasticMaterial(E=E, Nu=Nu) mps.assign_material(mat) Explanation: Other optional arguments to MaterialPointSimulator are output_format defines the output format of the simulation results. Valid choices are REC [default] and TXT. d specifies the directory to which simulation results are written. The default is the current directory. Note: by default results are not written when exercised from the Notebook. If written results are required, the MaterialPointSimulator.dump method must be called explicitly. <a name='model.def.mat'></a> Material model definition A material model must be instantiated and assigned to the MaterialPointSimulator object. In this example, the ElasticMaterial (provided in the matmodlab namespace) is used. End of explanation ea = .1 mps.run_step('EEEEEE', (ea, 0, 0, 0, 0, 0)) Explanation: The ElasticMaterial is a linear elastic model implemented in Python. The source code is contained in matmodlab/materials/elastic.py. The parameters E and Nu represent the Young's modulus and Poisson's ratio, respectively. <a name='model.def.step'></a> Step Definitions Deformation steps define the components of deformation and/or stress to be seen by the material model. Deformation steps are defined by the MaterialPointSimulator.run_step method: mps.run_step(descriptors, components) where the argument descriptors is a string or list of strings describing each component of deformation. Each descriptor in descriptors must be one of: E: representing strain DE: representing an increment in strain S: representing stress DS: representing an increment in stress F: representing the deformation gradient U: representing displacement components is an array containing the components of deformation. The descriptors argument instructs the MaterialPointSimulator the intent of each component. The i$^{\rm th}$ descriptor corresponds to the i$^{\rm th}$ component. For example, python descriptors = ['E', 'E', 'E', 'S', 'S', 'S'] declares that the first three components of components are to be interpreted as strain ('E') and the last three as stress ('S'). Accordingly, len(components) must equal len(descriptors). Generally speaking, descriptors must be an iterable object with length equal to the length of components. 
Since string objects are iterable in python, the following representation of descriptors is equivalent: python descriptors = 'EEESSS' The run_step method also accepts the following optional arguments: increment: The length of the step in time units, default is 1. frames The number of discrete increments in the step, default is 1 scale: Scaling factor to be applied to components. If scale is a scalar, it is applied to all components equally. If scale is a list, scale[i] is applied to components[i] (and must, therefore, have the same length as components) kappa: The Seth-Hill parameter of generalized strain. Default is 0. temperature: The temperature at the end of the step. Default is 0. A note on tensor component ordering Component ordering for components is: Symmetric tensors: XX, YY, ZZ, XY, YZ, XZ Unsymmetric tensors: XX, XY, XZ YX, YY, YZ ZX, ZY, ZZ Vectors: X, Y, Z Example Steps Run a step of uniaxial strain by prescribing all 6 components of the strain tensor. End of explanation mps.run_step('EEE', (0, 0, 0)) Explanation: To reverse the step of uniaxial strain defined in the previous cell to a state of zero strain, simply define another step in which all components of strain are zero: End of explanation G = E / 2. / (1. + Nu) K = E / 3. / (1. - 2. * Nu) sa = (K + 4 * G / 3) * ea sl = (K - 2 * G / 3) * ea Explanation: If 3$\leq$ len(components)&lt;6, the missing components are assumed to be zero (if len(components)=1, it is assumed to be volumetric strain). From elementary linear elasticity, the axial and lateral stresses associated with the step of uniaxial strain are End of explanation mps.run_step('SSS', (sa, sl, sl), frames=50) mps.run_step('SSS', (0, 0, 0), frames=50) Explanation: where K and G are the bulk and shear modulus, respectively. Using a stress defined step, an equivalent deformation path is End of explanation mps.run_step('ESS', (ea, sl, sl), frames=50) mps.run_step('ESS', (0, 0, 0), frames=50) Explanation: The optional frames keyword was passed to run_step which instructs the MaterialPointSimulator object to perform the step in frames increments (50 in this case). For stress controlled steps, it is a good idea to increase the number of frames since the solution procedure involves a nonlinear Newton solve. Mixed-mode deformations of stress and strain can also be defined. The previous deformation path could have been defined by End of explanation mps.run_step(('DE', 'DE', 'DE'), (ea, 0, 0), frames=50) mps.run_step(('DE', 'DE', 'DE'), (ea, 0, 0), frames=50, scale=-1) mps.run_step(('DS', 'DS', 'DS'), (sa, sl, sl), frames=50) mps.run_step(('DS', 'DS', 'DS'), (sa, sl, sl), frames=50, scale=-1) Explanation: The deformation path can be defined equivalently through the specification of stress and strain rate steps: End of explanation fa = exp(ea) mps.run_step('FFFFFFFFF', (fa,0,0,0,1,0,0,0,1)) mps.run_step('FFFFFFFFF', (1,0,0,0,1,0,0,0,1)) Explanation: The keyword scale is a scale factor applied to each of the components of components. Components of the deformation gradient and displacement can also be prescribed with the F and U descriptors, respectively. A deformation gradient step requires the nine components of the deformation gradient, arranged in row-major fashion. A displacement step method requires the three components of the displacement. End of explanation mps.df Explanation: <a name='model.run'></a> Running the Model Steps are run as they are added. 
<a name='model.out'></a> Model Outputs Model outputs computed by the MaterialPointSimulator are stored in a pandas.DataFrame: End of explanation mps.dump() Explanation: The output can also be written to a file with the MaterialPointSimulator.dump method: End of explanation s = mps.get('S') Explanation: The MaterialPointSimulator.dump method takes an optional filename. If not given, the jobid will be used as the base filename. The file extension must be one of .npz for output to be written to a compressed numpy file are .exo for output to be written to the ExodusII format. Model outputs can be retrieved from the MaterialPointSimulator via the get method. For example, the components of stress throughout the history of the simulation are: End of explanation sxx = mps.get('S.XX') assert (amax(sxx) - sa) / amax(sxx) < 1e-8 Explanation: Individual components are also accessed: End of explanation mps.df.plot('Time', 'E.XX') Explanation: Equivalently, the MaterialPointSimulator.get method can retrieve components field outputs from the output database: <a name='model.view'></a> Viewing Model Outputs The simplest method of viewing model outputs is using the pandas.DataFrame.plot method, accessed through MaterialPointSimulator.df: End of explanation
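For reference, the calls above can be strung together into one short script. This is only a sketch re-assembling calls that already appear in this notebook (MaterialPointSimulator, ElasticMaterial, run_step, get, dump); the job id and the E/Nu values are illustrative assumptions, not part of the original example.

from matmodlab2 import MaterialPointSimulator, ElasticMaterial

# create the simulator and attach a linear elastic material
mps = MaterialPointSimulator('demo_job')
mps.assign_material(ElasticMaterial(E=10, Nu=.1))

# one step of uniaxial strain, then return to zero strain
mps.run_step('EEEEEE', (.1, 0, 0, 0, 0, 0))
mps.run_step('EEE', (0, 0, 0))

sxx = mps.get('S.XX')   # axial stress history
print(sxx)
mps.dump()              # write results, with the jobid used as the base filename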
12,829
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Linear SVC Sklearn - Training a Linear SVM Classification Model
Python Code:: from sklearn.svm import SVC from sklearn.metrics import classification_report import pandas as pd # create a linear SVC model with balanced class weights model = SVC(C=1, kernel='linear', class_weight='balanced') # fit model model.fit(X_train, y_train) # make predictions on test data y_pred = model.predict(X_test) # create a dataframe of feature coefficients coef = pd.DataFrame(model.coef_, columns=X_train.columns) print(coef) # print classification report print(classification_report(y_test, y_pred))
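The snippet above assumes X_train, X_test, y_train and y_test already exist. Purely as a hedged illustration of one way to produce such a split for a quick test (the toy-data parameters and column names below are assumptions, not part of the original problem):

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# small synthetic dataset just to exercise the pipeline above
X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
X = pd.DataFrame(X, columns=['feat_%d' % i for i in range(5)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)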
12,830
Given the following text description, write Python code to implement the functionality described below step by step Description: Data Analysis using Pandas Pandas has become the de facto package for data analysis. In this workshop, we are going to use the basics of pandas to analyze the interests of today's group. We are going to use meetup.com's API and fetch the list of interests that are listed in each of our meetup.com profiles. We will compute which interests are common, which are uncommon, and find out which two members have the most similar interests. Let's get started by importing the essentials. You will need meetup.com's Python API client and pandas installed. Step1: Next we need your meetup.com API key. You will find it https Step2: The following function uses the API and loads the data into a pandas data frame. Note we are a bit sloppy both in style and in how we load the data. In actual production code, we should add adequate logging with well-defined exceptions to indicate what's going wrong.
Python Code: import meetup.api import pandas as pd from IPython.display import Image, display, HTML from itertools import combinations Explanation: Data Analysis using Pandas Pandas has become the defacto package for data analysis. In this workshop, we are going to use the basics of pandas to analyze the interests of today's group. We are going to use meetup.com's api and fetch the list of interests that are listed in each of our meetup.com profile. We will compute which interests are common, which are uncommon, and find out which of the two members have most similar interests. Lets get started by importing the essentials. You would need meetup.com's python api and pandas installed. End of explanation API_KEY = '' event_id='' Explanation: Next we need your meetup.com API. You will find it https://secure.meetup.com/meetup_api/key/ Also we need today's event id. The event id created under Chicago Pythonistas is 233460758 and that under Chicago Python user group is 236205125. Use the one that has the higher number of RSVPs so that you get more data points. As an additional exercise, you might go for merging the two sets of RSVPs - but that's not needed for the workshop. End of explanation def get_members(event_id): client = meetup.api.Client(API_KEY) rsvps=client.GetRsvps(event_id=event_id, urlname='_ChiPy_') member_id = ','.join([str(i['member']['member_id']) for i in rsvps.results]) return client.GetMembers(member_id=member_id) def get_topics(members): topics = set() for member in members.results: try: for t in member['topics']: topics.add(t['name']) except: pass return list(topics) def df_topics(event_id): members = get_members(event_id=event_id) topics = get_topics(members) columns=['name','id','thumb_link'] + topics data = [] for member in members.results: topic_vector = [0]*len(topics) for topic in member['topics']: index = topics.index(topic['name']) topic_vector[index-1] = 1 try: data.append([member['name'], member['id'], member['photo']['thumb_link']] + topic_vector) except: pass return pd.DataFrame(data=data, columns=columns) #df.to_csv('output.csv', sep=";") Explanation: The following function uses the api and loads the data into a pandas data frame. Note we are a bit sloppy both in style and how we load the data. In actual production code, we should add adequate logging with well-defined exceptions to indicate what's going wrong. End of explanation
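The workshop goal of finding which two members have the most similar interests is not implemented above. A possible sketch, assuming df is the frame returned by df_topics(event_id), i.e. one row per member with 0/1 topic columns after the three metadata columns (this helper is an illustration, not part of the original notebook):

from itertools import combinations

def most_similar_pair(df):
    # Jaccard similarity over the 0/1 topic indicator columns
    topic_cols = df.columns[3:]
    best = (None, None, -1.0)
    for i, j in combinations(df.index, 2):
        a = df.loc[i, topic_cols].astype(bool)
        b = df.loc[j, topic_cols].astype(bool)
        union = (a | b).sum()
        score = (a & b).sum() / union if union else 0.0
        if score > best[2]:
            best = (df.loc[i, 'name'], df.loc[j, 'name'], score)
    return best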
12,831
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents <p><div class="lev1 toc-item"><a href="#Lists" data-toc-modified-id="Lists-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Lists</a></div><div class="lev2 toc-item"><a href="#Final-Problem" data-toc-modified-id="Final-Problem-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Final Problem</a></div><div class="lev2 toc-item"><a href="#Indexing-and-Slicing- Step1: <img src="images/pylists.jpg"> Lists are a great way to store items of multiple types. Step2: Indexing and Slicing Step3: Exercise Print out the second and last items of the list within the list more_info all_info = ["Sherlock", True, 42] more_info = ["Watson", "Hooley", all_info] Step4: List Functions These come in handy when working with larger data sets where you need to add to your existing information base. Step5: Let's look at this again, comparing the two approaches. Step6: As a general rule though, if you're adding a single item, use append. Try what happens if you use the .extend method to add a single item, like "Jian Yang" to our list of original Incubees. Finding Items Step8: Exercise Convert the passage below into a list after replacing all punctuation Who is mentioned more times - Sherlock or Moriarty? How many times does the author refer to himself? (Hint Step9: Numerical Functions with Lists Lists are useful for more than just processing text. Step10: Exercise list_a = [21,14,7,19,15,47,42,55,97,92] * Find the average of list_a * Find the median of list_a If you need a refresher on how to find the average (or mean) and median, we will cover that later. Step12: Exercise Let's combine lists, string methods and a bit of logic. Strip the passage of all punctuation How many times does the word 'Titanic' appear? How many times does 'Carpathia' appear? Slightly trickier question - how many words does each paragraph have? (Hint Step13: Sets Here's how you make a set. Step14: That's it. As simple as that. So why do we have sets, as opposed to just using lists? Sets are really fast when it comes to checking for membership. Here's how Step15: But wait, there's more! set_a.add(x) Step16: Exercise An analyst is looking at two portfolios, and wants to identify the unique ones. pf1 = {"AA", "AAC", "AAP", "ABB", "AC", "ACCO", "AAPL", "AZO", "ZEN", "PX", "GS"} pf2 = {"AA", "GRUB", "AAC", "GWR", "AAP", "C", "AC", "CVS"} Write code for the following Step17: <img src="images/sets_easy.jpg"> Tuples Pronounced too-puhl We will keep this section very short in this section, but will revisit this later once we have introduced some more advanced concepts. For now, remember that a tuple is used when the values are fixed. In Python terms, it is what is referred to as 'immutable'. <img src="images/tuples.jpg"> Examples Step18: Tuples and Numbers Step19: <img src="images/commando.gif"> <img src="images/czech.gif"> Dictionaries Dictionaries contain a key and a value. They are also referred to as dicts, maps, or hashes. Step20: Common Dictionary Operations Step21: Exercise Find matching key between the two dictionaries.
Python Code: final = "It is with a heavy heart that I take up my pen to write these the last words in which I shall ever record the singular gifts by which my friend Mr. Sherlock Holmes was distinguished." final = final.replace(".", "") final = final.split(" ") final type(final) Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Lists" data-toc-modified-id="Lists-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Lists</a></div><div class="lev2 toc-item"><a href="#Final-Problem" data-toc-modified-id="Final-Problem-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Final Problem</a></div><div class="lev2 toc-item"><a href="#Indexing-and-Slicing-:-How-to-access-parts-of-a-list" data-toc-modified-id="Indexing-and-Slicing-:-How-to-access-parts-of-a-list-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Indexing and Slicing : How to access parts of a list</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-121"><span class="toc-item-num">1.2.1&nbsp;&nbsp;</span>Exercise</a></div><div class="lev2 toc-item"><a href="#List-Functions" data-toc-modified-id="List-Functions-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>List Functions</a></div><div class="lev2 toc-item"><a href="#Finding-Items" data-toc-modified-id="Finding-Items-14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Finding Items</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-141"><span class="toc-item-num">1.4.1&nbsp;&nbsp;</span>Exercise</a></div><div class="lev2 toc-item"><a href="#Numerical-Functions-with-Lists" data-toc-modified-id="Numerical-Functions-with-Lists-15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Numerical Functions with Lists</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-151"><span class="toc-item-num">1.5.1&nbsp;&nbsp;</span>Exercise</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-16"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Exercise</a></div><div class="lev1 toc-item"><a href="#Sets" data-toc-modified-id="Sets-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Sets</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-201"><span class="toc-item-num">2.0.1&nbsp;&nbsp;</span>Exercise</a></div><div class="lev1 toc-item"><a href="#Tuples" data-toc-modified-id="Tuples-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Tuples</a></div><div class="lev2 toc-item"><a href="#Tuples-and-Numbers" data-toc-modified-id="Tuples-and-Numbers-31"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Tuples and Numbers</a></div><div class="lev1 toc-item"><a href="#Dictionaries" data-toc-modified-id="Dictionaries-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Dictionaries</a></div><div class="lev2 toc-item"><a href="#Common-Dictionary-Operations" data-toc-modified-id="Common-Dictionary-Operations-41"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Common Dictionary Operations</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-42"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>Exercise</a></div> Collections of Items: Lists, Sets, Tuples, Dictionaries # Lists Python has many ways to store a collection of similar or dissimilar items. You have already encountered lists, even though you haven't been formally introduced. 
## Final Problem End of explanation all_info = ["Sherlock", True, 42] print(all_info) print(type(all_info)) more_info = ["Watson", "Hooley", all_info] print(more_info) print(type(more_info)) Explanation: <img src="images/pylists.jpg"> Lists are a great way to store items of multiple types. End of explanation all_info = ["Sherlock", True, 42] all_info[0] all_info[1] more_info = ["Watson", "Hooley", all_info] more_info[-1] more_info[-1][0] a_list = [1,2,3,4,5,6,7,8,9,10] a_list[0] a_list[2:4] a_list[1:] a_list[-3:] a_list[3:6] a_list[0:3] Explanation: Indexing and Slicing : How to access parts of a list End of explanation # Your code here Explanation: Exercise Print out the second and last items of the list within the list more_info all_info = ["Sherlock", True, 42] more_info = ["Watson", "Hooley", all_info] End of explanation incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"] type(incubees) incubees.append("Jian Yang") incubees brogrammers = ["Aly", "Jason"] incubees.append(brogrammers) incubees incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"] incubees.extend(brogrammers) incubees Explanation: List Functions These come in handy when working with larger data sets where you need to add to your existing information base. End of explanation incubees1 = ["Richard", "Gilfoyle", "Dinesh", "Nelson"] brogrammers1 = ["Aly", "Jason"] incubees1.append(brogrammers1) print(incubees1) print(len(incubees1)) incubees2 = ["Richard", "Gilfoyle", "Dinesh", "Nelson"] brogrammers2 = ["Aly", "Jason"] incubees2.extend(brogrammers2) print(incubees2) print(len(incubees2)) incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"] incubees.sort() print(incubees) incubees.reverse() print(incubees) incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"] brogrammers = ["Aly", "Jason"] print(incubees + brogrammers) new_list = incubees + brogrammers print (new_list*2) Explanation: Let's look at this again, comparing the two approaches. End of explanation incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"] incubees.index("Richard") incubees.index("Dinesh") incubees.index("Jian Yang") incubees2 = incubees*2 incubees2.count("Richard") incubees = ["Richard", "Gilfoyle", "Dinesh", "Nelson"] incubees incubees.insert(0, "Jian Yang") incubees incubees.pop(0) incubees Explanation: As a general rule though, if you're adding a single item, use append. Try what happens if you use the .extend method to add a single item, like "Jian Yang" to our list of original Incubees. Finding Items End of explanation passage = It is with a heavy heart that I take up my pen to write these the last words in which I shall ever record the singular gifts by which my friend Mr. Sherlock Holmes was distinguished. In an incoherent and, as I deeply feel, an entirely inadequate fashion, I have endeavored to give some account of my strange experiences in his company from the chance which first brought us together at the period of the “Study in Scarlet,” up to the time of his interference in the matter of the “Naval Treaty”—an interference which had the unquestionable effect of preventing a serious international complication. It was my intention to have stopped there, and to have said nothing of that event which has created a void in my life which the lapse of two years has done little to fill. My hand has been forced, however, by the recent letters in which Colonel James Moriarty defends the memory of his brother, and I have no choice but to lay the facts before the public exactly as they occurred. 
I alone know the absolute truth of the matter, and I am satisfied that the time has come when no good purpose is to be served by its suppression. As far as I know, there have been only three accounts in the public press: that in the Journal de Geneve on May 6th, 1891, the Reuter’s despatch in the English papers on May 7th, and finally the recent letter to which I have alluded. Of these the first and second were extremely condensed, while the last is, as I shall now show, an absolute perversion of the facts. It lies with me to tell for the first time what really took place between Professor Moriarty and Mr. Sherlock Holmes. It may be remembered that after my marriage, and my subsequent start in private practice, the very intimate relations which had existed between Holmes and myself became to some extent modified. He still came to me from time to time when he desired a companion in his investigation, but these occasions grew more and more seldom, until I find that in the year 1890 there were only three cases of which I retain any record. During the winter of that year and the early spring of 1891, I saw in the papers that he had been engaged by the French government upon a matter of supreme importance, and I received two notes from Holmes, dated from Narbonne and from Nimes, from which I gathered that his stay in France was likely to be a long one. It was with some surprise, therefore, that I saw him walk into my consulting-room upon the evening of April 24th. It struck me that he was looking even paler and thinner than usual. # Your code below Explanation: Exercise Convert the passage below into a list after replacing all punctuation Who is mentioned more times - Sherlock or Moriarty? How many times does the author refer to himself? (Hint: Count the use of the word 'my') End of explanation list_a = [21,14,7,19,15,47,42,55,97,92] len(list_a) sum(list_a) max(list_a) min(list_a) range = max(list_a) - min(list_a) print(range) Explanation: Numerical Functions with Lists Lists are useful for more than just processing text. End of explanation # Your code below: Explanation: Exercise list_a = [21,14,7,19,15,47,42,55,97,92] * Find the average of list_a * Find the median of list_a If you need a refresher on how to find the average (or mean) and median, we will cover that later. End of explanation titanic = CAPE RACE, N.F., April 15. -- The White Star liner Olympic reports by wireless this evening that the Cunarder Carpathia reached, at daybreak this morning, the position from which wireless calls for help were sent out last night by the Titanic after her collision with an iceberg. The Carpathia found only the lifeboats and the wreckage of what had been the biggest steamship afloat. The Titanic had foundered at about 2:20 A.M., in latitude 41:46 north and longitude 50:14 west. This is about 30 minutes of latitude, or about 34 miles, due south of the position at which she struck the iceberg. All her boats are accounted for and about 655 souls have been saved of the crew and passengers, most of the latter presumably women and children. There were about 1,200 persons aboard the Titanic. The Leyland liner California is remaining and searching the position of the disaster, while the Carpathia is returning to New York with the survivors. It can be positively stated that up to 11 o'clock to-night nothing whatever had been received at or heard by the Marconi station here to the effect that the Parisian, Virginian or any other ships had picked up any survivors, other than those picked up by the Carpathia. 
First News of the Disaster. The first news of the disaster to the Titanic was received by the Marconi wireless station here at 10:25 o'clock last night (as told in yesterday's New York Times.) The Titanic was first heard giving the distress signal "C. Q. D.," which was answered by a number of ships, including the Carpathia, the Baltic and the Olympic. The Titanic said she had struck an iceberg and was in immediate need of assistance, giving her position as latitude 41:46 north and longitude 50:14 west. At 10:55 o'clock the Titanic reported she was sinking by the head, and at 11:25 o'clock the station here established communication with the Allan liner Virginian, from Halifax to Liverpool, and notified her of the Titanic's urgent need of assistance and gave her the Titanic's position. The Virginian advised the Marconi station almost immediately that she was proceeding toward the scene of the disaster. At 11:36 o'clock the Titanic informed the Olympic that they were putting the women off in boats and instructed the Olympic to have her boats read to transfer the passangers. The Titanic, during all this time, continued to give distress signals and to announce her position. The wireless operator seemed absolutely cool and clear-headed, his sending throughout being steady and perfectly formed, and the judgment used by him was of the best. The last signals heard from the Titanic were received at 12:27 A.M., when the Virginian reported having heard a few blurred signals which ended abruptly. # Your code here Explanation: Exercise Let's combine lists, string methods and a bit of logic. Strip the passage of all punctuation How many times does the word 'Titanic' appear? How many times does 'Carpathia' appear? Slightly trickier question - how many words does each paragraph have? (Hint: Split the passage at "\n", then count the words for each paragraph) End of explanation set_a = {1,2,3,4,5} print(set_a) Explanation: Sets Here's how you make a set. End of explanation set_a = {1,2,3,4,5} 5 in set_a 6 in set_a Explanation: That's it. As simple as that. So why do we have sets, as opposed to just using lists? Sets are really fast when it comes to checking for membership. Here's how: End of explanation set_b = {1,2,3} print(set_a - set_b) print(set_a.difference(set_b)) Explanation: But wait, there's more! set_a.add(x): add a value to a set set_a.remove(x): remove a value from a set set_a - set_b: return values in a but not in b. set_a.difference(set_b): same as set_a - set_b set_a | set_b: elements in a or b. Equivalent to set_a.union(set_b) set_a &amp; set_b: elements in both a and b. Equivalent to set_a.intersection(set_b) set_a ^ set_b: elements in a or b but not both. Equivalent to set_a.symmetric_difference(set_b) set_a &lt;= set_b: tests whether every element in set_a is in set_b. Equivalent to set_a.issubset(set_b) End of explanation pf1 = {"AA", "AAC", "AAP", "ABB", "AC", "ACCO", "AAPL", "AZO", "ZEN", "PX", "GS"} pf2 = {"AA", "GRUB", "AAC", "GWR", "AAP", "C", "AC", "CVS"} # Find the stocks in either pf1 or pf2, but not in both. # Find the stocks in both portfolios # Create a third portfolio named pf3, which has pf1 and pf2 combined # Market conditions have changed, let's drop GRUB and CVS from pf3 and add IBM Explanation: Exercise An analyst is looking at two portfolios, and wants to identify the unique ones. 
pf1 = {"AA", "AAC", "AAP", "ABB", "AC", "ACCO", "AAPL", "AZO", "ZEN", "PX", "GS"} pf2 = {"AA", "GRUB", "AAC", "GWR", "AAP", "C", "AC", "CVS"} Write code for the following: * Find the stocks in either pf1 or pf2, but not in both. (Hint: Symmetric Difference) * Find the stocks in both portfolios (Hint: Intersection) * Create a third portfolio named pf3, which has pf1 and pf2 combined (Hint: Union) * Market conditions have changed, let's drop GRUB and CVS from pf3 and add IBM (Hint: set_a.remove(x) and set_a.add(x) ) End of explanation children = ("Meadow", "Anthony") capos = ("Paulie", "Silvio", "Christopher", "Furio","Richie") len(children) len(capos) capos capos = list(capos) capos capos.append("Bobby") capos capos = tuple(capos) capos Explanation: <img src="images/sets_easy.jpg"> Tuples Pronounced too-puhl We will keep this section very short in this section, but will revisit this later once we have introduced some more advanced concepts. For now, remember that a tuple is used when the values are fixed. In Python terms, it is what is referred to as 'immutable'. <img src="images/tuples.jpg"> Examples: End of explanation monthly_high = (115.20, 113.60, 117.15, 120.90, 118.25) print("Monthly high is", max(monthly_high)) print("Monthly low is", min(monthly_high)) print("Range:", max(monthly_high)-min(monthly_high)) Explanation: Tuples and Numbers End of explanation dict_1 = {"a":1, "b":2, "c":3, "d":4} print(dict_1) fav_book = { "title": "Crime and Punishment", "author": "Fyodor Dostoyevsky", "price": 10.95, "pages": 400, "source": "Amazon", "awesome": True } fav_book["title"] fav_book["awesome"] # Rarely used in this manner fav_book.get("price") fav_book["weight"] = 42 print(fav_book) "awesome" in fav_book # Doesn't work! True in fav_book Explanation: <img src="images/commando.gif"> <img src="images/czech.gif"> Dictionaries Dictionaries contain a key and a value. They are also referred to as dicts, maps, or hashes. End of explanation dict_1 = {"a":1, "b":2, "c":3, "d":4} print(dict_1) dict_1.keys() dict_1.values() dict_1.pop("d") print(dict_1) fav_book.pop("awesome") fav_book Explanation: Common Dictionary Operations End of explanation a_dict = {"a":"e", "b":5, "c":3, "c": 4} b_dict = {"c":5, "d":6} a_set = set(a_dict) b_set = set(b_dict) a_set.intersection(b_set) Explanation: Exercise Find matching key between the two dictionaries. End of explanation
12,832
Given the following text description, write Python code to implement the functionality described below step by step Description: First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it. Importing and preparing your data Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not. Step1: What was the most popular type of complaint, and how many times was it filed? Step2: Make a horizontal bar graph of the top 5 most frequent complaint types. Step3: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually. Step4: According to your selection of data, how many cases were filed in March? How about May? Step5: I'd like to see all of the 311 complaints called in on April 1st. Surprise! We couldn't do this in class, but it was just a limitation of our data set Step6: What was the most popular type of complaint on April 1st? Step7: What were the most popular three types of complaint on April 1st Step8: What month has the most reports filed? How many? Graph it. Step9: What week of the year has the most reports filed? How many? Graph the weekly complaints. Step10: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic). Step11: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it. Step12: What hour of the day are the most complaints? Graph a day of complaints. Step13: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after? Step14: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am. Step15: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight). Step17: df_NYPD = df[df['Agency Name'] == 'New York City Police Department'] df_NYPD.groupby(by=df_NYPD.index.hour)['Unique Key'].count().plot(kind='bar') Dont understand why I can't change the labels here Step19: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints? Step20: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
Python Code: import datetime import datetime as dt dt.datetime.strptime('07/06/2015 10:58:27 AM', '%m/%d/%Y %I:%M:%S %p') #datetime.datetime(2015, 7, 6, 0, 0) parser = lambda date: pd.datetime.strptime(date, '%m/%d/%Y %H:%M:%S') df = pd.read_csv("311-2015.csv", low_memory=False, parse_dates=[1], dtype=str , nrows=200000) df.info() df.index = df['Created Date'] del df['Created Date'] df.head(2) Explanation: First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it. Importing and preparing your data Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not. End of explanation df['Complaint Type'].value_counts().head(1) Explanation: What was the most popular type of complaint, and how many times was it filed? End of explanation ax = df['Complaint Type'].value_counts().sort_values(ascending=True).tail(5).plot(kind='barh', figsize=(6,4), fontsize=9) ax.set_title("Top 5 Most Frequent 311-Complaints in 2015") ax.set_xlabel("Complaint Count") ax.set_ylabel("Complaint Type") plt.savefig("5 Most Frequent 311 Complaints in 2015.svg", bbox_inches='tight') #http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html Explanation: Make a horizontal bar graph of the top 5 most frequent complaint types. End of explanation df_summed_complaints = df['Borough'].value_counts() summed_complaints = pd.DataFrame(df_summed_complaints) summed_complaints.reset_index(inplace=True) summed_complaints.columns = ['Borough', 'Complaint Count'] Borough_head_count = pd.read_csv("NYC_Boroughs.csv") summed_complaints_merged = summed_complaints.merge(Borough_head_count, left_on='Borough', right_on='borough name') del summed_complaints_merged['borough name'] summed_complaints_merged summed_complaints_merged['Per Capita'] = summed_complaints_merged['Total'] / summed_complaints_merged['Complaint Count'] summed_complaints_merged['Per Capita'].sort_values(ascending=False) summed_complaints_merged[['Borough', 'Per Capita']] Sorted_complaints = summed_complaints_merged.sort_values(by='Per Capita', ascending=False) Sorted_complaints Explanation: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually. End of explanation df['2015-03']['Unique Key'].count() df['2015-05']['Unique Key'].count() Explanation: According to your selection of data, how many cases were filed in March? How about May? End of explanation df['2015-04-01']['Unique Key'].count() Explanation: I'd like to see all of the 311 complaints called in on April 1st. Surprise! We couldn't do this in class, but it was just a limitation of our data set End of explanation df['2015-04-01']['Complaint Type'].value_counts().head(1) Explanation: What was the most popular type of complaint on April 1st? 
End of explanation df['2015-04-01']['Complaint Type'].value_counts().head(3) Explanation: What were the most popular three types of complaint on April 1st End of explanation #PANDAS resample: http://stackoverflow.com/questions/17001389/pandas-resample-documentation/17001474#17001474 ax = df.resample('M')['Unique Key'].count().plot(kind='barh') #ax.set_yticks(['2015-01-31 00:00:00, 2015-02-28 00:00:00, 2015-03-30 00:00:00, 2015-04-30 00:00:00, 2015-05-31 00:00:00, 2015-06-30 00:00:00, 2015-07-31 00:00:00, 2015-08-31 00:00:00, 2015-09-30 00:00:00, 2015-10-31 00:00:00, 2015-11-30 00:00:00, 2015-12-31 00:00:00']) ax.set_yticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']) ax.set_title('Number of Complaints per Month') ax.set_ylabel('Months of the Year') ax.set_xlabel('Complaint Counts') Explanation: What month has the most reports filed? How many? Graph it. End of explanation df_week_count = df.resample('W')['Unique Key'].count() df_week_count = pd.DataFrame(df_week_count) df_week_count.sort_values(by='Unique Key', ascending=False).head(5) Explanation: What week of the year has the most reports filed? How many? Graph the weekly complaints. End of explanation df['Noise'] = df['Complaint Type'].str.contains('Noise') df_noise = df[df['Noise'] == True] ax = df_noise.resample('D')['Unique Key'].count().plot(kind='bar', figsize=(15,4)) ax.set_xticklabels('') ax.set_title('Number of Noise Complaints per Day in 2015') ax.set_ylabel('Noise Complaint Count') ax.set_xlabel('From January - December 2015 ') df_noise.groupby(by=df_noise.index.hour)['Unique Key'].count().plot(kind='bar', figsize=(12,6)) Explanation: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic). End of explanation df_day_count = df.resample('D')['Unique Key'].count() df_day_count = pd.DataFrame(df_day_count) ax = df_day_count.sort_values(by='Unique Key', ascending=True).tail(5).plot(kind='barh', legend=False) ax.set_yticklabels(['10. Oktober', '6. August', '3. August', '25. September', '19. Oktober']) ax.set_title('Top five complaint days in 2015') ax.set_ylabel('') ax.set_xlabel('Complaint Counts') plt.savefig("Top 5 Complaint Days", bbox_inches='tight') Explanation: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it. End of explanation ax = df.groupby(by=df.index.hour)['Unique Key'].count().plot(kind='bar', figsize=(12,6)) ax.set_title('Number of Complaints per hour in 2015') Explanation: What hour of the day are the most complaints? Graph a day of complaints. End of explanation df.groupby(by=df.index.hour)['Complaint Type'].value_counts().head(2) df_complaint_type_count_per_hour = df.groupby(by=df.index.hour)['Complaint Type'].value_counts() Top_complaints_by_hour = pd.DataFrame(df_complaint_type_count_per_hour) Top_complaints_by_hour['Complaint Type'][0].head(1) Top_complaints_by_hour['Complaint Type'][1].head(1) Top_complaints_by_hour['Complaint Type'][23].head(1) #More Reading: http://pandas.pydata.org/pandas-docs/version/0.13.1/timeseries.html Explanation: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after? End of explanation df.groupby(by=df.index.hour == 0)['Unique Key'].count() Explanation: So odd. What's the per-minute breakdown of complaints between 12am and 1am? 
You don't need to include 1am. End of explanation df['Agency'].value_counts().head(5) df['Agency Name'].value_counts().head(5) df_NYPD = df[df['Agency Name'] == 'New York City Police Department'] df_NYPD.groupby(by=df_NYPD.index.hour)['Unique Key'].count().plot(kind='bar') df_HPD = df[df['Agency Name'] == 'Department of Housing Preservation and Development'] df_HPD.groupby(by=df_HPD.index.hour)['Unique Key'].count().plot(kind='bar') df_DOT = df[df['Agency Name'] == 'Department of Transportation'] df_DOT.groupby(by=df_DOT.index.hour)['Unique Key'].count().plot(kind='bar') df_DPR = df[df['Agency Name'] == 'Department of Parks and Recreation'] df_DPR.groupby(by=df_DPR.index.hour)['Unique Key'].count().plot(kind='bar') df_DOHMH = df[df['Agency Name'] == 'Department of Health and Mental Hygiene'] df_DOHMH.groupby(by=df_DOHMH.index.hour)['Unique Key'].count().plot(kind='bar') Explanation: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight). End of explanation ax = df[df['Agency Name'] == 'New York City Police Department'].groupby(by=df_NYPD.index.hour)['Unique Key'].count().plot(kind='bar', stacked=True, figsize=(15,6)) df[df['Agency Name'] == 'Department of Housing Preservation and Development'].groupby(by=df_HPD.index.hour)['Unique Key'].count().plot(kind='bar', ax=ax, stacked=True, color='lightblue') df[df['Agency Name'] == 'Department of Transportation'].groupby(by=df_DOT.index.hour)['Unique Key'].count().plot(kind='bar', ax=ax, stacked=True, color='purple') df[df['Agency Name'] == 'Department of Health and Mental Hygiene'].groupby(by=df_DOHMH.index.hour)['Unique Key'].count().plot(kind='bar', ax=ax, stacked=True, color='grey') df[df['Agency Name'] == 'Department of Parks and Recreation'].groupby(by=df_DPR.index.hour)['Unique Key'].count().plot(kind='bar', ax=ax, stacked=True, color='green') ax.set_title('Time of Day Agencies file complaints') ax.set_ylabel('Complaint Count') ax.set_xlabel( Red: New York City Police Department Blue: Department of Housing Preservation and Development Purple: Department of Transportation Grey: Department of Health and Mental Hygiene Green: Department of Parks and Recreation) plt.savefig("Time of Day Agencies File Complaints.svg", bbox_inches='tight') Explanation: df_NYPD = df[df['Agency Name'] == 'New York City Police Department'] df_NYPD.groupby(by=df_NYPD.index.hour)['Unique Key'].count().plot(kind='bar') Dont understand why I can't change the labels here: End of explanation ax = df[df['Agency Name'] == 'New York City Police Department'].resample('W')['Agency'].count().plot(figsize=(15,4), linewidth=3) df[df['Agency Name'] == 'Department of Housing Preservation and Development'].resample('W')['Agency'].count().plot(color='lightblue', linewidth=2) df[df['Agency Name'] == 'Department of Transportation'].resample('W')['Agency'].count().plot(color='purple', linewidth=2) df[df['Agency Name'] == 'Department of Health and Mental Hygiene'].resample('W')['Agency'].count().plot(color='grey', linewidth=2) df[df['Agency Name'] == 'Department of Parks and Recreation'].resample('W')['Agency'].count().plot(color='green', linewidth=2) ax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']) ax.set_title('When the Angencies Filed the Reports 2015') ax.set_xlabel( Red: New York City Police Department Blue: Department of Housing Preservation and Development Purple: Department of Transportation Grey: Department 
of Health and Mental Hygiene Green: Department of Parks and Recreation) plt.savefig("When the Angencies Filed the Reports 2015.svg", bbox_inches='tight') ax = df_NYPD.resample('W')['Agency'].count().plot(kind='bar', figsize=(15,4)) ax.set_xticklabels(['']) ax.set_xlabel('January to December 2015') Explanation: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints? End of explanation df['2015-08']['Complaint Type'].value_counts().head(3) df['2015-07']['Complaint Type'].value_counts().head(3) df['2015-05']['Complaint Type'].value_counts().head(3) Explanation: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer. End of explanation
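One caveat with the per-minute step above: df.groupby(by=df.index.hour == 0) groups on a True/False flag rather than on minutes. A possible sketch of the intended per-minute count for the midnight hour, assuming df is the 311 frame indexed by Created Date built earlier:

midnight = df[df.index.hour == 0]                      # complaints between 12am and 1am
per_minute = midnight.groupby(midnight.index.minute)['Unique Key'].count()
per_minute.plot(kind='bar', figsize=(15, 4))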
12,833
Given the following text description, write Python code to implement the functionality described below step by step Description: Facies classification using Random Forest Contest entry by <a href=\"https Step1: A complete description of the dataset is given in the Original contest notebook by Brendon Hall, Enthought. A total of four measured rock properties and two interpreted geological properties are given as raw predictor variables for the prediction of the "Facies" class. Feature engineering As stated in our previous submission, we believe that feature engineering has a high potential for increasing classification success. A strategy for building new variables is explained below. The dataset is distributed along a series of drillholes intersecting a stratigraphic sequence. Sedimentary facies tend to be deposited in sequences that reflect the evolution of the paleo-environment (variations in water depth, water temperature, biological activity, currents strenght, detrital input, ...). Each facies represents a specific depositional environment and is in contact with facies that represent a progressive transition to an other environment. Thus, there is a relationship between neighbouring samples, and the distribution of the data along drillholes can be as important as data values for predicting facies. A series of new variables (features) are calculated and tested below to help represent the relationship of neighbouring samples and the overall texture of the data along drillholes. These variables are Step2: Building a prediction model from these variables A Random Forest model is built here to test the effect of these new variables on the prediction power. Algorithm parameters have been tuned so as to take into account the non-stationarity of the training and testing sets using the LeaveOneGroupOut cross-validation strategy. The size of individual tree leafs and nodes has been increased to the maximum possible without significantly increasing the variance so as to reduce the bias of the prediction. Box plot for a series of scores obtained through cross validation are presented below. Create predictor and target arrays Step3: Estimation of validation scores from this tuning Step4: Evaluating feature importances The individual contribution to the classification for each feature (i.e., feature importances) can be obtained from a Random Forest classifier. This gives a good idea of the classification power of individual features and helps understanding which type of feature engineering is the most promising. Caution should be taken when interpreting feature importances, as highly correlated variables will tend to dilute their classification power between themselves and will rank lower than uncorelated variables. Step5: Plot the feature importances of the forest Step6: Features derived from raw geological variables tend to have the highest classification power. Rolling min, max and mean tend to have better classification power than raw data. Wavelet approximation coeficients tend to have a similar to lower classification power than raw data. Features expressing local texture of the data (entropy, gradient, standard deviation and wavelet detail coeficients) have a low classification power but still participate in the prediction. Confusion matrix The confusion matrix from the validation test is presented below. Step7: Applying the classification model to test data Step8: Exporting results
Python Code: ###### Importing all used packages %matplotlib inline import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.colors as colors from mpl_toolkits.axes_grid1 import make_axes_locatable import seaborn as sns from pandas import set_option # set_option("display.max_rows", 10) pd.options.mode.chained_assignment = None ###### Import packages needed for the make_vars functions import Feature_Engineering as FE ##### import stuff from scikit learn from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import KFold, cross_val_score,LeavePGroupsOut, LeaveOneGroupOut, cross_val_predict from sklearn.metrics import confusion_matrix, make_scorer, f1_score, accuracy_score, recall_score, precision_score filename = '../facies_vectors.csv' training_data = pd.read_csv(filename) training_data.head() training_data.describe() Explanation: Facies classification using Random Forest Contest entry by <a href=\"https://geolern.github.io/index.html#\">geoLEARN</a>: <a href=\"https://github.com/mablou\">Martin Blouin</a>, <a href=\"https://github.com/lperozzi\">Lorenzo Perozzi</a> and <a href=\"https://github.com/Antoine-Cate\">Antoine Caté</a> <br> in collaboration with <a href=\"http://ete.inrs.ca/erwan-gloaguen\">Erwan Gloaguen</a> Original contest notebook by Brendon Hall, Enthought In this notebook we will train a machine learning algorithm to predict facies from well log data. The dataset comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). The dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a Random Forest model to classify facies types. Exploring the dataset First, we import and examine the dataset used to train the classifier. 
End of explanation ##### cD From wavelet db1 dwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db1') ##### cA From wavelet db1 dwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db1') ##### cD From wavelet db3 dwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db3') ##### cA From wavelet db3 dwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db3') ##### From entropy entropy_df = FE.make_entropy_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], l_foots=[2, 3, 4, 5, 7, 10]) ###### From gradient gradient_df = FE.make_gradient_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], dx_list=[2, 3, 4, 5, 6, 10, 20]) ##### From rolling average moving_av_df = FE.make_moving_av_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[1, 2, 5, 10, 20]) ##### From rolling standard deviation moving_std_df = FE.make_moving_std_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3 , 4, 5, 7, 10, 15, 20]) ##### From rolling max moving_max_df = FE.make_moving_max_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3, 4, 5, 7, 10, 15, 20]) ##### From rolling min moving_min_df = FE.make_moving_min_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3 , 4, 5, 7, 10, 15, 20]) ###### From rolling NM/M ratio rolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=training_data, windows=[5, 10, 15, 20, 30, 50, 75, 100, 200]) ###### From distance to NM and M, up and down dist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=training_data) dist_M_down_df = FE.make_distance_to_M_down_vars(wells_df=training_data) dist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=training_data) dist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=training_data) list_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df, entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df, rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df] combined_df = training_data for var_df in list_df_var: temp_df = var_df combined_df = pd.concat([combined_df,temp_df],axis=1) combined_df.replace(to_replace=np.nan, value='-1', inplace=True) print (combined_df.shape) combined_df.head(5) Explanation: A complete description of the dataset is given in the Original contest notebook by Brendon Hall, Enthought. A total of four measured rock properties and two interpreted geological properties are given as raw predictor variables for the prediction of the "Facies" class. Feature engineering As stated in our previous submission, we believe that feature engineering has a high potential for increasing classification success. A strategy for building new variables is explained below. The dataset is distributed along a series of drillholes intersecting a stratigraphic sequence. Sedimentary facies tend to be deposited in sequences that reflect the evolution of the paleo-environment (variations in water depth, water temperature, biological activity, currents strenght, detrital input, ...). 
Each facies represents a specific depositional environment and is in contact with facies that represent a progressive transition to an other environment. Thus, there is a relationship between neighbouring samples, and the distribution of the data along drillholes can be as important as data values for predicting facies. A series of new variables (features) are calculated and tested below to help represent the relationship of neighbouring samples and the overall texture of the data along drillholes. These variables are: detail and approximation coeficients at various levels of two wavelet transforms (using two types of Daubechies wavelets); measures of the local entropy with variable observation windows; measures of the local gradient with variable observation windows; rolling statistical calculations (i.e., mean, standard deviation, min and max) with variable observation windows; ratios between marine and non-marine lithofacies with different observation windows; distances from the nearest marine or non-marine occurence uphole and downhole. Functions used to build these variables are located in the Feature Engineering python script. All the data exploration work related to the conception and study of these variables is not presented here. End of explanation X = combined_df.iloc[:, 4:] y = combined_df['Facies'] groups = combined_df['Well Name'] Explanation: Building a prediction model from these variables A Random Forest model is built here to test the effect of these new variables on the prediction power. Algorithm parameters have been tuned so as to take into account the non-stationarity of the training and testing sets using the LeaveOneGroupOut cross-validation strategy. The size of individual tree leafs and nodes has been increased to the maximum possible without significantly increasing the variance so as to reduce the bias of the prediction. Box plot for a series of scores obtained through cross validation are presented below. Create predictor and target arrays End of explanation scoring_param = ['accuracy', 'recall_weighted', 'precision_weighted','f1_weighted'] scores = [] Cl = RandomForestClassifier(n_estimators=100, max_features=0.1, min_samples_leaf=25, min_samples_split=50, class_weight='balanced', random_state=42, n_jobs=-1) lpgo = LeavePGroupsOut(n_groups=2) for scoring in scoring_param: cv=lpgo.split(X, y, groups) validated = cross_val_score(Cl, X, y, scoring=scoring, cv=cv, n_jobs=-1) scores.append(validated) scores = np.array(scores) scores = np.swapaxes(scores, 0, 1) scores = pd.DataFrame(data=scores, columns=scoring_param) sns.set_style('white') fig,ax = plt.subplots(figsize=(8,6)) sns.boxplot(data=scores) plt.xlabel('scoring parameters') plt.ylabel('score') plt.title('Classification scores for tuned parameters'); Explanation: Estimation of validation scores from this tuning End of explanation ####### Evaluation of feature importances Cl = RandomForestClassifier(n_estimators=75, max_features=0.1, min_samples_leaf=25, min_samples_split=50, class_weight='balanced', random_state=42,oob_score=True, n_jobs=-1) Cl.fit(X, y) print ('OOB estimate of accuracy for prospectivity classification using all features: %s' % str(Cl.oob_score_)) importances = Cl.feature_importances_ std = np.std([tree.feature_importances_ for tree in Cl.estimators_], axis=0) indices = np.argsort(importances)[::-1] print("Feature ranking:") Vars = list(X.columns.values) for f in range(X.shape[1]): print("%d. 
feature %d %s (%f)" % (f + 1, indices[f], Vars[indices[f]], importances[indices[f]])) Explanation: Evaluating feature importances The individual contribution to the classification for each feature (i.e., feature importances) can be obtained from a Random Forest classifier. This gives a good idea of the classification power of individual features and helps understanding which type of feature engineering is the most promising. Caution should be taken when interpreting feature importances, as highly correlated variables will tend to dilute their classification power between themselves and will rank lower than uncorelated variables. End of explanation sns.set_style('white') fig,ax = plt.subplots(figsize=(15,5)) ax.bar(range(X.shape[1]), importances[indices],color="r", align="center") plt.ylabel("Feature importance") plt.xlabel('Ranked features') plt.xticks([], indices) plt.xlim([-1, X.shape[1]]); Explanation: Plot the feature importances of the forest End of explanation ######## Confusion matrix from this tuning cv=LeaveOneGroupOut().split(X, y, groups) y_pred = cross_val_predict(Cl, X, y, cv=cv, n_jobs=-1) conf_mat = confusion_matrix(y, y_pred) list_facies = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS'] conf_mat = pd.DataFrame(conf_mat, columns=list_facies, index=list_facies) conf_mat.head(10) Explanation: Features derived from raw geological variables tend to have the highest classification power. Rolling min, max and mean tend to have better classification power than raw data. Wavelet approximation coeficients tend to have a similar to lower classification power than raw data. Features expressing local texture of the data (entropy, gradient, standard deviation and wavelet detail coeficients) have a low classification power but still participate in the prediction. Confusion matrix The confusion matrix from the validation test is presented below. 
End of explanation filename = '../validation_data_nofacies.csv' test_data = pd.read_csv(filename) test_data.head(5) ##### cD From wavelet db1 dwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db1') ##### cA From wavelet db1 dwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db1') ##### cD From wavelet db3 dwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db3') ##### cA From wavelet db3 dwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], levels=[1, 2, 3, 4], wavelet='db3') ##### From entropy entropy_df = FE.make_entropy_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], l_foots=[2, 3, 4, 5, 7, 10]) ###### From gradient gradient_df = FE.make_gradient_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], dx_list=[2, 3, 4, 5, 6, 10, 20]) ##### From rolling average moving_av_df = FE.make_moving_av_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[1, 2, 5, 10, 20]) ##### From rolling standard deviation moving_std_df = FE.make_moving_std_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3 , 4, 5, 7, 10, 15, 20]) ##### From rolling max moving_max_df = FE.make_moving_max_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3, 4, 5, 7, 10, 15, 20]) ##### From rolling min moving_min_df = FE.make_moving_min_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'], windows=[3 , 4, 5, 7, 10, 15, 20]) ###### From rolling NM/M ratio rolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=test_data, windows=[5, 10, 15, 20, 30, 50, 75, 100, 200]) ###### From distance to NM and M, up and down dist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=test_data) dist_M_down_df = FE.make_distance_to_M_down_vars(wells_df=test_data) dist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=test_data) dist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=test_data) combined_test_df = test_data list_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df, entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df, rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df] for var_df in list_df_var: temp_df = var_df combined_test_df = pd.concat([combined_test_df,temp_df],axis=1) combined_test_df.replace(to_replace=np.nan, value='-99999', inplace=True) X_test = combined_test_df.iloc[:, 3:] print (combined_test_df.shape) combined_test_df.head(5) Cl = RandomForestClassifier(n_estimators=100, max_features=0.1, min_samples_leaf=25, min_samples_split=50, class_weight='balanced', random_state=42, n_jobs=-1) Cl.fit(X, y) y_test = Cl.predict(X_test) y_test = pd.DataFrame(y_test, columns=['Predicted Facies']) test_pred_df = pd.concat([combined_test_df[['Well Name', 'Depth']], y_test], axis=1) test_pred_df.head() Explanation: Applying the classification model to test data End of explanation test_pred_df.to_pickle('Prediction_blind_wells_RF_c.pkl') Explanation: Exporting results End of explanation
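Since seaborn and matplotlib are already imported above, the confusion matrix could also be rendered as a heatmap. A small sketch, assuming conf_mat is the labelled DataFrame built in the validation step:

fig, ax = plt.subplots(figsize=(8, 6))
sns.heatmap(conf_mat, annot=True, fmt='d', cmap='Blues', ax=ax)
ax.set_xlabel('Predicted facies')
ax.set_ylabel('True facies')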
12,834
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to scikit-learn Classification of Handwritten Digits the task is to predict, given an image, which digit it represents. We are given samples of each of the 10 possible classes (the digits zero through nine) on which we fit an estimator to be able to predict the classes to which unseen samples belong. 1. Data collection 2. Data preprocessing A dataset is a dictionary-like object that holds all the data and some metadata about the data. Step1: digits.images.shape Step2: 3. Build a model on training data In scikit-learn, an estimator for classification is a Python object that implements the methods fit(X, y) and predict(T). An example of an estimator is the class sklearn.svm.SVC that implements support vector classification. Step3: learning Step4: predicting Step5: 4. Evaluate the model on the test data learning dataset Step6: test dataset Step7: evaluation metrics Step8: 5. Deploy to the real system
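Before the full notebook code, a minimal sketch of the fit/predict workflow the description above lays out, using the same SVC estimator; the held-out split of 500 samples mirrors the one used later and is otherwise illustrative.

from sklearn import datasets, svm, metrics

digits = datasets.load_digits()
n_train = len(digits.target) - 500            # hold out the last 500 samples for testing

clf = svm.SVC(gamma=0.001, C=100.)            # an estimator implementing fit(X, y) / predict(T)
clf.fit(digits.data[:n_train], digits.target[:n_train])

predicted = clf.predict(digits.data[n_train:])
print(metrics.classification_report(digits.target[n_train:], predicted))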
Python Code: from sklearn import datasets digits = datasets.load_digits() %pylab inline digits.data digits.data.shape # n_samples, n_features Explanation: Introduction to scikit-learn Classification of Handwritten Digits the task is to predict, given an image, which digit it represents. We are given samples of each of the 10 possible classes (the digits zero through nine) on which we fit an estimator to be able to predict the classes to which unseen samples belong. 1. Data collection 2. Data preprocessing A dataset is a dictionary-like object that holds all the data and some metadata about the data. End of explanation digits.target digits.target.shape # show images fig = plt.figure(figsize=(6, 6)) # figure size in inches fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) # plot the digits: each image is 8x8 pixels for i in range(64): ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[]) ax.imshow(digits.images[i], cmap=plt.cm.binary) # label the image with the target value ax.text(0, 7, str(digits.target[i])) Explanation: digits.images.shape End of explanation from sklearn import svm clf = svm.SVC(gamma=0.001, C=100.) Explanation: 3. Build a model on training data In scikit-learn, an estimator for classification is a Python object that implements the methods fit(X, y) and predict(T). An example of an estimator is the class sklearn.svm.SVC that implements support vector classification. End of explanation clf.fit(digits.data[:-500], digits.target[:-500]) Explanation: learning End of explanation clf.predict(digits.data[-1:]), digits.target[-1:] Explanation: predicting End of explanation (clf.predict(digits.data[:-500]) == digits.target[:-500]).sum() / float(len(digits.target[:-500])) Explanation: 4. Evaluate the model on the test data learning dataset End of explanation (clf.predict(digits.data[-500:]) == digits.target[-500:]).sum() / 500.0 Explanation: test dataset End of explanation from sklearn import metrics def evaluate(expected, predicted): print("Classification report:\n%s\n" % metrics.classification_report(expected, predicted)) print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted)) predicted = clf.predict(digits.data[-500:]) evaluate(digits.target[-500:], predicted) Explanation: evaluation metrics End of explanation import pickle s = pickle.dumps(clf) clf2 = pickle.loads(s) clf2.predict(digits.data[-1:]), digits.target[-1:] Explanation: 5. Deploy to the real system End of explanation
12,835
Given the following text description, write Python code to implement the functionality described below step by step Description: Take the set of pings, make sure we have actual clientIds and remove duplicate pings. Step2: We're going to dump each event from the pings. Do a little empty data sanitization so we don't get NoneType errors during the dump. We create a JSON array of active experiments as part of the dump. Step3: The data can have duplicate events, due to a bug in the data collection that was fixed (bug 1246973). We still need to de-dupe the events. Because pings can be archived on device and submitted on later days, we can't assume dupes only happen on the same submission day. We don't use submission date when de-duping. Step4: Create a set of events from "saved-session" UI telemetry. Output the data to CSV or Parquet. This script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs.
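The de-duplication described above reduces to keying each record and keeping a single value per key; a minimal PySpark sketch of that pattern, assuming a SparkContext is already available as sc (as in this notebook) and using made-up records rather than the real ping schema.

# Assumes a SparkContext is already available as `sc`; records below are illustrative
records = sc.parallelize([("doc1", {"v": 1}), ("doc1", {"v": 2}), ("doc2", {"v": 3})])

deduped = (records
           .reduceByKey(lambda x, y: x)   # keep one record per key (first one seen)
           .map(lambda kv: kv[1]))        # drop the key, keep the record
print(deduped.collect())                  # -> one record for doc1, one for doc2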
Python Code: def dedupe_pings(rdd): return rdd.filter(lambda p: p["meta/clientId"] is not None)\ .map(lambda p: (p["meta/documentId"], p))\ .reduceByKey(lambda x, y: x)\ .map(lambda x: x[1]) Explanation: Take the set of pings, make sure we have actual clientIds and remove duplicate pings. End of explanation def safe_str(obj): return the byte string representation of obj if obj is None: return unicode("") return unicode(obj) def transform(ping): output = [] # These should not be None since we filter those out & ingestion process adds the data clientId = ping["meta/clientId"] submissionDate = dt.datetime.strptime(ping["meta/submissionDate"], "%Y%m%d") events = ping["payload/UIMeasurements"] if events and isinstance(events, list): for event in events: if isinstance(event, dict) and "type" in event and event["type"] == "event": if "timestamp" not in event or "action" not in event or "method" not in event or "sessions" not in event: continue # Verify timestamp is a long, otherwise ignore the event timestamp = None try: timestamp = long(event["timestamp"]) except: continue # Force all fields to strings action = safe_str(event["action"]) method = safe_str(event["method"]) # The extras is an optional field extras = unicode("") if "extras" in event and safe_str(event["extras"]) is not None: extras = safe_str(event["extras"]) sessions = {} experiments = [] for session in event["sessions"]: if "experiment.1:" in session: experiments.append(safe_str(session[13:])) elif "firstrun.1" in session: sessions[unicode("firstrun")] = 1 elif "awesomescreen.1" in session: sessions[unicode("awesomescreen")] = 1 elif "reader.1" in session: sessions[unicode("reader")] = 1 output.append([clientId, submissionDate, timestamp, action, method, extras, json.dumps(sessions.keys()), json.dumps(experiments)]) return output Explanation: We're going to dump each event from the pings. Do a little empty data sanitization so we don't get NoneType errors during the dump. We create a JSON array of active experiments as part of the dump. End of explanation def dedupe_events(rdd): return rdd.map(lambda p: (p[0] + safe_str(p[2]) + p[3] + p[4], p))\ .reduceByKey(lambda x, y: x)\ .map(lambda x: x[1]) Explanation: The data can have duplicate events, due to a bug in the data collection that was fixed (bug 1246973). We still need to de-dupe the events. Because pings can be archived on device and submitted on later days, we can't assume dupes only happen on the same submission day. We don't use submission date when de-duping. 
End of explanation channels = ["nightly", "aurora", "beta", "release"] start = dt.datetime.now() - dt.timedelta(1) end = dt.datetime.now() - dt.timedelta(1) day = start while day <= end: for channel in channels: print "\nchannel: " + channel + ", date: " + day.strftime("%Y%m%d") pings = get_pings(sc, app="Fennec", channel=channel, submission_date=(day.strftime("%Y%m%d"), day.strftime("%Y%m%d")), build_id=("20100101000000", "99999999999999"), fraction=1) subset = get_pings_properties(pings, ["meta/clientId", "meta/documentId", "meta/submissionDate", "payload/UIMeasurements"]) subset = dedupe_pings(subset) print subset.first() rawEvents = subset.flatMap(transform) print "\nRaw count: " + str(rawEvents.count()) print rawEvents.first() uniqueEvents = dedupe_events(rawEvents) print "\nUnique count: " + str(uniqueEvents.count()) print uniqueEvents.first() s3_output = "s3n://net-mozaws-prod-us-west-2-pipeline-analysis/mobile/android_events" s3_output += "/v1/channel=" + channel + "/submission=" + day.strftime("%Y%m%d") schema = StructType([ StructField("clientid", StringType(), False), StructField("submissiondate", TimestampType(), False), StructField("ts", LongType(), True), StructField("action", StringType(), True), StructField("method", StringType(), True), StructField("extras", StringType(), True), StructField("sessions", StringType(), True), StructField("experiments", StringType(), True) ]) grouped = sqlContext.createDataFrame(uniqueEvents, schema) grouped.coalesce(1).write.parquet(s3_output) day += dt.timedelta(1) Explanation: Create a set of events from "saved-session" UI telemetry. Output the data to CSV or Parquet. This script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs. End of explanation
12,836
Given the following text description, write Python code to implement the functionality described below step by step Description: Data Science Demo Step1: And now we create a HiveContext to enable Spark to access data from HIVE Step2: Let's take a look at the dataset - first 5 rows Step3: Exploring the Dataset What are the different types of crime resolutions? Step4: Let's define a crime as 'resolved' if it has any string except "NONE" in the resolution column. Question Step5: Let's look at the longitude/latitude values in more detail. Spark provides the describe() function to see this some basic statistics of these columns Step6: Notice that the max values for longitude (-120.5) and latitude (90.0) seem strange. Those are not inside the SF area. Let's see how many bad values like this exist in the dataset Step9: Seems like this is a data quality issue where some data points just have a fixed (bad) value of -120.5, 90. Computing Neighborhoods Now I create a new dataset called crimes2 Step10: Question Step11: And as a bar chart Step12: Using the Python Folium package I draw an interactive map of San Francisco, and color-code each neighborhood with the percent of resolved crimes Step14: Preparing The Feature Matrix First, some basic feature engineering work Step15: For this demo, I create a training set for my model from the data in years 2011-2013 and a testing/validation set from the data in year 2014. Step16: For convenience, I define a function to compute our classification metrics (we will use it later) Step17: Next, I use Spark-ML to create a pipeline of transformation to generate the feature vector for each crime event Step18: Predictive Models Step19: Similarly, create the same pipeline with the Random Forest classifier Step21: Measure Model Accuracy Per Neighborhood I would like to also show the accuracy of the model for each neighborhood. For this, I compute the centroid for each neighborhood, using ESRI's ST_Centroid() HIVE UDF Step22: Now I draw a map, this time showing a marker with the accuracy for each neighborhood, using the results from the Random Forest model.
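A minimal sketch of the per-neighborhood aggregation the questions above build towards, using the Spark DataFrame API; it assumes the crimes2 DataFrame described above with its 0/1 resolved column.

# Assumes the `crimes2` DataFrame built in this demo, with a 0/1 `resolved` column
resolved_by_hood = (crimes2
                    .groupBy('neighborhood')
                    .avg('resolved')                               # fraction of resolved crimes
                    .withColumnRenamed('avg(resolved)', 'pct_resolved')
                    .orderBy('pct_resolved', ascending=False))
resolved_by_hood.show(10)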
Python Code: # Set up Spark Context from pyspark import SparkContext, SparkConf SparkContext.setSystemProperty('spark.executor.memory', '4g') conf = SparkConf() conf.set('spark.sql.autoBroadcastJoinThreshold', 200*1024*1024) # 200MB for map-side joins conf.set('spark.executor.instances', 12) sc = SparkContext('yarn-client', 'Spark-demo', conf=conf) Explanation: Data Science Demo: predicting Crime resolution in San Francisco The City of San Francisco publishes historical crime events on their http://sfdata.gov website. I have loaded this dataset into HIVE. Let's use Spark to do some fun stuff with it! Setting Up Spark First we create a SparkContext: End of explanation # Setup HiveContext from pyspark.sql import HiveContext, Row hc = HiveContext(sc) hc.sql("use demo") hc.sql("DESCRIBE crimes").show() Explanation: And now we create a HiveContext to enable Spark to access data from HIVE: End of explanation crimes = hc.table("crimes") crimes.limit(5).toPandas() Explanation: Let's take a look at the dataset - first 5 rows: End of explanation crimes.select("resolution").distinct().toPandas() Explanation: Exploring the Dataset What are the different types of crime resolutions? End of explanation total = crimes.count() num_resolved = crimes.filter(crimes.resolution != 'NONE').count() print str(total) + " crimes total, out of which " + str(num_resolved) + " were resolved" Explanation: Let's define a crime as 'resolved' if it has any string except "NONE" in the resolution column. Question: How many crimes total in the dataset? How many resolved? End of explanation c1 = crimes.select(crimes.longitude.cast("float").alias("long"), crimes.latitude.cast("float").alias("lat")) c1.describe().toPandas() Explanation: Let's look at the longitude/latitude values in more detail. Spark provides the describe() function to see this some basic statistics of these columns: End of explanation c2 = c1.filter('lat < 37 or lat > 38') print c2.count() c2.head(3) Explanation: Notice that the max values for longitude (-120.5) and latitude (90.0) seem strange. Those are not inside the SF area. Let's see how many bad values like this exist in the dataset: End of explanation hc.sql("add jar /home/jupyter/notebooks/jars/guava-11.0.2.jar") hc.sql("add jar /home/jupyter/notebooks/jars/spatial-sdk-json.jar") hc.sql("add jar /home/jupyter/notebooks/jars/esri-geometry-api.jar") hc.sql("add jar /home/jupyter/notebooks/jars/spatial-sdk-hive.jar") hc.sql("create temporary function ST_Contains as 'com.esri.hadoop.hive.ST_Contains'") hc.sql("create temporary function ST_Point as 'com.esri.hadoop.hive.ST_Point'") cf = hc.sql( SELECT date_str, time, longitude, latitude, resolution, category, district, dayofweek, description FROM crimes WHERE longitude < -121.0 and latitude < 38.0 ).repartition(50) cf.registerTempTable("cf") crimes2 = hc.sql( SELECT date_str, time, dayofweek, category, district, description, longitude, latitude, if (resolution == 'NONE',0.0,1.0) as resolved, neighborho as neighborhood FROM sf_neighborhoods JOIN cf WHERE ST_Contains(sf_neighborhoods.shape, ST_Point(cf.longitude, cf.latitude)) ).cache() crimes2.registerTempTable("crimes2") crimes2.limit(5).toPandas() Explanation: Seems like this is a data quality issue where some data points just have a fixed (bad) value of -120.5, 90. Computing Neighborhoods Now I create a new dataset called crimes2: 1. Without the points that have invalid longitude/latitude 2. I calculate the neighborhood associated with each long/lat (for each crime), using ESRI geo-spatial UDFs 3. 
Translate "resolution" to "resolved" (1.0 = resolved, 0.0 = unresolved) End of explanation ngrp = crimes2.groupBy('neighborhood') ngrp_pd = ngrp.avg('resolved').toPandas() ngrp_pd['count'] = ngrp.count().toPandas()['count'] ngrp_pd.columns = ['neighborhood', '% resolved', 'count'] data = ngrp_pd.sort(columns = '% resolved', ascending=False) print data.set_index('neighborhood').head(10) Explanation: Question: what is the percentage of crimes resolved in each neighborhood? End of explanation import matplotlib matplotlib.style.use('ggplot') data.iloc[:10].plot('neighborhood', '% resolved', kind='bar', legend=False, figsize=(12,5), fontsize=15) Explanation: And as a bar chart: End of explanation from IPython.display import HTML import folium map_width=1000 map_height=600 sf_lat = 37.77 sf_long = -122.4 def inline_map(m, width=map_width, height=map_height): m.create_map() srcdoc = m.HTML.replace('"', '&quot;') embed = HTML('<iframe srcdoc="{}" ' 'style="width: {}px; height: {}px; ' 'border: none"></iframe>'.format(srcdoc, width, height)) return embed map_sf = folium.Map(location=[sf_lat, sf_long], zoom_start=12, width=map_width, height=map_height) map_sf.geo_json(geo_path='data/sfn.geojson', data=ngrp_pd, columns=['neighborhood', '% resolved'], key_on='feature.properties.neighborho', threshold_scale=[0, 0.3, 0.4, 0.5, 1.0], fill_color='OrRd', fill_opacity=0.6, line_opacity=0.6, legend_name='P(resolved)') inline_map(map_sf) Explanation: Using the Python Folium package I draw an interactive map of San Francisco, and color-code each neighborhood with the percent of resolved crimes: End of explanation import pandas as pd crimes3 = hc.sql( SELECT cast(SUBSTR(date_str,7,4) as int) as year, cast(SUBSTR(date_str,1,2) as int) as month, cast(SUBSTR(time,1,2) as int) as hour, category, district, dayofweek, description, neighborhood, longitude, latitude, resolved FROM crimes2 ).cache() crimes3.limit(5).toPandas() Explanation: Preparing The Feature Matrix First, some basic feature engineering work: 1. Extract year and month values from the date field 2. Extract hour from the time field End of explanation trainData = crimes3.filter(crimes3.year>=2011).filter(crimes3.year<=2013).cache() testData = crimes3.filter(crimes3.year==2014).cache() print "training set has " + str(trainData.count()) + " instances" print "test set has " + str(testData.count()) + " instances" Explanation: For this demo, I create a training set for my model from the data in years 2011-2013 and a testing/validation set from the data in year 2014. 
End of explanation def eval_metrics(lap): tp = float(len(lap[(lap['label']==1) & (lap['prediction']==1)])) tn = float(len(lap[(lap['label']==0) & (lap['prediction']==0)])) fp = float(len(lap[(lap['label']==0) & (lap['prediction']==1)])) fn = float(len(lap[(lap['label']==1) & (lap['prediction']==0)])) precision = tp / (tp+fp) recall = tp / (tp+fn) accuracy = (tp+tn) / (tp+tn+fp+fn) return {'precision': precision, 'recall': recall, 'accuracy': accuracy} Explanation: For convenience, I define a function to compute our classification metrics (we will use it later): precision, recall and accuracy End of explanation from IPython.display import Image Image(filename='pipeline.png') from pyspark.ml.feature import StringIndexer, VectorAssembler, Tokenizer, HashingTF from pyspark.ml import Pipeline inx1 = StringIndexer(inputCol="category", outputCol="cat-inx") inx2 = StringIndexer(inputCol="dayofweek", outputCol="dow-inx") inx3 = StringIndexer(inputCol="district", outputCol="dis-inx") inx4 = StringIndexer(inputCol="neighborhood", outputCol="ngh-inx") inx5 = StringIndexer(inputCol="resolved", outputCol="label") parser = Tokenizer(inputCol="description", outputCol="words") hashingTF = HashingTF(numFeatures=50, inputCol="words", outputCol="hash-inx") vecAssembler = VectorAssembler(inputCols =["month", "hour", "cat-inx", "dow-inx", "dis-inx", "ngh-inx", "hash-inx"], outputCol="features") Explanation: Next, I use Spark-ML to create a pipeline of transformation to generate the feature vector for each crime event: - Converting each string variable (e.g., category, dayofweek, etc) to a categorical variable - Converting the "description" field to a set of word features using Tokenizer() and HashingTF() - Convert "resolved" (float) into a categorical variable "label" - Assembling all the features into the a single feature vector (Vector Assembler) End of explanation from pyspark.ml.classification import LogisticRegression lr = LogisticRegression(maxIter=20, regParam=0.1, labelCol="label") pipeline_lr = Pipeline(stages=[inx1, inx2, inx3, inx4, inx5, parser, hashingTF, vecAssembler, lr]) model_lr = pipeline_lr.fit(trainData) results_lr = model_lr.transform(testData) m = eval_metrics(results_lr.select("label", "prediction").toPandas()) print "precision = " + str(m['precision']) + ", recall = " + str(m['recall']) + ", accuracy = " + str(m['accuracy']) Explanation: Predictive Models: Logistic Regression and Random Forest Finish up the end-to-end pipeline and run the training set through it. The resulting model is in the model_lr variable, and is used to predict on the testData. 
I use the previously defined eval_metrics() function to compute precision, recall, and accuracy End of explanation from pyspark.ml.classification import RandomForestClassifier rf = RandomForestClassifier(numTrees=250, maxDepth=5, maxBins=50, seed=42) pipeline_rf = Pipeline(stages=[inx1, inx2, inx3, inx4, inx5, parser, hashingTF, vecAssembler, rf]) model_rf = pipeline_rf.fit(trainData) results_rf = model_rf.transform(testData) m = eval_metrics(results_rf.select("label", "prediction").toPandas()) print "precision = " + str(m['precision']) + ", recall = " + str(m['recall']) + ", accuracy = " + str(m['accuracy']) Explanation: Similarly, create the same pipeline with the Random Forest classifier: End of explanation hc.sql("add jar /home/jupyter/notebooks/jars/guava-11.0.2.jar") hc.sql("add jar /home/jupyter/notebooks/jars/esri-geometry-api.jar") hc.sql("add jar /home/jupyter/notebooks/jars/spatial-sdk-hive.jar") hc.sql("add jar /home/jupyter/notebooks/jars/spatial-sdk-json.jar") hc.sql("create temporary function ST_Centroid as 'com.esri.hadoop.hive.ST_Centroid'") hc.sql("create temporary function ST_X as 'com.esri.hadoop.hive.ST_X'") hc.sql("create temporary function ST_Y as 'com.esri.hadoop.hive.ST_Y'") df_centroid = hc.sql( SELECT neighborho as neighborhood, ST_X(ST_Centroid(sf_neighborhoods.shape)) as cent_longitude, ST_Y(ST_Centroid(sf_neighborhoods.shape)) as cent_latitude FROM sf_neighborhoods ) df_centroid.cache() Explanation: Measure Model Accuracy Per Neighborhood I would like to also show the accuracy of the model for each neighborhood. For this, I compute the centroid for each neighborhood, using ESRI's ST_Centroid() HIVE UDF End of explanation df = results_rf.select("neighborhood", "label", "prediction").toPandas() map_sf = folium.Map(location=[sf_lat, sf_long], zoom_start=12, width=map_width, height=map_height) n_list = results_rf.select("neighborhood").distinct().toPandas()['neighborhood'].tolist() # list of neighborhoods for n in df_centroid.collect(): if n.neighborhood in n_list: m = eval_metrics(df[df['neighborhood']==n.neighborhood]) map_sf.simple_marker([n.cent_latitude, n.cent_longitude], \ popup = n.neighborhood + ": accuracy = %.2f" % m['accuracy']) inline_map(map_sf) Explanation: Now I draw a map, this time showing a marker with the accuracy for each neighborhood, using the results from the Random Forest model. End of explanation
12,837
Given the following text description, write Python code to implement the functionality described below step by step Description: The dataset The dataset is the mnist digits which is a common toy data set for testing machine learning methods on images. This is a subset of the mnist set which have also been shrunked in size. Let's load them and plot some. In addition to the images, there are also the labels Step1: Plot 10 of the training images. Rerun this cell to plot new images. Step2: Baseline linear classifier Before we spend our precious time setting up and training deep networks on the data, let's see how a simple linear classifier from sklearn can do. Step3: Try training a linear classifier on the Even-Odd labels Step4: Fully connected MLP This is a simple network made from two layers. On the Keras documentation page, you can find other nonlinearities under "Core Layers". You can add more layers, changes the layers, change the optimizer, or add dropout. Step5: Convolutional MLP We can also have the first layer be a set of small filters which are convolved with the images. Try different parameters and see what happens. (This network might be slow.) Step6: Visualizing the filters Linear classifier Step7: MLP Step8: CNN
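A minimal sketch of the kind of two-layer softmax network described above, in the same Sequential/Dense style of Keras used by this notebook; the input size and hidden width are illustrative placeholders, not the notebook's actual values.

from keras.models import Sequential
from keras.layers import Dense, Activation

input_dim, n_classes = 64, 10                 # placeholder shapes for flattened images / labels
model = Sequential()
model.add(Dense(32, input_shape=(input_dim,)))  # hidden fully connected layer
model.add(Activation('tanh'))
model.add(Dense(n_classes))                     # output layer, one unit per class
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()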
Python Code: data = loadmat('small_mnist.mat') # Training data (images, 0-9, even-odd) # Images are stored in a (batch, x, y) array # Labels are integers train_im = data['train_im'] train_y = data['train_y'].ravel() train_eo = data['train_eo'].ravel() # Validation data (images, 0-9, even-odd) # Same format as training data valid_im = data['valid_im'] valid_y = data['valid_y'].ravel() valid_eo = data['valid_eo'].ravel() Explanation: The dataset The dataset is the mnist digits which is a common toy data set for testing machine learning methods on images. This is a subset of the mnist set which have also been shrunked in size. Let's load them and plot some. In addition to the images, there are also the labels: 0-9 or even-odd. Load the data. End of explanation im_size = train_im.shape[-1] order = np.random.permutation(train_im.shape[0]) ims = tri(train_im[order[:10]].reshape((-1, im_size**2)), (im_size, im_size), (1, 10), (1,1)) plt.imshow(ims, cmap='gray', interpolation='nearest') plt.axis('off') print('Labels: {}'.format(train_y[order[:10]])) print('Odd-Even: {}'.format(train_eo[order[:10]])) Explanation: Plot 10 of the training images. Rerun this cell to plot new images. End of explanation # Create the classifier to do multinomial classification linear_classifier = LR(solver='lbfgs', multi_class='multinomial', C=0.1) # Train and evaluate the classifier linear_classifier.fit(train_im.reshape(-1, im_size**2), train_y) print('Training Error on (0-9): {}'.format(linear_classifier.score(train_im.reshape(-1, im_size**2), train_y))) print('Validation Error on (0-9): {}'.format(linear_classifier.score(valid_im.reshape(-1, im_size**2), valid_y))) Explanation: Baseline linear classifier Before we spend our precious time setting up and training deep networks on the data, let's see how a simple linear classifier from sklearn can do. End of explanation # Import things from Keras Library from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers.convolutional import Convolution2D from keras.regularizers import l2 from keras.optimizers import SGD, Adam, RMSprop from keras.utils import np_utils Explanation: Try training a linear classifier on the Even-Odd labels: train_eo! Using a Deep Nets library If you're just starting off with deep nets and want to quickly try them on a dataset it is probably easiest to start with an existing library rather than writing your own. There are now a bunch of different libraries written for Python. We'll be using Keras which is designed to be easy to use. In matlab, there is the Neural Network Toolbox. Keras documentation can be found here: http://keras.io/ We'll do the next most complicated network comparer to linear regression: a two layer network! End of explanation # Create the network! 
mlp = Sequential() # First fully connected layer mlp.add(Dense(im_size**2/2, input_shape=(im_size**2,), W_regularizer=l2(0.001))) # number of hidden units, default is 100 mlp.add(Activation('tanh')) # nonlinearity print('Shape after layer 1: {}'.format(mlp.output_shape)) # Second fully connected layer with softmax output mlp.add(Dropout(0.0)) # dropout is currently turned off, you may need to train for more epochs if nonzero mlp.add(Dense(10)) # number of targets, 10 for y, 2 for eo mlp.add(Activation('softmax')) # Adam is a simple optimizer, SGD has more parameters and is slower but may give better results opt = Adam() #opt = RMSprop() #opt = SGD(lr=0.1, momentum=0.9, decay=0.0001, nesterov=True) print('') mlp.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) mlp.fit(train_im.reshape(-1, im_size**2), np_utils.to_categorical(train_y), nb_epoch=20, batch_size=100) tr_score = mlp.evaluate(train_im.reshape(-1, im_size**2), np_utils.to_categorical(train_y), batch_size=100) va_score = mlp.evaluate(valid_im.reshape(-1, im_size**2), np_utils.to_categorical(valid_y), batch_size=100) print('') print('Train loss: {}, train accuracy: {}'.format(*tr_score)) print('Validation loss: {}, validation accuracy: {}'.format(*va_score)) Explanation: Fully connected MLP This is a simple network made from two layers. On the Keras documentation page, you can find other nonlinearities under "Core Layers". You can add more layers, changes the layers, change the optimizer, or add dropout. End of explanation # Create the network! cnn = Sequential() # First fully connected layer cnn.add(Convolution2D(20, 5, 5, input_shape=(1, im_size, im_size), border_mode='valid', subsample=(2, 2))) cnn.add(Activation('tanh')) # nonlinearity print('Shape after layer 1: {}'.format(cnn.output_shape)) # Take outputs and turn them into a vector cnn.add(Flatten()) print('Shape after flatten: {}'.format(cnn.output_shape)) # Fully connected layer cnn.add(Dropout(0.0)) # dropout is currently turned off, you may need to train for more epochs if nonzero cnn.add(Dense(100)) # number of targets, 10 for y, 2 for eo cnn.add(Activation('tanh')) # Second fully connected layer with softmax output cnn.add(Dropout(0.0)) # dropout is currently turned off, you may need to train for more epochs if nonzero cnn.add(Dense(10)) # number of targets, 10 for y, 2 for eo cnn.add(Activation('softmax')) # Adam is a simple optimizer, SGD has more parameters and is slower but may give better results #opt = Adam() #opt = RMSprop() opt = SGD(lr=0.1, momentum=0.9, decay=0.0001, nesterov=True) print('') cnn.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy']) cnn.fit(train_im[:, np.newaxis, ...], np_utils.to_categorical(train_y), nb_epoch=20, batch_size=100) tr_score = cnn.evaluate(train_im[:, np.newaxis, ...], np_utils.to_categorical(train_y), batch_size=100) va_score = cnn.evaluate(valid_im[:, np.newaxis, ...], np_utils.to_categorical(valid_y), batch_size=100) print('') print('Train loss: {}, train accuracy: {}'.format(*tr_score)) print('Validation loss: {}, validation accuracy: {}'.format(*va_score)) Explanation: Convolutional MLP We can also have the first layer be a set of small filters which are convolved with the images. Try different parameters and see what happens. (This network might be slow.) 
End of explanation W = linear_classifier.coef_ ims = tri(W, (im_size, im_size), (1, 10), (1,1)) plt.imshow(ims, cmap='gray', interpolation='nearest') plt.axis('off') Explanation: Visualizing the filters Linear classifier End of explanation W = mlp.get_weights()[0].T ims = tri(W, (im_size, im_size), (W.shape[0]//10, 10), (1,1)) plt.imshow(ims, cmap='gray', interpolation='nearest') plt.axis('off') Explanation: MLP End of explanation W = cnn.get_weights()[0] ims = tri(W.reshape(-1, np.prod(W.shape[2:])), (W.shape[2], W.shape[3]), (W.shape[0]//10, 10), (1,1)) plt.imshow(ims, cmap='gray', interpolation='nearest') plt.axis('off') Explanation: CNN End of explanation
12,838
Given the following text description, write Python code to implement the functionality described below step by step Description: Measuring the J/$\psi$ Meson Mass This notebook will walk you through a simplified measurement of the mass of the J/$\psi$ meson. We will use data taken by the CMS experiment hosted on CERN's opendata portal. After importing some modules and defining helper functions we download the data and then start exploring it. Step1: Fetch the data The data has been preprocessed and converted to a ROOT nTuple. These events were extracted from the CMS Mu Primary Dataset. Thanks to http Step2: Plot the dimuon mass spectrum Calculate the dimuon invariant mass for each event and plot the full spectrum. This shows many of the well known dimuon resonances. Several of the strange features on the plot are explained by varying trigger threshold for each of the regions. Step3: Zoom in on the J/$\psi$ resonance The whole spectrum does not show anything weird, time to zoom in on the region of the J/$\psi$ meson. Step4: Measure the mass The final step of the analysis is to build a fit model using RooFit and use it to extract the mass of the J/$\psi$ meson.
Python Code: # Required imports and setup import os import numpy as np from rootpy.plotting import Hist, Canvas, set_style, get_style from rootpy import asrootpy, log from root_numpy import root2array, fill_hist from ROOT import (RooFit, RooRealVar, RooDataHist, RooArgList, RooVoigtian, RooAddPdf, RooPolynomial, TLatex) style = get_style('ATLAS') style.SetCanvasDefW(900) style.SetCanvasDefH(600) set_style('ATLAS') log['/ROOT.TClassTable.Add'].setLevel(log.ERROR) def get_masses(filename, treename='events'): # Get array of squared mass values selecting opposite charge events expr = '(e1 + e2)**2 - ((px1 + px2)^2 + (py1 + py2)^2 + (pz1 + pz2)^2)' m2 = root2array(filename, treename=treename, branches=expr, selection='q1 * q2 == -1') # Remove bad events m2 = m2[m2 > 0] # Return the masses return np.sqrt(m2) def plot_hist(hist, logy=True, logx=True): # A function to plot the mass values canvas = Canvas() if logy: canvas.SetLogy() if logx: canvas.SetLogx() hist.xaxis.title = 'm_{#mu#mu} [GeV]' hist.yaxis.title = 'Events' hist.Draw('hist') return canvas Explanation: Measuring the J/$\psi$ Meson Mass This notebook will walk you through a simplified measurement of the mass of the J/$\psi$ meson. We will use data taken by the CMS experiment hosted on CERN's opendata portal. After importing some modules and defining helper functions we download the data and then start exploring it. End of explanation if not os.path.exists('events.root'): !curl https://cernbox.cern.ch/index.php/s/ZE45HERahm7DeZ6/download -o events.root Explanation: Fetch the data The data has been preprocessed and converted to a ROOT nTuple. These events were extracted from the CMS Mu Primary Dataset. Thanks to http://openstack.cern.ch for providing a CernVM running SL5 where we could set up CMSSW 4.2.8 and to https://github.com/tpmccauley/dimuon-filter for generating the CSV from CMS AODs. End of explanation masses = get_masses('events.root') mass_hist = Hist(1500, 0.5, 120) fill_hist(mass_hist, masses) plot_hist(mass_hist) Explanation: Plot the dimuon mass spectrum Calculate the dimuon invariant mass for each event and plot the full spectrum. This shows many of the well known dimuon resonances. Several of the strange features on the plot are explained by varying trigger threshold for each of the regions. End of explanation mass_hist_zoomed = Hist(100, 2.8, 3.4, drawstyle='EP') fill_hist(mass_hist_zoomed, masses) plot_hist(mass_hist_zoomed, logx=False, logy=False) Explanation: Zoom in on the J/$\psi$ resonance The whole spectrum does not show anything weird, time to zoom in on the region of the J/$\psi$ meson. 
End of explanation def fit(hist): hmin = hist.GetXaxis().GetXmin() hmax = hist.GetXaxis().GetXmax() # Declare observable x x = RooRealVar("x","x",hmin,hmax) dh = RooDataHist("dh","dh",RooArgList(x),RooFit.Import(hist)) frame = x.frame(RooFit.Title("Z mass")) # this will show histogram data points on canvas dh.plotOn(frame,RooFit.MarkerColor(2),RooFit.MarkerSize(0.9),RooFit.MarkerStyle(21)) # Signal PDF mean = RooRealVar("mean","mean",3.1, 0, 5) width = RooRealVar("width","width",1, 0, 100) sigma = RooRealVar("sigma","sigma",5, 0, 100) voigt = RooVoigtian("voigt","voigt",x,mean,width,sigma) # Background PDF pol0 = RooPolynomial("pol0","pol0",x,RooArgList()) # Combined model jpsi_yield = RooRealVar("jpsi_yield","yield of j/psi",0.9,0,1) model = RooAddPdf("model","pol0+gauss",RooArgList(voigt,pol0),RooArgList(jpsi_yield)) result = asrootpy(model.fitTo(dh, RooFit.Save(True))) mass = result.final_params['mean'].value bin1 = hist.FindFirstBinAbove(hist.GetMaximum()/2) bin2 = hist.FindLastBinAbove(hist.GetMaximum()/2) hwhm = (hist.GetBinCenter(bin2) - hist.GetBinCenter(bin1)) / 2 # this will show fit overlay on canvas model.plotOn(frame,RooFit.LineColor(4)) # Draw all frames on a canvas canvas = Canvas() frame.GetXaxis().SetTitle("m_{#mu#mu} [GeV]") frame.GetXaxis().SetTitleOffset(1.2) frame.Draw() # Draw the mass and error label label = TLatex(0.6, 0.8, "m_{{J/#psi}} = {0:.2f} #pm {1:.2f} GeV".format(mass, hwhm)) label.SetNDC() label.Draw() return canvas import ROOT _=asrootpy(ROOT.RooFitResult) fit(mass_hist_zoomed) Explanation: Measure the mass The final step of the analysis is to build a fit model using RooFit and use it to extract the mass of the J/$\psi$ meson. End of explanation
12,839
Given the following text description, write Python code to implement the functionality described below step by step Description: Lecture 7 Step1: this matrix has $\mathcal{O}(1)$ elements in a row, therefore it is sparse. Finite elements method is also likely to give you a system with a sparse matrix. How to store a sparse matrix Coordinate format (coo) (i, j, value) i.e. store two integer arrays and one real array. Easy to add elements. But how to multiply a matrix by a vector? CSR format A matrix is stored as 3 different arrays Step2: As you see, CSR is faster, and for more unstructured patterns the gain will be larger. CSR format has difficulties with adding new elements. How to solve linear systems? Direct or iterative solvers Direct solvers The direct methods use sparse Gaussian elimination, i.e. they eliminate variables while trying to keep the matrix as sparse as possible. And often, the inverse of a sparse matrix is not sparse Step3: Looks woefully. Step4: But occasionally L and U factors can be sparse. Step5: In 1D factors L and U are bidiagonal. In 2D factors L and U looks less optimistic, but still ok.) Step6: Sparse matrices and graph ordering The number of non-zeros in the LU decomposition has a deep connection to the graph theory. (I.e., there is an edge between $(i, j)$ if $a_{ij} \ne 0$. Step7: Strategies for elimination The reordering that minimizes the fill-in is important, so we can use graph theory to find one. Minimum degree ordering - order by the degree of the vertex Cuthill–McKee algorithm (and reverse Cuthill-McKee) -- order for a small bandwidth Nested dissection Step8: Florida sparse matrix collection Florida sparse matrix collection which contains all sorts of matrices for different applications. It also allows for finding test matrices as well! Let's have a look. Step9: Test some Let us check some sparse matrix (and its LU). Step10: Iterative solvers The main disadvantage of factorization methods is there computational complexity. A more efficient solution of linear systems can be obtained by iterative methods. This requires a high convergence rate of the iterative process and low arithmetic cost of each iteration. Modern iterative methods are mainly based on the idea of iteration on Krylov subspace. $$ \mathcal{K}i = span{b,~Ab,~A^2b,~ ..,~ A^{i-1}b}, ~~ i = 1,2,..$$ $$ x_i = argmin{ \|b-Ax\|{\text{some norm}}
Python Code: import matplotlib.pyplot as plt import numpy as np import scipy as sp import matplotlib.cm as cm %matplotlib inline N = 3 B = np.diag(2*np.ones(N)) + np.diag((-1)*np.ones(N-1),k=-1)+ np.diag((-1)*np.ones(N-1),k = 1) Id = np.diag(np.ones(N)); # Assembling a 3D operator: A = np.kron(Id,np.kron(Id,B)) + np.kron(Id,np.kron(B,Id)) +np.kron(B,np.kron(Id,Id)) plt.spy(A,markersize=34/N**2) Explanation: Lecture 7: Fast sparse solvers Sparse matrix DEF: Sparse matrix is a matrix that contains $\mathcal{O}(n)$ nonzero elements. Sparse matrices are ubiquitous in PDEs Consider for example a 3D Poisson equation: $$\Delta T = \frac{\partial^2T}{\partial x^2}+\frac{\partial^2T}{\partial y^2}+\frac{\partial^2T}{\partial z^2}=f.$$ After discretization we obtain five diagonal matrix A: End of explanation import numpy as np import scipy as sp import scipy.sparse import scipy.sparse.linalg from scipy.sparse import csc_matrix, csr_matrix, coo_matrix, lil_matrix A = csr_matrix([10,10]) B = lil_matrix([10,10]) A[0,0] = 1 #print A B[0,0] = 1 #print B import numpy as np import scipy as sp import scipy.sparse import scipy.sparse.linalg from scipy.sparse import csc_matrix, csr_matrix, coo_matrix import matplotlib.pyplot as plt import time %matplotlib inline n = 1000 ex = np.ones(n); lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); e = sp.sparse.eye(n) A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1) A = csr_matrix(A) rhs = np.ones(n * n) B = coo_matrix(A) #t0 = time.time() %timeit A.dot(rhs) #print time.time() - t0 #t0 = time.time() %timeit B.dot(rhs) #print time.time() - t0 Explanation: this matrix has $\mathcal{O}(1)$ elements in a row, therefore it is sparse. Finite elements method is also likely to give you a system with a sparse matrix. How to store a sparse matrix Coordinate format (coo) (i, j, value) i.e. store two integer arrays and one real array. Easy to add elements. But how to multiply a matrix by a vector? CSR format A matrix is stored as 3 different arrays: sa, ja, ia where: nnz is the total number of non-zeros for the matrix sa is an real-value array of non-zeros for the matrix (length nnz) ja is an integer array of column number of the non-zeros (length nnz) ia is an integer array of locations of the first non-zero element in each row (length n+1) (Blackboard figure) Idea behind CSR For each row i we store the column number of the non-zeros (and their) values We stack this all together into ja and sa arrays We save the location of the first non-zero element in each row CSR helps for matrix-by-vector product as well for i in xrange(n): for k in xrange(ia(i):ia(i+1)-1): y(i) += sa(k) * x(ja(k)) Let us do a short timing test End of explanation N = n = 100 ex = np.ones(n); a = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); a = a.todense() b = np.array(np.linalg.inv(a)) fig,axes = plt.subplots(1, 2) axes[0].spy(a) axes[1].spy(b,markersize=2) Explanation: As you see, CSR is faster, and for more unstructured patterns the gain will be larger. CSR format has difficulties with adding new elements. How to solve linear systems? Direct or iterative solvers Direct solvers The direct methods use sparse Gaussian elimination, i.e. they eliminate variables while trying to keep the matrix as sparse as possible. And often, the inverse of a sparse matrix is not sparse: (it corresponds to some integral operator, so it has block low-rank structure. 
Details will be later in this course) End of explanation N = n = 5 ex = np.ones(n); A = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); A = A.todense() B = np.array(np.linalg.inv(A)) print B Explanation: Looks woefully. End of explanation p, l, u = scipy.linalg.lu(a) fig,axes = plt.subplots(1, 2) axes[0].spy(l) axes[1].spy(u) Explanation: But occasionally L and U factors can be sparse. End of explanation n = 3 ex = np.ones(n); lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); e = sp.sparse.eye(n) A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1) A = csc_matrix(A) T = scipy.sparse.linalg.splu(A) fig,axes = plt.subplots(1, 2) axes[0].spy(a, markersize=1) axes[1].spy(T.L, marker='.', markersize=0.4) Explanation: In 1D factors L and U are bidiagonal. In 2D factors L and U looks less optimistic, but still ok.) End of explanation import networkx as nx n = 13 ex = np.ones(n); lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); e = sp.sparse.eye(n) A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1) A = csc_matrix(A) G = nx.Graph(A) nx.draw(G, pos=nx.spring_layout(G), node_size=10) Explanation: Sparse matrices and graph ordering The number of non-zeros in the LU decomposition has a deep connection to the graph theory. (I.e., there is an edge between $(i, j)$ if $a_{ij} \ne 0$. End of explanation import networkx as nx from networkx.utils import reverse_cuthill_mckee_ordering, cuthill_mckee_ordering n = 13 ex = np.ones(n); lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr'); e = sp.sparse.eye(n) A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1) A = csc_matrix(A) G = nx.Graph(A) #rcm = list(reverse_cuthill_mckee_ordering(G)) rcm = list(reverse_cuthill_mckee_ordering(G)) A1 = A[rcm, :][:, rcm] plt.spy(A1, marker='.', markersize=3) #p, L, U = scipy.linalg.lu(A1.todense()) #plt.spy(L, marker='.', markersize=0.8) #nx.draw(G, pos=nx.spring_layout(G), node_size=10) Explanation: Strategies for elimination The reordering that minimizes the fill-in is important, so we can use graph theory to find one. Minimum degree ordering - order by the degree of the vertex Cuthill–McKee algorithm (and reverse Cuthill-McKee) -- order for a small bandwidth Nested dissection: split the graph into two with minimal number of vertices on the separator End of explanation from IPython.display import HTML HTML('<iframe src=http://yifanhu.net/GALLERY/GRAPHS/search.html width=700 height=450></iframe>') Explanation: Florida sparse matrix collection Florida sparse matrix collection which contains all sorts of matrices for different applications. It also allows for finding test matrices as well! Let's have a look. End of explanation fname = 'crystm02.mat' !wget http://www.cise.ufl.edu/research/sparse/mat/Boeing/$fname from scipy.io import loadmat import scipy.sparse q = loadmat(fname) #print q mat = q['Problem']['A'][0, 0] T = scipy.sparse.linalg.splu(mat) #Compute its LU %matplotlib inline import matplotlib.pyplot as plt plt.spy(T.L, markersize=0.1) Explanation: Test some Let us check some sparse matrix (and its LU). End of explanation from IPython.core.display import HTML def css_styling(): styles = open("./styles/custom.css", "r").read() return HTML(styles) css_styling() Explanation: Iterative solvers The main disadvantage of factorization methods is there computational complexity. A more efficient solution of linear systems can be obtained by iterative methods. 
This requires a high convergence rate of the iterative process and low arithmetic cost of each iteration. Modern iterative methods are mainly based on the idea of iteration on Krylov subspace. $$ \mathcal{K}i = span{b,~Ab,~A^2b,~ ..,~ A^{i-1}b}, ~~ i = 1,2,..$$ $$ x_i = argmin{ \|b-Ax\|{\text{some norm}}:x\in \mathcal{K}_i} $$ In fact, to apply iterative solver to a system with matrix $~A$ all you need to know is how to multiply matrix by vector how to apply preconditioner Preconditioners If A is ill conditioned then iterative methods give you a lot of iterations. You can reduce number of iterations if you find matrix $~B$ (called preconditioner), such that $~AB$ or $~BA$ matrices has less conditional number. $$Ax=y \Rightarrow BAx= By$$ $$ABz= y, x= Bz.$$ To be a good preconditioner matrix $~B$ must be somehow close to inverse matrix of $~A$ $$B \approx A^{-1}.$$ Note that $B = A^{-1}$ is a perfect preconditioner and gives you 1 iteration to converge. But building this preconditioner requires as much operations as the direct solution requires. Building a preconditioner requires some compromise between time for building it and iterations time. Two basic strategies for building preconditioner: Use information about elements of matrix $A$ Use additional information about problem. The first strategy, where we use information about elements of matrix $A$ For sparse matrices we use only non-zero elements. Good example is a method of Incomplete matrix factorization The main idea here is to avoid full factorization by dropping some elements in the factorization. Drop rules specify type of incomplete factorization and type of preconditioner. Standard ILU preconditioners: ILU($0$) ILU(k) ILUt ILU2 The second strategy, where we use additional information about a problem Here we use additional information about where the matrix came from. For example, Multigrid and Domain Decomposition methods (see next lecture for multigrid) End of explanation
12,840
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting topographic maps of evoked data Load evoked data and plot topomaps for selected time points using multiple additional options. Step1: Basic plot_topomap options We plot evoked topographies using Step2: If times is set to None at most 10 regularly spaced topographies will be shown Step3: Instead of showing topographies at specific time points we can compute averages of 50 ms bins centered on these time points to reduce the noise in the topographies Step4: We can plot gradiometer data (plots the RMS for each pair of gradiometers) Step5: Additional plot_topomap options We can also use a range of various Step6: If you look at the edges of the head circle of a single topomap you'll see the effect of extrapolation. By default extrapolate='box' is used which extrapolates to a large box stretching beyond the head circle. Compare this with extrapolate='head' (second topography below) where extrapolation goes to 0 at the head circle and extrapolate='local' where extrapolation is performed only within some distance from channels Step7: More advanced usage Now we plot magnetometer data as topomap at a single time point Step8: Animating the topomap Instead of using a still image we can plot magnetometer data as an animation (animates only in matplotlib interactive mode)
Python Code: # Authors: Christian Brodbeck <[email protected]> # Tal Linzen <[email protected]> # Denis A. Engeman <[email protected]> # Mikołaj Magnuski <[email protected]> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt from mne.datasets import sample from mne import read_evokeds print(__doc__) path = sample.data_path() fname = path + '/MEG/sample/sample_audvis-ave.fif' # load evoked corresponding to a specific condition # from the fif file and subtract baseline condition = 'Left Auditory' evoked = read_evokeds(fname, condition=condition, baseline=(None, 0)) Explanation: Plotting topographic maps of evoked data Load evoked data and plot topomaps for selected time points using multiple additional options. End of explanation times = np.arange(0.05, 0.151, 0.02) evoked.plot_topomap(times, ch_type='mag', time_unit='s') Explanation: Basic plot_topomap options We plot evoked topographies using :func:mne.Evoked.plot_topomap. The first argument, times allows to specify time instants (in seconds!) for which topographies will be shown. We select timepoints from 50 to 150 ms with a step of 20ms and plot magnetometer data: End of explanation evoked.plot_topomap(ch_type='mag', time_unit='s') Explanation: If times is set to None at most 10 regularly spaced topographies will be shown: End of explanation evoked.plot_topomap(times, ch_type='mag', average=0.05, time_unit='s') Explanation: Instead of showing topographies at specific time points we can compute averages of 50 ms bins centered on these time points to reduce the noise in the topographies: End of explanation evoked.plot_topomap(times, ch_type='grad', time_unit='s') Explanation: We can plot gradiometer data (plots the RMS for each pair of gradiometers) End of explanation evoked.plot_topomap(times, ch_type='mag', cmap='Spectral_r', res=32, outlines='skirt', contours=4, time_unit='s') Explanation: Additional plot_topomap options We can also use a range of various :func:mne.viz.plot_topomap arguments that control how the topography is drawn. For example: cmap - to specify the color map res - to control the resolution of the topographies (lower resolution means faster plotting) outlines='skirt' to see the topography stretched beyond the head circle contours to define how many contour lines should be plotted End of explanation extrapolations = ['box', 'head', 'local'] fig, axes = plt.subplots(figsize=(7.5, 2.5), ncols=3) for ax, extr in zip(axes, extrapolations): evoked.plot_topomap(0.1, ch_type='mag', size=2, extrapolate=extr, axes=ax, show=False, colorbar=False) ax.set_title(extr, fontsize=14) Explanation: If you look at the edges of the head circle of a single topomap you'll see the effect of extrapolation. By default extrapolate='box' is used which extrapolates to a large box stretching beyond the head circle. 
Compare this with extrapolate='head' (second topography below) where extrapolation goes to 0 at the head circle and extrapolate='local' where extrapolation is performed only within some distance from channels: End of explanation evoked.plot_topomap(0.1, ch_type='mag', show_names=True, colorbar=False, size=6, res=128, title='Auditory response', time_unit='s') plt.subplots_adjust(left=0.01, right=0.99, bottom=0.01, top=0.88) Explanation: More advanced usage Now we plot magnetometer data as topomap at a single time point: 100 ms post-stimulus, add channel labels, title and adjust plot margins: End of explanation evoked.animate_topomap(ch_type='mag', times=times, frame_rate=10, time_unit='s') Explanation: Animating the topomap Instead of using a still image we can plot magnetometer data as an animation (animates only in matplotlib interactive mode) End of explanation
12,841
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1 align="center">TensorFlow Neural Network Lab</h1> <img src="image/notmnist.png"> In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J). Step5: <img src="image/Mean_Variance_Image.png" style="height Step6: Checkpoint All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed. Step7: Problem 2 Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. <img src="image/network_diagram.png" style="height Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height Step9: Test You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
Python Code: import hashlib import os import pickle from urllib.request import urlretrieve import numpy as np from PIL import Image from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelBinarizer from sklearn.utils import resample from tqdm import tqdm from zipfile import ZipFile print('All modules imported.') Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1> <img src="image/notmnist.png"> In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts. The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in! To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported". End of explanation def download(url, file): Download file from <url> :param url: URL to file :param file: Local file path if not os.path.isfile(file): print('Downloading ' + file + '...') urlretrieve(url, file) print('Download Finished') # Download the training and test dataset. download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip') download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip') # Make sure the files aren't corrupted assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\ 'notMNIST_train.zip file is corrupted. Remove the file and try again.' assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\ 'notMNIST_test.zip file is corrupted. Remove the file and try again.' # Wait until you see that all files have been downloaded. print('All files downloaded.') def uncompress_features_labels(file): Uncompress features and labels from a zip file :param file: The zip file to extract the data from features = [] labels = [] with ZipFile(file) as zipf: # Progress Bar filenames_pbar = tqdm(zipf.namelist(), unit='files') # Get features and labels from all files for filename in filenames_pbar: # Check if the file is a directory if not filename.endswith('/'): with zipf.open(filename) as image_file: image = Image.open(image_file) image.load() # Load image data as 1 dimensional array # We're using float32 to save on memory space feature = np.array(image, dtype=np.float32).flatten() # Get the the letter from the filename. This is the letter of the image. label = os.path.split(filename)[1][0] features.append(feature) labels.append(label) return np.array(features), np.array(labels) # Get the features and labels from the zip files train_features, train_labels = uncompress_features_labels('notMNIST_train.zip') test_features, test_labels = uncompress_features_labels('notMNIST_test.zip') # Limit the amount of data to work with a docker container docker_size_limit = 150000 train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit) # Set flags for feature engineering. This will prevent you from skipping an important step. 
is_features_normal = False is_labels_encod = False # Wait until you see that all features and labels have been uncompressed. print('All features and labels uncompressed.') Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J). End of explanation # Problem 1 - Implement Min-Max scaling for grayscale image data def normalize_grayscale(image_data): Normalize the image data with Min-Max scaling to a range of [0.1, 0.9] :param image_data: The image data to be normalized :return: Normalized image data arr = np.array(image_data) arr = (arr - arr.min())/(arr.max() - arr.min()) arr = arr * 0.8 + 0.1 return arr.tolist() ### DON'T MODIFY ANYTHING BELOW ### # Test Cases np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])), [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314, 0.125098039216, 0.128235294118, 0.13137254902, 0.9], decimal=3) np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])), [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078, 0.896862745098, 0.9]) print(train_features) if not is_features_normal: train_features = normalize_grayscale(train_features) test_features = normalize_grayscale(test_features) is_features_normal = True print('Tests Passed!') if not is_labels_encod: # Turn labels into numbers and apply One-Hot Encoding encoder = LabelBinarizer() encoder.fit(train_labels) train_labels = encoder.transform(train_labels) test_labels = encoder.transform(test_labels) # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32 train_labels = train_labels.astype(np.float32) test_labels = test_labels.astype(np.float32) is_labels_encod = True print('Labels One-Hot Encoded') assert is_features_normal, 'You skipped the step to normalize the features' assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels' # Get randomized datasets for training and validation train_features, valid_features, train_labels, valid_labels = train_test_split( train_features, train_labels, test_size=0.05, random_state=832289) print('Training features and labels randomized and split.') # Save the data for easy access pickle_file = 'notMNIST.pickle' if not os.path.isfile(pickle_file): print('Saving data to pickle file...') try: with open('notMNIST.pickle', 'wb') as pfile: pickle.dump( { 'train_dataset': train_features, 'train_labels': train_labels, 'valid_dataset': valid_features, 'valid_labels': valid_labels, 'test_dataset': test_features, 'test_labels': test_labels, }, pfile, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise print('Data cached in pickle file.') Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%"> Problem 1 The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9. Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255. 
Min-Max Scaling: $ X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}} $ If you're having trouble solving problem 1, you can view the solution here. End of explanation %matplotlib inline # Load the modules import pickle import math import numpy as np import tensorflow as tf from tqdm import tqdm import matplotlib.pyplot as plt # Reload the data pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: pickle_data = pickle.load(f) train_features = pickle_data['train_dataset'] train_labels = pickle_data['train_labels'] valid_features = pickle_data['valid_dataset'] valid_labels = pickle_data['valid_labels'] test_features = pickle_data['test_dataset'] test_labels = pickle_data['test_labels'] del pickle_data # Free up memory print('Data and modules loaded.') Explanation: Checkpoint All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed. End of explanation # All the pixels in the image (28 * 28 = 784) features_count = 784 # All the labels labels_count = 10 features = tf.placeholder(tf.float32, [None, features_count]) labels = tf.placeholder(tf.float32, [None, labels_count]) weights = tf.Variable(tf.truncated_normal([features_count, labels_count])) biases = tf.Variable(tf.zeros([labels_count])) ### DON'T MODIFY ANYTHING BELOW ### #Test Cases from tensorflow.python.ops.variables import Variable assert features._op.name.startswith('Placeholder'), 'features must be a placeholder' assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder' assert isinstance(weights, Variable), 'weights must be a TensorFlow variable' assert isinstance(biases, Variable), 'biases must be a TensorFlow variable' assert features._shape == None or (\ features._shape.dims[0].value is None and\ features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect' assert labels._shape == None or (\ labels._shape.dims[0].value is None and\ labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect' assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect' assert biases._variable._shape == (10), 'The shape of biases is incorrect' assert features._dtype == tf.float32, 'features must be type float32' assert labels._dtype == tf.float32, 'labels must be type float32' # Feed dicts for training, validation, and test session train_feed_dict = {features: train_features, labels: train_labels} valid_feed_dict = {features: valid_features, labels: valid_labels} test_feed_dict = {features: test_features, labels: test_labels} # Linear Function WX + b logits = tf.matmul(features, weights) + biases prediction = tf.nn.softmax(logits) # Cross entropy cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1) # Training loss loss = tf.reduce_mean(cross_entropy) # Create an operation that initializes all variables init = tf.global_variables_initializer() # Test Cases with tf.Session() as session: session.run(init) session.run(loss, feed_dict=train_feed_dict) session.run(loss, feed_dict=valid_feed_dict) session.run(loss, feed_dict=test_feed_dict) biases_data = session.run(biases) assert not np.count_nonzero(biases_data), 'biases must be zeros' print('Tests Passed!') # Determine if the predictions are correct is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1)) # Calculate the accuracy of the predictions 
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32)) print('Accuracy function created.') Explanation: Problem 2 Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. <img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%"> For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - features - Placeholder tensor for feature data (train_features/valid_features/test_features) - labels - Placeholder tensor for label data (train_labels/valid_labels/test_labels) - weights - Variable Tensor with random numbers from a truncated normal distribution. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help. - biases - Variable Tensor with all zeros. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help. If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here. End of explanation results # Change if you have memory restrictions batch_size = 128 # Find the best parameters for each configuration #results = [] #for epochs in [1,2,3,4,5]: # for learning_rate in [0.8, 0.5, 0.1, 0.05, 0.01]: epochs = 4 learning_rate = 0.1 ### DON'T MODIFY ANYTHING BELOW ### # Gradient Descent optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # The accuracy measured against the validation set validation_accuracy = 0.0 # Measurements use for graphing loss and accuracy log_batch_step = 50 batches = [] loss_batch = [] train_acc_batch = [] valid_acc_batch = [] with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer and get loss _, l = session.run( [optimizer, loss], feed_dict={features: batch_features, labels: batch_labels}) # Log every 50 batches if not batch_i % log_batch_step: # Calculate Training and Validation accuracy training_accuracy = session.run(accuracy, feed_dict=train_feed_dict) validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) # Log batches previous_batch = batches[-1] if batches else 0 batches.append(log_batch_step + previous_batch) loss_batch.append(l) train_acc_batch.append(training_accuracy) valid_acc_batch.append(validation_accuracy) # Check accuracy against Validation data validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) loss_plot = plt.subplot(211) loss_plot.set_title('Loss') loss_plot.plot(batches, loss_batch, 'g') loss_plot.set_xlim([batches[0], 
batches[-1]]) acc_plot = plt.subplot(212) acc_plot.set_title('Accuracy') acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy') acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy') acc_plot.set_ylim([0, 1.0]) acc_plot.set_xlim([batches[0], batches[-1]]) acc_plot.legend(loc=4) plt.tight_layout() plt.show() print('Validation accuracy at {}'.format(validation_accuracy)) #print('LR: {}\t Epochs: {}\t Validation accuracy at {}'.format(learning_rate, epochs, validation_accuracy)) #results.append((learning_rate, epochs, validation_accuracy)) Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%"> Problem 3 Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy. Parameter configurations: Configuration 1 * Epochs: 1 * Learning Rate: * 0.8 * 0.5 * 0.1 * 0.05 * 0.01 Configuration 2 * Epochs: * 1 * 2 * 3 * 4 * 5 * Learning Rate: 0.2 The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed. If you're having trouble solving problem 3, you can view the solution here. End of explanation ### DON'T MODIFY ANYTHING BELOW ### # The accuracy measured against the test set test_accuracy = 0.0 with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels}) # Check accuracy against Test data test_accuracy = session.run(accuracy, feed_dict=test_feed_dict) assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy) print('Nice Job! Test Accuracy is {}'.format(test_accuracy)) Explanation: Test You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. End of explanation
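To build intuition for why the learning-rate choice in Problem 3 matters so much, here is a tiny self-contained toy that minimizes f(w) = w^2 with plain gradient descent; it is only an analogy for the TensorFlow training loop above, not part of the lab:

```python
def final_w(learning_rate, steps=20):
    # Gradient of f(w) = w**2 is 2*w, so each step multiplies w by (1 - 2*learning_rate)
    w = 1.0
    for _ in range(steps):
        w -= learning_rate * 2 * w
    return w

for lr in [0.8, 0.5, 0.1, 0.05, 0.01]:
    # Rates above 1.0 would diverge, rates near 0.5 converge fastest here,
    # and very small rates make little progress in 20 steps.
    print(lr, final_w(lr))
```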
12,842
Given the following text description, write Python code to implement the functionality described below step by step Description: This is a demo notebook of some of the features of IEtools.py. IEtools includes tools to read FRED economic data https Step1: Read in the files Step2: Here's a plot of nominal GDP Step3: And here is nominal GDP growth Step4: Fit information equilibrium parameters Here we take the information equilibrium model with A = nominal GDP, B = labor employed (PAYEMS), and p = CPI (all items). First we solve for the IT index and show the model. Note Step5: And we can show the relationship between the growth rates (i.e. compute the inflation rate equal to the growth rate of the CPI) Step6: Additionally, rearranging the terms and looking at the growth rate, we can show a form of Okun's law. Since g_p = g_a - g_b, we can say g_b = g_a - g_p. The right hand side of the last equation when A is nominal GDP and p is the CPI is the CPI-deflated real GDP growth. Okun's law is an inverse relationship between the change in unemployment and RGDP growth, but in our case we will look at the direct relationship of RGDP growth and change in employment (PAYEMS).
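Because the general equilibrium solution is log-linear, the IT index k can be estimated with an ordinary least-squares fit in log space. Here is a minimal sketch on synthetic series (the notebook itself fits the FRED data with IEtools.fitGeneralInfoEq instead):

```python
import numpy as np

# Synthetic A and B obeying log(A) = k*log(B) + c, plus a little noise
np.random.seed(0)
B = np.linspace(1.0, 10.0, 200)
true_k, true_c = 1.5, 0.3
A = np.exp(true_c) * B**true_k * np.exp(0.01 * np.random.randn(B.size))

k_hat, c_hat = np.polyfit(np.log(B), np.log(A), 1)
print(k_hat, c_hat)  # should recover roughly k = 1.5 and c = 0.3
```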
Python Code: import numpy as np import IEtools import pylab as pl %pylab inline Explanation: This is a demo notebook of some of the features of IEtools.py. IEtools includes tools to read FRED economic data https://fred.stlouisfed.org/ in either csv or xls formats. IEtools also includes tools for fitting information equilibrium parameters and constructing dynamic equilibrium models (see Dynamic Equilibrium Examples.ipynb). The basic information equilibrium condition between two variables A and B is given by p = dA/dB = k A/B with information transfer (IT) index k. In 'general equilibrium' (where A or B is not changing faster than the other), we have the solution A = a B^k or log(A) = k log(B) + c IEtools has a function to solve for these parameters. In the information equilibrium condition, p = dA/dB is the abstract price. In 'general equilibrium', we have p = k a B^(k-1) The continuously compounded growth rates of these variables are also related. If the growth rate of A is g_a, B is g_b, and p is g_p then: g_a = k g_b g_p = (k-1) g_b g_p = g_a - g_b This notebook shows some of these results for GDP and labor supply, showing Okun's law along the way. IEtools was tested using Python 3.6 as part of the Anaconda 4.4.0 package. All dependencies are included in Anaconda 4.4.0. End of explanation filename1='C:/econdata/GDP.xls' filename2='C:/econdata/PAYEMS.xls' filename3='C:/econdata/CPIAUCSL.xls' gdp = IEtools.FREDxlsRead(filename1) lab = IEtools.FREDxlsRead(filename2) cpi = IEtools.FREDxlsRead(filename3) Explanation: Read in the files End of explanation pl.plot(gdp['interp'].x,gdp['interp'](gdp['interp'].x)) pl.ylabel(gdp['name']+' [G$]') pl.yscale('log') pl.show() Explanation: Here's a plot of nominal GDP End of explanation pl.plot(gdp['growth'].x,gdp['growth'](gdp['growth'].x)) pl.ylabel(gdp['name']+' growth [%]') pl.show() Explanation: And here is nominal GDP growth End of explanation result = IEtools.fitGeneralInfoEq(gdp['data'],lab['data'], guess=[1.0,0.0]) print(result) print('IT index = ',np.round(result.x[0],decimals=2)) time=gdp['interp'].x pl.plot(time,np.exp(result.x[0]*np.log(lab['interp'](time))+result.x[1]),label='model') pl.plot(time,gdp['interp'](time),label='data') pl.yscale('log') pl.ylabel(gdp['name']+' [G$]') pl.legend() pl.show() Explanation: Fit information equilibrium parameters Here we take the information equilibrium model with A = nominal GDP, B = labor employed (PAYEMS), and p = CPI (all items). First we solve for the IT index and show the model. Note: this is a simple model with limited accuracy. End of explanation time=gdp['data'][:,0] der1=gdp['growth'](time)-lab['growth'](time) der2=cpi['growth'](time) pl.plot(time,der1,label='model') pl.plot(time,der2,label='data') pl.legend() pl.show() Explanation: And we can show the relationship between the growth rates (i.e. compute the inflation rate equal to the growth rate of the CPI) End of explanation time=gdp['data'][:,0] der1=gdp['growth'](time)-cpi['growth'](time) der2=lab['growth'](time) pl.plot(time,der1,label='model') pl.plot(time,der2,label='data') pl.legend() pl.show() Explanation: Additionally, rearranging the terms and looking at the growth rate, we can show a form of Okun's law. Since g_p = g_a - g_b, we can say g_b = g_a - g_p. The right hand side of the last equation when A is nominal GDP and p is the CPI is the CPI-deflated real GDP growth. 
Okun's law is usually stated as an inverse relationship between the change in unemployment and RGDP growth; in our case we instead look at the direct relationship between RGDP growth and the change in employment (PAYEMS). End of explanation
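A hypothetical follow-up cell (it assumes the gdp, cpi and lab objects loaded above) could quantify this direct relationship with a simple correlation between the model growth rate g_a - g_p and the measured employment growth g_b:

```python
import numpy as np

# Correlation between CPI-deflated GDP growth (g_a - g_p) and employment growth (g_b)
time = gdp['data'][:, 0]
rgdp_growth = gdp['growth'](time) - cpi['growth'](time)
emp_growth = lab['growth'](time)
print(np.corrcoef(rgdp_growth, emp_growth)[0, 1])
```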
12,843
Given the following text description, write Python code to implement the functionality described below step by step Description: 2.1.1 Huberized Hinge loss Plot on a same plot, with different colors, the misclassification error loss, the (regular) hinge loss, and the huberized hinge loss. Step1: Explain how the huberized hinge loss relates to the regular hinge loss and to the misclassification error loss. Well, Huberized Hinge Loss is a smooth version of Hinge Loss that is differentiable everywhere. As can be seen in the plot, Hinge Loss and Huberized Hinge Loss give loss even when the function output is less than 1. Misclassification Error Loss only acts after the function output falls below 0. Write a mathematical proof that the huberized hinge loss is differentiable (hint Step2: 2.1.3 Numerical checks Write a function grad_checker. Step3: 2.1.4 Gradient Descent Write a function my-gradient-descent that implements the gradient descent algorithm with a constant step-size η. The function is initialised at $w_0 = 0$. The function takes as input η and the maximum number of iterations maxiter. The function returns the output $w_T$ with T = maxiter. Step4: Generate a synthetic data for binary classification. Each class is modelled as a Gaussian distribution, with 500 examples for training and 500 for testing. Make sure the two classes have sufficient overlap. Step5: Normalize your data. Step6: You will use here Linear SVM with huberized hinge loss, trained using your gradient descent algorithm. Write a function my-svm for Linear SVM, that can used for training (by calling my-gradient-descent) and testing. Step7: Run experiments for various values of the fixed step-size η. Step8: Visualise the linear separation learned by your Linear SVM Step9: Plot the objective function vs the iterations, as the gradient descent algorithm proceeds. Step10: Implement backtracking line search (google it). Profile your code, optimise in terms of speed. Step11: Add several options to my-svm that allow the user to choose between the different stopping criteria Step12: 2.1.5 Stochastic Gradient Descent Write a function my-sgd that implements stochastic gradient, as described in the lecture slides. The function is initialised at w_0 = 0. The function takes as input η_0 and t_0 and a stopping criterion along the same lines as for the regular gradient descent. Only the counterparts of stopping criteria i) and iii) apply here. The function returns the last iterate w_T.
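Before the element-wise implementation used below, note that the same piecewise definition can be written in fully vectorised NumPy; a small sketch (the function name and test values are illustrative only):

```python
import numpy as np

def huberized_hinge(z, h=0.5):
    # z stands for yt = y * f(x); piecewise definition from section 2.1.1
    z = np.asarray(z, dtype=float)
    quad = (1 + h - z) ** 2 / (4 * h)   # middle piece: |1 - z| <= h
    lin = 1 - z                          # left piece: z < 1 - h
    return np.where(z > 1 + h, 0.0, np.where(np.abs(1 - z) <= h, quad, lin))

print(huberized_hinge(np.array([-1.0, 0.9, 1.2, 2.0])))  # [2.0, 0.18, 0.045, 0.0]
```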
Python Code: %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.datasets import make_blobs from sklearn.svm import LinearSVC x = np.linspace(-2.0, 2.0, num=100) def huberizedHingeLoss(x, h): if x > 1+h: return 0 elif abs(1-x) <= h: return ((1+h-x)**2)/(4*h) else: return 1-x def hingeLoss(x): return max([0, 1-x]) def misclassLoss(x): return 1 if x <= 0 else 0 hub, = plt.plot(x, [huberizedHingeLoss(i, 0.5) for i in x], 'b-', label='Huberized Hinge Loss') hinge, = plt.plot(x, [hingeLoss(i) for i in x], 'k-', label='Hinge Loss') mis, = plt.plot(x, [misclassLoss(i) for i in x], 'r-', label='Misclassification Error Loss') plt.legend(handles=[hub,hinge,mis], loc=0) plt.ylabel('Loss value') plt.xlabel('Model output') Explanation: 2.1.1 Huberized Hinge loss Plot on a same plot, with different colors, the misclassification error loss, the (regular) hinge loss, and the huberized hinge loss. End of explanation def compute_obj(x, y, w, C=1.0, h=0.5): loss = np.vectorize(huberizedHingeLoss, excluded=['h']) return np.dot(w, w) + (C/float(x.shape[0]))*sum(loss(y*np.dot(x,w), h)) def compute_grad(x, y, w, C=1.0, h=0.5): p = y*np.dot(x, w) gradW = np.zeros(w.shape[0], dtype=float) def gradHuberHinge(i, j): if p[i] > 1+h: return 0 elif abs(1-p[i]) <= h: return ((1+h-p[i])/(2*h))*(-y[i]*x[i][j]) else: return (-y[i]*x[i][j]) for j in range(w.shape[0]): sum_over_i = 0.0 for i in range(x.shape[0]): sum_over_i += gradHuberHinge(i,j) gradW[j] = 2*w[j] + (C/float(x.shape[0]))*sum_over_i return gradW def add_bias_column(x): return np.append(x, np.ones(x.shape[0]).reshape(x.shape[0],1), axis=1) n_samples = 1000 n_features = 30 x, y = make_blobs(n_samples, n_features, centers=2) y[y==0] = -1 x = add_bias_column(x) w = np.zeros(n_features+1) compute_obj(x, y, w) compute_grad(x, y, w) Explanation: Explain how the huberized hinge loss relates to the regular hinge loss and to the misclassification error loss. Well, Huberized Hinge Loss is a smooth version of Hinge Loss that is differentiable everywhere. As can be seen in the plot, Hinge Loss and Huberized Hinge Loss give loss even when the function output is less than 1. Misclassification Error Loss only acts after the function output falls below 0. Write a mathematical proof that the huberized hinge loss is differentiable (hint: the huberized hinge loss is defined piecewise). Assuming yt = $x$ and function $f$ as the huberized hinge loss, $$ f(x) = \left{\begin{aligned} &0 &: x > 1+h\ &\frac{(1+h-x)^2}{4h} &: 1-h \leq x \leq 1+h\ &1-x &: x < 1-h \end{aligned} \right.$$ Piecewise differentiation of $f$, $$ f'(x) = \left{\begin{aligned} &0 &: x > 1+h\ &-\frac{2(1+h-x)}{4h} &: 1-h \leq x \leq 1+h\ &-1 &: x < 1-h \end{aligned} \right.$$ Clearly, $f$ is differentiable on each piece. Let's check $f$'s differentiability on the kinks using $f'(x)$ $= \lim_{p\to x}\frac{f(p)}{p}$. For $1+h$, \begin{align} f'(1+h) =& \lim_{p\to 1+h}\frac{f(p)}{p} \end{align} Right limit is, \begin{align} f'(1+h) =& \lim_{p\to (1+h)^+}\frac{0}{1+h} = 0 \end{align} Left limit is, \begin{align} f'(1+h) =& \lim_{p\to (1+h)^-}\frac{\frac{(1+h-(1+h))^2}{4h}}{1+h} = 0 \end{align} Clearly, Right limit = Left limit at $1+h$. Therefore, the function is differentiable at this point. 
For $1-h$ right limit is, \begin{align} f'(1-h) =& \lim_{p\to (1-h)^+}\frac{\frac{(1+h-(1-h))^2}{4h}}{1-h} = \frac{h}{1-h} \end{align} Left limit is, \begin{align} f'(1-h) =& \lim_{p\to (1-h)^-}\frac{1-(1-h)}{1-h} = \frac{h}{1-h} \end{align} Clearly, Right limit = Left limit at $1-h$. Therefore, the function is differentiable at this point. This proves that the function is differentiable at all points. Write the analytic expression(s) of the gradient of the huberized hinge loss. Analytic expression(s) of the gradient of the huberized hinge loss is, $$ l_{huber-hinge}'(yt) = \left{\begin{aligned} &0 &: yt > 1+h\ &-\frac{2(1+h-yt)}{4h} &: \left|1-yt\right| \leq h\ &-1 &: yt < 1-h \end{aligned} \right.$$ Is the gradient huberized hinge loss Lipschitz-continuous over a particular domain? (hint: the huberized hinge loss is defined piecewise). If the gradient huberized hinge loss is Lipschitz-continuous, write a mathematical proof. Then, give an upper-bound on the Lipschitz-continuous parameter L, and write the corresponding mathematical proof. If gradient huberized hinge loss is not Lipschitz-continuous, write a mathematical proof. Yes, huberized hinge loss is Lipschitz-continuous. To prove this, let's check if the absolute value of the derivative is limited by a particular $\rho$. For cases 1 and 3 from Answer 2, the derivative is limited by 1 ($\left|-1\right| = 1$). For the second case, the function is linear in yt, so we only need to check the values at both ends of piece, $1-h$ and $1+h$. \begin{align} l'_{huber-hinge}(1+h) = -\frac{2(1+h-(1+h))}{4h} = 0 \end{align} \begin{align} l'_{huber-hinge}(1-h) = -\frac{2(1+h-(1-h))}{4h} = -\frac{4h}{4h} = -1 \end{align} Clearly, this piece is also limited by 1. Therefore, huberized hinge loss is 1-Lipschitz-continuous. 2.1.2 Analytic expressions Write a function compute_obj, to compute F(w) for a given w. Write a function compute_grad, to compute $\nabla_w F$ . End of explanation def grad_checker(x, y, w, C=1.0, h=0.5, epsilon=1e-6): orig_grad = compute_grad(x, y, w, C, h) for i in range(w.shape[0]): wplus = np.copy(w) wneg = np.copy(w) wplus[i] += epsilon wneg[i] -= epsilon new_grad = (compute_obj(x, y, wplus, C, h) - compute_obj(x, y, wneg, C, h))/(2*epsilon) if abs(new_grad - orig_grad[i]) > epsilon: print "Fails at weight ", i print "gradient from input function ", orig_grad[i] print "gradient from approximation", new_grad return print "compute_grad is correct" grad_checker(x,y,w) Explanation: 2.1.3 Numerical checks Write a function grad_checker. End of explanation def my_gradient_descent(x, y, F, dF, eta=0.001, maxiter=1000): w = np.zeros(x.shape[1]) for i in range(maxiter): w = w - eta*dF(x,y,w) return w %time my_gradient_descent(x, y, compute_obj, compute_grad); Explanation: 2.1.4 Gradient Descent Write a function my-gradient-descent that implements the gradient descent algorithm with a constant step-size η. The function is initialised at $w_0 = 0$. The function takes as input η and the maximum number of iterations maxiter. The function returns the output $w_T$ with T = maxiter. 
End of explanation def dataset_fixed_cov(n,dim): '''Generate 2 Gaussians samples with the same covariance matrix''' C = np.array([[0., -0.23], [0.83, .23]]) X = np.r_[np.dot(np.random.randn(n, dim), C), np.dot(np.random.randn(n, dim), C) + np.array([1, 1])] y = np.hstack((-np.ones(n), np.ones(n))) return X, y x_train,y_train = dataset_fixed_cov(250,2) plt.plot(x_train[:250,0],x_train[:250,1], 'o', color='red') plt.plot(x_train[250:,0],x_train[250:,1], 'o', color='blue') x_test,y_test = dataset_fixed_cov(250,2) plt.plot(x_test[:250,0],x_test[:250,1], 'o', color='red') plt.plot(x_test[250:,0],x_test[250:,1], 'o', color='blue') Explanation: Generate a synthetic data for binary classification. Each class is modelled as a Gaussian distribution, with 500 examples for training and 500 for testing. Make sure the two classes have sufficient overlap. End of explanation scaler = preprocessing.StandardScaler().fit(x_train) scaler.transform(x_train) scaler.transform(x_test); Explanation: Normalize your data. End of explanation class my_svm(object): def __init__(self): self.learnt_w = None def fit(self, x_train, y_train, eta=0.01, max_iter=1000): x_copy = add_bias_column(x_train) self.learnt_w = my_gradient_descent(x_copy, y_train, compute_obj, compute_grad, eta, max_iter) def predict(self, x_test): x_copy = add_bias_column(x_test) y = np.dot(x_copy, self.learnt_w) y[y<0] = -1 y[y>0] = 1 return y def score(self, x_test, y_test): y_predict = self.predict(x_test) bools = y_predict == y_test accuracy = bools[bools == True].shape[0]/float(bools.shape[0]) return accuracy Explanation: You will use here Linear SVM with huberized hinge loss, trained using your gradient descent algorithm. Write a function my-svm for Linear SVM, that can used for training (by calling my-gradient-descent) and testing. End of explanation svm = my_svm() for k in range(0,9): eta = 0.1**k svm.fit(x_train, y_train, eta) print eta, svm.score(x_test, y_test) Explanation: Run experiments for various values of the fixed step-size η. End of explanation svm = my_svm() svm.fit(x_train, y_train, 0.01) line = svm.learnt_w plt.plot(x_test[:250,0],x_test[:250,1], 'o', color='red') plt.plot(x_test[250:,0],x_test[250:,1], 'o', color='blue') xx = np.linspace(-3, 3) yy = ((-line[0]/line[1])*xx)+(-line[2]/line[1]) # y = (-a/b)*x + (-c/b) plt.plot(xx, yy) Explanation: Visualise the linear separation learned by your Linear SVM End of explanation def modified_gradient_descent(x, y, F, dF, eta=0.01, maxiter=1000): w = np.zeros(x.shape[1]) F_vals = np.zeros(maxiter) for i in range(maxiter): w = w - eta*dF(x,y,w) F_vals[i] = F(x,y,w) return w, F_vals x_copy = add_bias_column(x_train) _, F_vals = modified_gradient_descent(x_copy, y_train, compute_obj, compute_grad) iterations = np.arange(1000) plt.plot(iterations, F_vals) Explanation: Plot the objective function vs the iterations, as the gradient descent algorithm proceeds. End of explanation def backtracked_gradient_descent(x, y, F, dF, maxiter=100): w = np.zeros(x.shape[1]) beta = 0.8 F_vals = np.zeros(maxiter) for i in range(maxiter): eta = 1 val = F(x,y,w) grad = dF(x,y,w) while F(x, y, (w - eta * grad)) > val - ((eta/2.) * grad.dot(grad)): eta = beta * eta #print eta w = w - eta*grad F_vals[i] = F(x,y,w) return w, F_vals x_copy = add_bias_column(x_train) _, F_vals = backtracked_gradient_descent(x_copy, y_train, compute_obj, compute_grad) iterations = np.arange(100) plt.plot(iterations, F_vals) Explanation: Implement backtracking line search (google it). Profile your code, optimise in terms of speed. 
End of explanation class my_svm(object): def huberizedHingeLoss(self, x, h): if x > 1+h: return 0 elif abs(1-x) <= h: return ((1+h-x)**2)/(4*h) else: return 1-x def add_bias_column(self, x): return np.append(x, np.ones(x.shape[0]).reshape(x.shape[0],1), axis=1) def compute_obj(self, x, y, w, C=1.0, h=0.5): loss = np.vectorize(self.huberizedHingeLoss, excluded=['h']) return np.dot(w, w) + (C/float(x.shape[0]))*sum(loss(y*np.dot(x,w), h)) def compute_grad(self, x, y, w, C=1.0, h=0.5): p = y*np.dot(x, w) gradW = np.zeros(w.shape[0], dtype=float) def gradHuberHinge(i, j): if p[i] > 1+h: return 0 elif abs(1-p[i]) <= h: return ((1+h-p[i])/(2*h))*(-y[i]*x[i][j]) else: return (-y[i]*x[i][j]) for j in range(w.shape[0]): sum_over_i = 0.0 for i in range(x.shape[0]): sum_over_i += gradHuberHinge(i,j) gradW[j] = 2*w[j] + (C/float(x.shape[0]))*sum_over_i return gradW def __init__(self, stop_criteria="iter", eta=0.01, max_iter=1000, epsilon=1e-3): self.learnt_w = None self.stop_criteria = stop_criteria self.eta = eta # i) maximum number of iterations self.max_iter = max_iter # ii) optimization-based criterion self.epsilon = epsilon def fit(self, x_train, y_train): x = self.add_bias_column(x_train) y = y_train w = np.zeros(x.shape[1]) F = self.compute_obj dF = self.compute_grad if self.stop_criteria == "iter": for i in range(self.max_iter): #print w, self.eta * dF(x,y,w) eta = 1 val = F(x,y,w) grad = dF(x,y,w) while F(x, y, (w - eta * grad)) > val - ((eta/2.) * grad.dot(grad)): eta = beta * eta w = w - eta*grad w = w - self.eta * dF(x,y,w) elif self.stop_criteria == "opt": grad = dF(x,y,w) while np.sqrt(grad.dot(grad)) > self.epsilon: #print F(x,y,w) #print w, self.eta * dF(x,y,w) w = w - self.eta * grad grad = dF(x,y,w) self.learnt_w = w def predict(self, x_test): x = self.add_bias_column(x_test) y = np.dot(x, self.learnt_w) y[y<0] = -1 y[y>0] = 1 return y def score(self, x_test, y_test): y_predict = self.predict(x_test) bools = y_predict == y_test accuracy = bools[bools == True].shape[0]/float(bools.shape[0]) return accuracy def dataset_fixed_cov(n,dim): '''Generate 2 Gaussians samples with the same covariance matrix''' C = np.array([[-0.8, 0.2], [0.8, 0.2]]) X = np.r_[np.dot(np.random.randn(n, dim), C) + np.array([1, -1]), np.dot(np.random.randn(n, dim), C) + np.array([-1, 1])] y = np.hstack((-np.ones(n), np.ones(n))) return X, y x_train, y_train = dataset_fixed_cov(250,2) x_test, y_test = dataset_fixed_cov(250,2) scaler = preprocessing.StandardScaler().fit(x_train) scaler.transform(x_train) scaler.transform(x_test); plt.plot(x_train[:250,0], x_train[:250,1], 'o', color='red') plt.plot(x_train[250:,0], x_train[250:,1], 'o', color='blue') svm = my_svm(stop_criteria="opt") svm.fit(x_train, y_train) line = svm.learnt_w print line xx = np.linspace(-5, 5) yy = ((-line[0]/line[1])*xx)+(-line[2]/line[1]) # y = (-a/b)*x + (-c/b) plt.plot(xx, yy) svm = LinearSVC() svm.fit(x_train, y_train) line = svm.coef_ xx = np.linspace(-5, 5) yy = ((-svm.coef_[0][0]/svm.coef_[0][1])*xx)+(-svm.intercept_[0]/svm.coef_[0][1]) # y = (-a/b)*x + (-c/b) plt.plot(xx, yy) Explanation: Add several options to my-svm that allow the user to choose between the different stopping criteria: i) maximum number of iterations; ii) optimization-based criterion. 
End of explanation def my_sgd(x, y, F, dF, eta=0.01, epochs=20): w = np.zeros(x.shape[1]) for i in range(epochs): for j in range(x.shape[0]): grad = dF(x[j:j+1,:],y[j:j+1],w) w = w - eta*grad return w x_train,y_train = dataset_fixed_cov(50000,2) x_copy = add_bias_column(x_train) %time sgd_w = my_sgd(x_copy, y_train, compute_obj, compute_grad) Explanation: 2.1.5 Stochastic Gradient Descent Write a function my-sgd that implements stochastic gradient, as described in the lecture slides. The function is initialised at w_0 = 0. The function takes as input η_0 and t_0 and a stopping criterion along the same lines as for the regular gradient descent. Only the counterparts of stopping criteria i) and iii) apply here. The function returns the last iterate w_T. End of explanation
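The my_sgd above keeps the step size constant; one common reading of the η_0 and t_0 inputs mentioned in the exercise (the exact schedule below is an assumption, not necessarily the lecture's) is a decaying step η_t = η_0 / (1 + t/t_0):

```python
import numpy as np

def my_sgd_decay(x, y, dF, eta0=0.1, t0=100.0, epochs=20):
    # Same single-example updates as my_sgd above, but with a decaying step size;
    # dF is expected to be a gradient function such as compute_grad defined earlier.
    w = np.zeros(x.shape[1])
    t = 0
    for _ in range(epochs):
        for j in range(x.shape[0]):
            eta = eta0 / (1.0 + t / t0)
            w = w - eta * dF(x[j:j + 1, :], y[j:j + 1], w)
            t += 1
    return w
```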
12,844
Given the following text description, write Python code to implement the functionality described below step by step Description: Text classification using Neural Networks The goal of this notebook is to learn to use Neural Networks for text classification. In this notebook, we will Step1: Here are all the possible classes Step2: Preprocessing text for the (supervised) CBOW model We will implement a simple classification model in Keras. Raw text requires (sometimes a lot of) preprocessing. The following cells uses Keras to preprocess text Step3: Tokenized sequences are converted to list of token ids (with an integer code) Step4: The tokenizer object stores a mapping (vocabulary) from word strings to token ids that can be inverted to reconstruct the original message (without formatting) Step5: Let's have a closer look at the tokenized sequences Step6: Let's zoom on the distribution of regular sized posts. The vast majority of the posts have less than 1000 symbols Step7: Let's truncate and pad all the sequences to 1000 symbols to build the training set Step8: A simple supervised CBOW model in Keras The following computes a very simple model, as described in fastText Step9: Exercice - compute model accuracy on test set Step10: Building more complex models Exercise - From the previous template, build more complex models using Step11: Loading pre-trained embeddings The file glove100K.100d.txt is an extract of Glove Vectors, that were trained on english Wikipedia 2014 + Gigaword 5 (6B tokens). We extracted the 100 000 most frequent words. They have a dimension of 100 Step12: Finding most similar words Exercice Build a function to find most similar words, given a word as query Step13: Predict the future better than tarot Step14: Displaying vectors with t-SNE Step15: Using pre-trained embeddings in our model We want to use these pre-trained embeddings for transfer learning. This process is rather similar than transfer learning in image recognition Step16: Build a layer with pre-trained embeddings Step17: A model with pre-trained Embeddings Average word embeddings pre-trained with Glove / Word2Vec usually works suprisingly well. However, when averaging more than 10-15 words, the resulting vector becomes too noisy and classification performance is degraded.
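As a concrete point of reference for the TF-IDF baseline mentioned above, here is a minimal scikit-learn sketch (a comparison baseline only, not part of the Keras pipeline that follows):

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = fetch_20newsgroups(subset='train')
test = fetch_20newsgroups(subset='test')

# Sparse unigram + bigram TF-IDF features fed to a linear classifier
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=50000),
    LogisticRegression(),
)
baseline.fit(train.data, train.target)
print(baseline.score(test.data, test.target))
```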
Python Code: import numpy as np from sklearn.datasets import fetch_20newsgroups newsgroups_train = fetch_20newsgroups(subset='train') newsgroups_test = fetch_20newsgroups(subset='test') sample_idx = 1000 print(newsgroups_train["data"][sample_idx]) target_names = newsgroups_train["target_names"] target_id = newsgroups_train["target"][sample_idx] print("Class of previous message:", target_names[target_id]) Explanation: Text classification using Neural Networks The goal of this notebook is to learn to use Neural Networks for text classification. In this notebook, we will: - Train a shallow model with learning embeddings - Download pre-trained embeddings from Glove - Use these pre-trained embeddings However keep in mind: - Deep Learning can be better on text classification that simpler ML techniques, but only on very large datasets and well designed/tuned models. - We won't be using the most efficient (in terms of computing) techniques, as Keras is good for prototyping but rather inefficient for training small embedding models on text. - The following projects can replicate similar word embedding models much more efficiently: word2vec and gensim's word2vec (self-supervised learning only), fastText (both supervised and self-supervised learning), Vowpal Wabbit (supervised learning). - Plain shallow sparse TF-IDF bigrams features without any embedding and Logistic Regression or Multinomial Naive Bayes is often competitive in small to medium datasets. 20 Newsgroups Dataset The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups http://qwone.com/~jason/20Newsgroups/ End of explanation target_names Explanation: Here are all the possible classes: End of explanation from keras.preprocessing.text import Tokenizer MAX_NB_WORDS = 20000 # get the raw text data texts_train = newsgroups_train["data"] texts_test = newsgroups_test["data"] # finally, vectorize the text samples into a 2D integer tensor tokenizer = Tokenizer(nb_words=MAX_NB_WORDS, char_level=False) tokenizer.fit_on_texts(texts_train) sequences = tokenizer.texts_to_sequences(texts_train) sequences_test = tokenizer.texts_to_sequences(texts_test) word_index = tokenizer.word_index print('Found %s unique tokens.' % len(word_index)) Explanation: Preprocessing text for the (supervised) CBOW model We will implement a simple classification model in Keras. Raw text requires (sometimes a lot of) preprocessing. The following cells uses Keras to preprocess text: - using a tokenizer. You may use different tokenizers (from scikit-learn, NLTK, custom Python function etc.). 
This converts the texts into sequences of indices representing the 20000 most frequent words - sequences have different lengths, so we pad them (add 0s at the end until the sequence is of length 1000) - we convert the output classes as 1-hot encodings End of explanation sequences[0] Explanation: Tokenized sequences are converted to list of token ids (with an integer code): End of explanation type(tokenizer.word_index), len(tokenizer.word_index) index_to_word = dict((i, w) for w, i in tokenizer.word_index.items()) " ".join([index_to_word[i] for i in sequences[0]]) Explanation: The tokenizer object stores a mapping (vocabulary) from word strings to token ids that can be inverted to reconstruct the original message (without formatting): End of explanation seq_lens = [len(s) for s in sequences] print("average length: %0.1f" % np.mean(seq_lens)) print("max length: %d" % max(seq_lens)) %matplotlib inline import matplotlib.pyplot as plt plt.hist(seq_lens, bins=50); Explanation: Let's have a closer look at the tokenized sequences: End of explanation plt.hist([l for l in seq_lens if l < 3000], bins=50); Explanation: Let's zoom on the distribution of regular sized posts. The vast majority of the posts have less than 1000 symbols: End of explanation from keras.preprocessing.sequence import pad_sequences MAX_SEQUENCE_LENGTH = 1000 # pad sequences with 0s x_train = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH) x_test = pad_sequences(sequences_test, maxlen=MAX_SEQUENCE_LENGTH) print('Shape of data tensor:', x_train.shape) print('Shape of data test tensor:', x_test.shape) from keras.utils.np_utils import to_categorical y_train = newsgroups_train["target"] y_test = newsgroups_test["target"] y_train = to_categorical(np.asarray(y_train)) print('Shape of label tensor:', y_train.shape) Explanation: Let's truncate and pad all the sequences to 1000 symbols to build the training set: End of explanation from keras.layers import Dense, Input, Flatten from keras.layers import GlobalAveragePooling1D, Embedding from keras.models import Model EMBEDDING_DIM = 50 N_CLASSES = len(target_names) # input: a sequence of MAX_SEQUENCE_LENGTH integers sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32') embedding_layer = Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=MAX_SEQUENCE_LENGTH, trainable=True) embedded_sequences = embedding_layer(sequence_input) average = GlobalAveragePooling1D()(embedded_sequences) predictions = Dense(N_CLASSES, activation='softmax')(average) model = Model(sequence_input, predictions) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'], verbose=2) model.fit(x_train, y_train, validation_split=0.1, nb_epoch=10, batch_size=128, verbose=2) Explanation: A simple supervised CBOW model in Keras The following computes a very simple model, as described in fastText: <img src="images/fasttext.svg" style="width: 600px;" /> Build an embedding layer mapping each word to a vector representation Compute the vector representation of all words in each sequence and average them Add a dense layer to output 20 classes (+ softmax) End of explanation # %load solutions/accuracy.py output_test = model.predict(x_test) test_casses = np.argmax(output_test, axis=-1) print("test accuracy:", np.mean(test_casses == y_test)) Explanation: Exercice - compute model accuracy on test set End of explanation # %load solutions/lstm.py from keras.layers import LSTM, Conv1D, MaxPooling1D # input: a sequence of MAX_SEQUENCE_LENGTH integers sequence_input = 
Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32') embedded_sequences = embedding_layer(sequence_input) # 1D convolution with 64 output channels x = Conv1D(64, 5)(embedded_sequences) # MaxPool divides the length of the sequence by 5 x = MaxPooling1D(5)(x) x = Conv1D(64, 5)(x) x = MaxPooling1D(5)(x) # LSTM layer with a hidden size of 64 x = LSTM(64)(x) predictions = Dense(20, activation='softmax')(x) model = Model(sequence_input, predictions) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc']) # You will get large speedups with these models by using a GPU # The model might take a lot of time to converge, and even more # if you add dropout (needed to prevent overfitting) # %load solutions/conv1d.py from keras.layers import Conv1D, MaxPooling1D, Flatten sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32') embedded_sequences = embedding_layer(sequence_input) # A 1D convolution with 128 output channels x = Conv1D(128, 5, activation='relu')(embedded_sequences) # MaxPool divides the length of the sequence by 5 x = MaxPooling1D(5)(x) # A 1D convolution with 64 output channels x = Conv1D(64, 5, activation='relu')(x) # MaxPool divides the length of the sequence by 5 x = MaxPooling1D(5)(x) x = Flatten()(x) predictions = Dense(20, activation='softmax')(x) model = Model(sequence_input, predictions) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc']) model.fit(x_train, y_train, validation_split=0.1, nb_epoch=10, batch_size=128, verbose=2) Explanation: Building more complex models Exercise - From the previous template, build more complex models using: - 1d convolution and 1d maxpooling. Note that you will still need a GloabalAveragePooling or Flatten after the convolutions - Recurrent neural networks through LSTM (you will need to reduce sequence length before) <img src="images/unrolled_rnn_one_output_2.svg" style="width: 600px;" /> Bonus - You may try different architectures with: - more intermediate layers, combination of dense, conv, recurrent - different recurrent (GRU, RNN) - bidirectional LSTMs Note: The goal is to build working models rather than getting better test accuracy. To achieve much better results, we'd need more computation time and data quantity. Build your model, and verify that they converge to OK results. End of explanation embeddings_index = {} embeddings_vectors = [] f = open('glove100K.100d.txt', 'rb') word_idx = 0 for line in f: values = line.decode('utf-8').split() word = values[0] vector = np.asarray(values[1:], dtype='float32') embeddings_index[word] = word_idx embeddings_vectors.append(vector) word_idx = word_idx + 1 f.close() inv_index = {v: k for k, v in embeddings_index.items()} print("found %d different words in the file" % word_idx) # Stack all embeddings in a large numpy array glove_embeddings = np.vstack(embeddings_vectors) glove_norms = np.linalg.norm(glove_embeddings, axis=-1, keepdims=True) glove_embeddings_normed = glove_embeddings / glove_norms print(glove_embeddings.shape) def get_emb(word): idx = embeddings_index.get(word) if idx is None: return None else: return glove_embeddings[idx] def get_normed_emb(word): idx = embeddings_index.get(word) if idx is None: return None else: return glove_embeddings_normed[idx] get_emb("computer") Explanation: Loading pre-trained embeddings The file glove100K.100d.txt is an extract of Glove Vectors, that were trained on english Wikipedia 2014 + Gigaword 5 (6B tokens). We extracted the 100 000 most frequent words. 
They have a dimension of 100 End of explanation # %load solutions/most_similar.py def most_similar(words, topn=10): query_emb = 0 # If we have a list of words instead of one word # (bonus question) if type(words) == list: for word in words: query_emb += get_emb(word) else: query_emb = get_emb(words) query_emb = query_emb / np.linalg.norm(query_emb) # Large numpy vector with all cosine similarities # between emb and all other words cosines = np.dot(glove_embeddings_normed, query_emb) # topn most similar indexes corresponding to cosines idxs = np.argsort(cosines)[::-1][:topn] # pretty return with word and similarity return [(inv_index[idx], cosines[idx]) for idx in idxs] most_similar("cpu") most_similar("pitt") most_similar("jolie") Explanation: Finding most similar words Exercice Build a function to find most similar words, given a word as query: - lookup the vector for the query word in the Glove index; - compute the cosine similarity between a word embedding and all other words; - display the top 10 most similar words. Bonus Change your function so that it takes multiple words as input (by averaging them) End of explanation np.dot(get_normed_emb('aniston'), get_normed_emb('pitt')) np.dot(get_normed_emb('jolie'), get_normed_emb('pitt')) most_similar("1") # bonus: yangtze is a chinese river most_similar(["river", "chinese"]) Explanation: Predict the future better than tarot: End of explanation from sklearn.manifold import TSNE word_emb_tsne = TSNE(perplexity=30).fit_transform(glove_embeddings_normed[:1000]) %matplotlib inline import matplotlib.pyplot as plt plt.figure(figsize=(40, 40)) axis = plt.gca() np.set_printoptions(suppress=True) plt.scatter(word_emb_tsne[:, 0], word_emb_tsne[:, 1], marker=".", s=1) for idx in range(1000): plt.annotate(inv_index[idx], xy=(word_emb_tsne[idx, 0], word_emb_tsne[idx, 1]), xytext=(0, 0), textcoords='offset points') plt.savefig("tsne.png") plt.show() Explanation: Displaying vectors with t-SNE End of explanation EMBEDDING_DIM = 100 # prepare embedding matrix nb_words_in_matrix = 0 nb_words = min(MAX_NB_WORDS, len(word_index)) embedding_matrix = np.zeros((nb_words, EMBEDDING_DIM)) for word, i in word_index.items(): if i >= MAX_NB_WORDS: continue embedding_vector = get_emb(word) if embedding_vector is not None: # words not found in embedding index will be all-zeros. embedding_matrix[i] = embedding_vector nb_words_in_matrix = nb_words_in_matrix + 1 print("added %d words in the embedding matrix" % nb_words_in_matrix) Explanation: Using pre-trained embeddings in our model We want to use these pre-trained embeddings for transfer learning. This process is rather similar than transfer learning in image recognition: the features learnt on words might help us bootstrap the learning process, and increase performance if we don't have enough training data. 
- We initialize embedding matrix from the model with Glove embeddings: - take all words from our 20 Newgroup vocabulary (MAX_NB_WORDS = 20000), and look up their Glove embedding - place the Glove embedding at the corresponding index in the matrix - if the word is not in the Glove vocabulary, we only place zeros in the matrix - We may fix these embeddings or fine-tune them End of explanation pretrained_embedding_layer = Embedding( MAX_NB_WORDS, EMBEDDING_DIM, weights=[embedding_matrix], input_length=MAX_SEQUENCE_LENGTH, ) Explanation: Build a layer with pre-trained embeddings: End of explanation sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32') embedded_sequences = pretrained_embedding_layer(sequence_input) average = GlobalAveragePooling1D()(embedded_sequences) predictions = Dense(N_CLASSES, activation='softmax')(average) model = Model(sequence_input, predictions) # We don't want to fine-tune embeddings model.layers[1].trainable=False model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc']) model.fit(x_train, y_train, validation_split=0.1, nb_epoch=10, batch_size=128, verbose=2) # Note, on this type of task, this technique will # degrade results as we train much less parameters # and we average a large number pre-trained embeddings. # You will notice much less overfitting then! # Using convolutions / LSTM will help # It is also advisable to treat seperately pre-trained # embeddings and words out of vocabulary. Explanation: A model with pre-trained Embeddings Average word embeddings pre-trained with Glove / Word2Vec usually works suprisingly well. However, when averaging more than 10-15 words, the resulting vector becomes too noisy and classification performance is degraded. End of explanation
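A quick follow-up check (assuming the embedding_matrix built above, where rows for words missing from the Glove file stay all-zero) is to measure how much of the 20 Newsgroups vocabulary the pre-trained vectors actually cover:

```python
import numpy as np

# Rows that stayed all-zero correspond to tokenizer words not found in Glove
# (index 0 is unused by the tokenizer, so it also shows up as a zero row).
covered = np.count_nonzero(embedding_matrix.any(axis=1))
print("covered words: %d / %d" % (covered, embedding_matrix.shape[0]))
```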
12,845
Given the following text description, write Python code to implement the functionality described below step by step Description: Machine Learning Engineer Nanodegree Model Evaluation & Validation Project 1 Step1: Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MDEV', will be the variable we seek to predict. These are stored in features and prices, respectively. Implementation Step3: Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset Step4: Question 2 - Goodness of Fit Assume that a dataset contains five data points and a model made the following predictions for the target variable Step5: Answer Step6: Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint Step7: Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint Step9: Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint Step10: Making Predictions Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model What maximum depth does the optimal model have? How does this result compare to your guess in Question 6? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model. Step11: Answer Step12: Answer
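For reference, the verbal description of the coefficient of determination in this project corresponds to the standard formula that r2_score implements:

$$ R^2 = 1 - \frac{\sum_{i}\left(y_i - \hat{y}_i\right)^2}{\sum_{i}\left(y_i - \bar{y}\right)^2} $$

where $y_i$ are the true values, $\hat{y}_i$ the predictions, and $\bar{y}$ the mean of the true values.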
Python Code: # Import libraries necessary for this project import numpy as np import pandas as pd import visuals as vs # Supplementary code from sklearn.cross_validation import ShuffleSplit # Pretty display for notebooks %matplotlib inline # Load the Boston housing dataset data = pd.read_csv('housing.csv') prices = data['MDEV'] features = data.drop('MDEV', axis = 1) # Success print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape) Explanation: Machine Learning Engineer Nanodegree Model Evaluation & Validation Project 1: Predicting Boston Housing Prices Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting Started In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis. The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preoprocessing steps have been made to the dataset: - 16 data points have an 'MDEV' value of 50.0. These data points likely contain missing or censored values and have been removed. - 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed. - The features 'RM', 'LSTAT', 'PTRATIO', and 'MDEV' are essential. The remaining non-relevant features have been excluded. - The feature 'MDEV' has been multiplicatively scaled to account for 35 years of market inflation. Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported. 
End of explanation # TODO: Minimum price of the data minimum_price = np.min(prices) # TODO: Maximum price of the data maximum_price = np.max(prices) # TODO: Mean price of the data mean_price = np.mean(prices) # TODO: Median price of the data median_price = np.median(prices) # TODO: Standard deviation of prices of the data std_price = np.std(prices) # Show the calculated statistics print "Statistics for Boston housing dataset:\n" print "Minimum price: ${:,.2f}".format(minimum_price) print "Maximum price: ${:,.2f}".format(maximum_price) print "Mean price: ${:,.2f}".format(mean_price) print "Median price ${:,.2f}".format(median_price) print "Standard deviation of prices: ${:,.2f}".format(std_price) Explanation: Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MDEV', will be the variable we seek to predict. These are stored in features and prices, respectively. Implementation: Calculate Statistics For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model. In the code cell below, you will need to implement the following: - Calculate the minimum, maximum, mean, median, and standard deviation of 'MDEV', which is stored in prices. - Store each calculation in their respective variable. End of explanation # TODO: Import 'r2_score' from sklearn.metrics import r2_score def performance_metric(y_true, y_predict): Calculates and returns the performance score between true and predicted values based on the metric chosen. # TODO: Calculate the performance score between 'y_true' and 'y_predict' score = r2_score(y_true, y_predict) # Return the score return score # performance_metric([1,2,3],[1,2,3]) Explanation: Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of all Boston homeowners who have a greater net worth than homeowners in the neighborhood. - 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood. Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MDEV' or a decrease in the value of 'MDEV'? Justify your answer for each. Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7? 
Answer: based on my intuition, 'MDEV' will usually increase as 'RM' increases, since more rooms generally indicates a larger, more valuable home. By contrast, an increase in 'LSTAT' or 'PTRATIO' should lead to a decrease in 'MDEV': a higher 'LSTAT' means a larger share of Boston homeowners have a greater net worth than those in the neighborhood, and a higher 'PTRATIO' means fewer teachers per student, both of which point to fewer superior resources in the neighborhood. Developing a Model In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions. Implementation: Define a Performance Metric It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions. The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 always fails to predict the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is no better than one that naively predicts the mean of the target variable. For the performance_metric function in the code cell below, you will need to implement the following: - Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict. - Assign the performance score to the score variable. End of explanation # Calculate the performance of this model score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]) print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score) Explanation: Question 2 - Goodness of Fit Assume that a dataset contains five data points and a model made the following predictions for the target variable: | True Value | Prediction | | :-------------: | :--------: | | 3.0 | 2.5 | | -0.5 | 0.0 | | 2.0 | 2.1 | | 7.0 | 7.8 | | 4.2 | 5.3 | Would you consider this model to have successfully captured the variation of the target variable? Why or why not? Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination. End of explanation # TODO: Import 'train_test_split' from sklearn.cross_validation import train_test_split # TODO: Shuffle and split the data into training and testing subsets X_train, X_test, y_train, y_test = train_test_split( features, prices, test_size=0.2, random_state=50) # Success print "Training and testing split was successful." Explanation: Answer: the model has successfully captured most of the variation of the target variable, as indicated by an R<sup>2</sup> score of 0.923. Implementation: Shuffle and Split Data Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets.
Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset. For the code cell below, you will need to implement the following: - Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the random_state for train_test_split to a value of your choice. This ensures results are consistent. - Assign the train and testing splits to X_train, X_test, y_train, and y_test. End of explanation # Produce learning curves for varying training set sizes and maximum depths vs.ModelLearning(features, prices) Explanation: Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: the main purpose of splitting a dataset into training and testing subsets is to limit problems like overfitting and to get an insight into how the model will generalize to an independent dataset. Without a held-out testing set, there would be no way to estimate how well the model performs on data it has never seen. Analyzing Model Performance In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning Curves The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. Run the code cell below and use these graphs to answer the following question. End of explanation vs.ModelComplexity(X_train, y_train) Explanation: Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to particular scores? Answer: take the graph with a maximum depth of 3, for instance: both the training curve and the testing curve converge to a particular score of about 0.8. As more training points are added, the testing score improves while the training score drops slightly, and once the number of training points exceeds roughly 200 both curves have essentially converged to 0.8. So when there are only a few training points, adding more benefits the model, but once a sufficient number is reached, additional training points provide little further benefit. Complexity Curves The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation.
Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function. Run the code cell below and use this graph to answer the following two questions. End of explanation # TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV' from sklearn.metrics import make_scorer from sklearn.tree import DecisionTreeRegressor from sklearn.grid_search import GridSearchCV def fit_model(X, y): """Performs grid search over the 'max_depth' parameter for a decision tree regressor trained on the input data [X, y].""" # Create cross-validation sets from the training data cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0) # TODO: Create a decision tree regressor object regressor = DecisionTreeRegressor() # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10 params = {'max_depth': range(1,11)} # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer' scoring_fnc = make_scorer(performance_metric) # TODO: Create the grid search object grid = GridSearchCV(regressor, params, scoring = scoring_fnc, cv = cv_sets) # Fit the grid search object to the data to compute the optimal model grid = grid.fit(X, y) # Return the optimal model after fitting the data return grid.best_estimator_ Explanation: Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering from high bias or high variance? Answer: the model trained with a maximum depth of 1 suffers from high bias, because both curves converge to a very low score, while the model trained with a maximum depth of 10 suffers from high variance, shown by the large gap between the training and validation curves, which indicates overfitting. Question 6 - Best-Guess Optimal Model Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition led you to this answer? Answer: based on my intuition, a model trained with a maximum depth of 4 generalizes best to unseen data, since at that depth both curves converge to a reasonably good score while the gap between them is still small. It strikes a balance between bias and variance. Evaluating Model Performance In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model. Question 7 - Grid Search What is the grid search technique and how can it be applied to optimize a learning algorithm? Answer: the grid search technique builds a grid of candidate values for one or more hyperparameters, trains and scores the model for every combination in that grid, and keeps the combination that achieves the best score. Applied to a learning algorithm, it automates hyperparameter tuning; in this project, for example, it searches over 'max_depth' values from 1 to 10 for the decision tree regressor. Question 8 - Cross-Validation What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model? Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set? Answer: in k-fold cross-validation, the original dataset is randomly partitioned into k equally sized subsets (folds). A single fold is held out for validation and the remaining k-1 folds are used for training.
The process is then repeated k times, with each of the k folds used exactly once as the validation data, and the k results are then averaged to produce a single, more reliable estimate. The advantage for grid search is that all observations are used for both training and validation and each observation is used for validation exactly once, so the score assigned to each parameter combination does not depend on one particular lucky or unlucky split, and the selected model generalizes better. Implementation: Fitting a Model Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms. For the fit_model function in the code cell below, you will need to implement the following: - Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object. - Assign this object to the 'regressor' variable. - Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable. - Use make_scorer from sklearn.metrics to create a scoring function object. - Pass the performance_metric function as a parameter to the object. - Assign this scoring function to the 'scoring_fnc' variable. - Use GridSearchCV from sklearn.grid_search to create a grid search object. - Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object. - Assign the GridSearchCV object to the 'grid' variable. End of explanation # Fit the training data to the model using grid search reg = fit_model(X_train, y_train) # Produce the value for 'max_depth' print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']) Explanation: Making Predictions Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model What maximum depth does the optimal model have? How does this result compare to your guess in Question 6? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model. End of explanation # Produce a matrix for client data client_data = [[5, 34, 15], # Client 1 [4, 55, 22], # Client 2 [8, 7, 12]] # Client 3 # Show predictions for i, price in enumerate(reg.predict(client_data)): print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price) Explanation: Answer: the maximum depth of the optimal model is 4, the same as my guess in Question 6. Question 10 - Predicting Selling Prices Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell.
You have collected the following information from three of your clients: | Feature | Client 1 | Client 2 | Client 3 | | :---: | :---: | :---: | :---: | | Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms | | Household net worth (income) | Top 34th percent | Bottom 45th percent | Top 7th percent | | Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 | What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features? Hint: Use the statistics you calculated in the Data Exploration section to help justify your response. Run the code block below to have your optimized model make predictions for each client's home. End of explanation vs.PredictTrials(features, prices, fit_model, client_data) Explanation: Answer: the predicted selling prices for the homes of Clients 1 to 3 are $339,570.00, $212,223.53 and $938,053.85, respectively. These values line up with intuition: Client 3, with the most rooms, the wealthiest neighborhood and the best student-teacher ratio, is priced highest, while Client 2 is priced lowest. It seems that "Total number of rooms in home" and "Household net worth (income)" are the more influential features for price prediction. Sensitivity An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on. End of explanation
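For reference, PredictTrials is a helper from the supplementary visuals.py module. A rough, hand-rolled equivalent of the sensitivity check it performs might look like the sketch below; this only illustrates the idea (refit on different splits and watch one client's prediction move), and the helper's exact output will differ.
# sketch of a PredictTrials-style sensitivity check, reusing fit_model, features, prices and client_data from above
trial_prices = []
for k in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(features, prices, test_size = 0.2, random_state = k)
    reg_k = fit_model(X_tr, y_tr)
    price_k = reg_k.predict([client_data[0]])[0]
    trial_prices.append(price_k)
    print "Trial {}: ${:,.2f}".format(k + 1, price_k)
print "Range in prices: ${:,.2f}".format(max(trial_prices) - min(trial_prices))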
12,846
Given the following text description, write Python code to implement the functionality described below step by step Description: Trace Analysis Examples Idle States Residency Analysis This notebook shows the features provided by the idle state analysis module. It will be necessary to collect the following events Step1: Target Configuration The target configuration is used to describe and configure your test environment. You can find more details in examples/utils/testenv_example.ipynb. Our target is a Juno R0 development board running Linux. Step2: Workload configuration and execution Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb. This experiment Step3: Parse trace and analyse data Step4: Per-CPU Idle State Residency Profiling It is possible to get the residency in each idle state of a CPU or a cluster with the following commands Step5: For the translation between the idle value and its description Step6: The IdleAnalysis module provide methods for plotting residency data Step7: CPU idle state over time Take a look at the target's idle states Step8: Now use trappy to plot the idle state of a single CPU over time. Higher is deeper Step9: Examine idle period lengths Let's get a DataFrame showing the length of each idle period on the CPU and the index of the cpuidle state that was entered. Step10: Make a scatter plot of the length of idle periods against the state that was entered. We should see that for long idle periods, deeper states were entered (i.e. we should see a positive corellation between the X and Y axes). Step11: Draw a histogram of the length of idle periods shorter than 100ms in which the CPU entered cpuidle state 2. Step12: Per-cluster Idle State Residency
Python Code: import logging from conf import LisaLogging LisaLogging.setup() %matplotlib inline import os # Support to access the remote target from env import TestEnv # Support to access cpuidle information from the target from devlib import * # Support to configure and run RTApp based workloads from wlgen import RTA, Ramp # Support for trace events analysis from trace import Trace # DataFrame support import pandas as pd from pandas import DataFrame # Trappy (plots) support from trappy import ILinePlot from trappy.stats.grammar import Parser Explanation: Trace Analysis Examples Idle States Residency Analysis This notebook shows the features provided by the idle state analysis module. It will be necessary to collect the following events: cpu_idle, to filter out intervals of time in which the CPU is idle sched_switch, to recognise tasks on kernelshark Details on idle states profiling ar given in Per-CPU/Per-Cluster Idle State Residency Profiling below. End of explanation # Setup a target configuration my_conf = { # Target platform and board "platform" : 'linux', "board" : 'juno', # Target board IP/MAC address "host" : '192.168.0.1', # Login credentials "username" : 'root', "password" : 'juno', "results_dir" : "IdleAnalysis", # RTApp calibration values (comment to let LISA do a calibration run) #"rtapp-calib" : { # "0": 318, "1": 125, "2": 124, "3": 318, "4": 318, "5": 319 #}, # Tools required by the experiments "tools" : ['rt-app', 'trace-cmd'], "modules" : ['bl', 'cpufreq', 'cpuidle'], "exclude_modules" : ['hwmon'], # FTrace events to collect for all the tests configuration which have # the "ftrace" flag enabled "ftrace" : { "events" : [ "cpu_idle", "sched_switch" ], "buffsize" : 10 * 1024, }, } # Initialize a test environment te = TestEnv(my_conf, wipe=False, force_new=True) target = te.target # We're going to run quite a heavy workload to try and create short idle periods. # Let's set the CPU frequency to max to make sure those idle periods exist # (otherwise at a lower frequency the workload might overload the CPU # so it never went idle at all) te.target.cpufreq.set_all_governors('performance') Explanation: Target Configuration The target configuration is used to describe and configure your test environment. You can find more details in examples/utils/testenv_example.ipynb. Our target is a Juno R0 development board running Linux. End of explanation cpu = 1 def experiment(te): # Create RTApp RAMP task rtapp = RTA(te.target, 'ramp', calibration=te.calibration()) rtapp.conf(kind='profile', params={ 'ramp' : Ramp( start_pct = 80, end_pct = 10, delta_pct = 5, time_s = 0.5, period_ms = 5, cpus = [cpu]).get() }) # FTrace the execution of this workload te.ftrace.start() # Momentarily wake all CPUs to ensure cpu_idle trace events are present from the beginning te.target.cpuidle.perturb_cpus() rtapp.run(out_dir=te.res_dir) te.ftrace.stop() # Collect and keep track of the trace trace_file = os.path.join(te.res_dir, 'trace.dat') te.ftrace.get_trace(trace_file) # Dump platform descriptor te.platform_dump(te.res_dir) experiment(te) Explanation: Workload configuration and execution Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb. 
This experiment: - Runs a periodic RT-App workload, pinned to CPU 1, that ramps down from 80% to 10% over 7.5 seconds - Uses perturb_cpus to ensure 'cpu_idle' events are present in the trace for all CPUs - Triggers and collects ftrace output End of explanation # Base folder where tests folder are located res_dir = te.res_dir logging.info('Content of the output folder %s', res_dir) !tree {res_dir} trace = Trace(res_dir, my_conf['ftrace']['events'], te.platform) Explanation: Parse trace and analyse data End of explanation # Idle state residency for CPU 3 CPU=3 state_res = trace.data_frame.cpu_idle_state_residency(CPU) state_res Explanation: Per-CPU Idle State Residency Profiling It is possible to get the residency in each idle state of a CPU or a cluster with the following commands: End of explanation DataFrame(data={'value': state_res.index.values, 'name': [te.target.cpuidle.get_state(i, cpu=CPU) for i in state_res.index.values]}) Explanation: For the translation between the idle value and its description: End of explanation ia = trace.analysis.idle # Actual time spent in each idle state ia.plotCPUIdleStateResidency([1,2]) # Percentage of time spent in each idle state ia.plotCPUIdleStateResidency([1,2], pct=True) Explanation: The IdleAnalysis module provide methods for plotting residency data: End of explanation te.target.cpuidle.get_states() Explanation: CPU idle state over time Take a look at the target's idle states: End of explanation p = Parser(trace.ftrace, filters = {'cpu_id': cpu}) idle_df = p.solve('cpu_idle:state') ILinePlot(idle_df, column=cpu, drawstyle='steps-post').view() Explanation: Now use trappy to plot the idle state of a single CPU over time. Higher is deeper: the plot is at -1 when the CPU is active, 0 for WFI, 1 for CPU sleep, etc. We should see that as the workload ramps down and the idle periods become longer, the idle states used become deeper. End of explanation def get_idle_periods(df): series = df[cpu] series = series[series.shift() != series].dropna() if series.iloc[0] == -1: series = series.iloc[1:] idles = series.iloc[0::2] wakeups = series.iloc[1::2] if len(idles) > len(wakeups): idles = idles.iloc[:-1] else: wakeups = wakeups.iloc[:-1] lengths = pd.Series((wakeups.index - idles.index), index=idles.index) return pd.DataFrame({"length": lengths, "state": idles}) Explanation: Examine idle period lengths Let's get a DataFrame showing the length of each idle period on the CPU and the index of the cpuidle state that was entered. End of explanation lengths = get_idle_periods(idle_df) lengths.plot(kind='scatter', x='length', y='state') Explanation: Make a scatter plot of the length of idle periods against the state that was entered. We should see that for long idle periods, deeper states were entered (i.e. we should see a positive corellation between the X and Y axes). End of explanation df = lengths[(lengths['state'] == 2) & (lengths['length'] < 0.010)] df.hist(column='length', bins=50) Explanation: Draw a histogram of the length of idle periods shorter than 100ms in which the CPU entered cpuidle state 2. End of explanation # Idle state residency for CPUs in the big cluster trace.data_frame.cluster_idle_state_residency('big') # Actual time spent in each idle state for CPUs in the big and LITTLE clusters ia.plotClusterIdleStateResidency(['big', 'LITTLE']) # Percentage of time spent in each idle state for CPUs in the big and LITTLE clusters ia.plotClusterIdleStateResidency(['big', 'LITTLE'], pct=True) Explanation: Per-cluster Idle State Residency End of explanation
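Going back to the idle-period analysis above, the lengths DataFrame can also be summarised numerically as a complement to the scatter plot and histogram. This is a small sketch using plain pandas on the columns built by get_idle_periods; it is not part of the original notebook flow.
# per-state summary statistics of idle-period lengths, using the `lengths` DataFrame created earlier
lengths.groupby('state')['length'].describe()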
12,847
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Neural Networks Theano Python library that provides efficient (low-level) tools for working with Neural Networks In particular Step1: Inspecting the data Let's load some data Step4: and then visualise it Step5: Transform to "features" Step6: The labels we transform to a "one-hot" encoding Step7: For example, let's inspect the first 3 labels Step8: Simple Multi-Layer Perceptron (MLP) The simplest kind of Artificial Neural Network is as Multi-Layer Perceptron (MLP) with a single hidden layer. Step9: First we define the "architecture" of the network Step10: then we compile it. This takes the symbolic computational graph of the model and compiles it an efficient implementation which can then be used to train and evaluate the model. Note that we have to specify what loss/objective function we want to use as well which optimisation algorithm to use. SGD stands for Stochastic Gradient Descent. Step11: Next we train the model on our training data. Watch the loss, which is the objective function which we are minimising, and the estimated accuracy of the model. Step12: Once the model is trained, we can evaluate its performance on the test data. Step13: A Deeper MLP Next we build a two-layer MLP with the same number of hidden nodes, half in each layer. Step14: Manual Autoencoder Step15: Stacked Autoencoder
Python Code: from __future__ import absolute_import from __future__ import print_function from ipywidgets import interact, interactive, widgets import numpy as np np.random.seed(1337) # for reproducibility Explanation: Deep Neural Networks Theano Python library that provides efficient (low-level) tools for working with Neural Networks In particular: Automatic Differentiation (AD) Compiled computation graphs GPU accelerated computation Keras High level library for specifying and training neural networks Can use Theano or TensorFlow as backend The MNIST Dataset 70,000 handwritten digits 60,000 for training 10,000 for testing As 28x28 pixel images TODO implement layer-by-layer training in the Stacked Autoencoder implement supervised fine tuning on pre-trained "Autoencoder" implement filter visualisation by gradient ascent on neuron activations Data Preprocessing End of explanation from keras.datasets import mnist (images_train, labels_train), (images_test, labels_test) = mnist.load_data() print('images',images_train.shape) print('labels', labels_train.shape) Explanation: Inspecting the data Let's load some data End of explanation %matplotlib inline import matplotlib import matplotlib.pyplot as plt def plot_mnist_digit(image, figsize=None): Plot a single MNIST image. fig = plt.figure() ax = fig.add_subplot(1, 1, 1) if figsize: ax.set_figsize(*figsize) ax.matshow(image, cmap = matplotlib.cm.binary) plt.xticks(np.array([])) plt.yticks(np.array([])) plt.show() def plot_1_by_2_images(image, reconstruction, figsize=None): fig = plt.figure(figsize=figsize) ax = fig.add_subplot(1, 2, 1) ax.matshow(image, cmap = matplotlib.cm.binary) plt.xticks(np.array([])) plt.yticks(np.array([])) ax = fig.add_subplot(1, 2, 2) ax.matshow(reconstruction, cmap = matplotlib.cm.binary) plt.xticks(np.array([])) plt.yticks(np.array([])) plt.show() def plot_10_by_10_images(images, figsize=None): Plot 100 MNIST images in a 10 by 10 table. Note that we crop the images so that they appear reasonably close together. The image is post-processed to give the appearance of being continued. 
fig = plt.figure(figsize=figsize) #images = [image[3:25, 3:25] for image in images] #image = np.concatenate(images, axis=1) for x in range(10): for y in range(10): ax = fig.add_subplot(10, 10, 10*y+x+1) ax.matshow(images[10*y+x], cmap = matplotlib.cm.binary) plt.xticks(np.array([])) plt.yticks(np.array([])) plt.show() def draw_image(i): plot_mnist_digit(images_train[i]) print(i, ':', labels_train[i]) interact(draw_image, i=(0, len(images_train)-1)) plot_10_by_10_images(images_train, figsize=(10,10)) Explanation: and then visualise it End of explanation def to_features(X): return X.reshape(-1, 784).astype("float32") / 255.0 def to_images(X): return (X*255.0).astype('uint8').reshape(-1, 28, 28) #print((images_train[0]-(to_images(to_features(images_train[0])))).max()) print('data shape:', images_train.shape) print('features shape', to_features(images_train).shape) # the data, shuffled and split between train and test sets X_train = to_features(images_train) X_test = to_features(images_test) print(X_train.shape, 'training samples') print(X_test.shape, 'test samples') Explanation: Transform to "features" End of explanation # The labels need to be transformed into class indicators from keras.utils import np_utils y_train = np_utils.to_categorical(labels_train, nb_classes=10) y_test = np_utils.to_categorical(labels_test, nb_classes=10) print(y_train.shape, 'train labels') print(y_test.shape, 'test labels') Explanation: The labels we transform to a "one-hot" encoding End of explanation print('labels', labels_train[:3]) print('y', y_train[:3]) Explanation: For example, let's inspect the first 3 labels: End of explanation # Neural Network Architecture Parameters nb_input = 784 nb_hidden = 512 nb_output = 10 # Training Parameters nb_epoch = 1 batch_size = 128 Explanation: Simple Multi-Layer Perceptron (MLP) The simplest kind of Artificial Neural Network is as Multi-Layer Perceptron (MLP) with a single hidden layer. End of explanation from keras.models import Sequential from keras.layers.core import Dense, Activation mlp = Sequential() mlp.add(Dense(output_dim=nb_hidden, input_dim=nb_input, init='uniform')) mlp.add(Activation('sigmoid')) mlp.add(Dense(output_dim=nb_output, input_dim=nb_hidden, init='uniform')) mlp.add(Activation('softmax')) Explanation: First we define the "architecture" of the network End of explanation mlp.compile(loss='categorical_crossentropy', optimizer='SGD') Explanation: then we compile it. This takes the symbolic computational graph of the model and compiles it an efficient implementation which can then be used to train and evaluate the model. Note that we have to specify what loss/objective function we want to use as well which optimisation algorithm to use. SGD stands for Stochastic Gradient Descent. End of explanation mlp.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, show_accuracy=True) Explanation: Next we train the model on our training data. Watch the loss, which is the objective function which we are minimising, and the estimated accuracy of the model. End of explanation mlp.evaluate(X_test, y_test, show_accuracy=True) def draw_mlp_prediction(j): plot_mnist_digit(to_images(X_test)[j]) prediction = mlp.predict_classes(X_test[j:j+1], verbose=False)[0] print(j, ':', '\tpredict:', prediction, '\tactual:', labels_test[j]) interact(draw_mlp_prediction, j=(0, len(X_test)-1)) plot_10_by_10_images(images_test, figsize=(10,10)) Explanation: Once the model is trained, we can evaluate its performance on the test data. 
End of explanation from keras.models import Sequential nb_layers = 2 mlp2 = Sequential() # add hidden layers for i in range(nb_layers): mlp2.add(Dense(output_dim=nb_hidden/nb_layers, input_dim=nb_input if i==0 else nb_hidden/nb_layers, init='uniform')) mlp2.add(Activation('sigmoid')) # add output layer mlp2.add(Dense(output_dim=nb_output, input_dim=nb_hidden/nb_layers, init='uniform')) mlp2.add(Activation('softmax')) mlp2.compile(loss='categorical_crossentropy', optimizer='SGD') mlp2.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch, show_accuracy=True, verbose=1) mlp2.evaluate(X_test, y_test, show_accuracy=True) Explanation: A Deeper MLP Next we build a two-layer MLP with the same number of hidden nodes, half in each layer. End of explanation from keras.models import Sequential from keras.layers.core import Dense, Activation, Dropout mae = Sequential() nb_layers = 1 encoder = [] decoder = [] for i in range(nb_layers): if i>0: encoder.append(Dropout(0.4)) encoder.append(Dense(output_dim=nb_hidden/nb_layers, input_dim=nb_input if i==0 else nb_hidden/nb_layers, init='glorot_uniform')) encoder.append(Activation('sigmoid')) # Note that these are in reverse order decoder.append(Activation('sigmoid')) decoder.append(Dense(output_dim=nb_input if i==0 else nb_hidden/nb_layers, input_dim=nb_hidden/nb_layers, init='glorot_uniform')) #decoder.append(Dropout(0.2)) for layer in encoder: mae.add(layer) for layer in reversed(decoder): mae.add(layer) from keras.optimizers import SGD sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True) mae.compile(loss='mse', optimizer=sgd) # replace with sgd mae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1) def draw_mae_prediction(j): X_plot = X_test[j:j+1] prediction = mae.predict(X_plot, verbose=False) plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0]) interact(draw_mae_prediction, j=(0, len(X_test)-1)) plot_10_by_10_images(images_test, figsize=(10,10)) Explanation: Manual Autoencoder End of explanation from keras.models import Sequential from keras.layers.core import Dense, Activation, Dropout class StackedAutoencoder(object): def __init__(self, layers, mode='autoencoder', activation='sigmoid', init='uniform', final_activation='softmax', dropout=0.2, optimizer='SGD'): self.layers = layers self.mode = mode self.activation = activation self.final_activation = final_activation self.init = init self.dropout = dropout self.optimizer = optimizer self._model = None self.build() self.compile() def _add_layer(self, model, i, is_encoder): if is_encoder: input_dim, output_dim = self.layers[i], self.layers[i+1] activation = self.final_activation if i==len(self.layers)-2 else self.activation else: input_dim, output_dim = self.layers[i+1], self.layers[i] activation = self.activation model.add(Dense(output_dim=output_dim, input_dim=input_dim, init=self.init)) model.add(Activation(activation)) def build(self): self.encoder = Sequential() self.decoder = Sequential() self.autoencoder = Sequential() for i in range(len(self.layers)-1): self._add_layer(self.encoder, i, True) self._add_layer(self.autoencoder, i, True) #if i<len(self.layers)-2: # self.autoencoder.add(Dropout(self.dropout)) # Note that the decoder layers are in reverse order for i in reversed(range(len(self.layers)-1)): self._add_layer(self.decoder, i, False) self._add_layer(self.autoencoder, i, False) def compile(self): print("Compiling the encoder ...") self.encoder.compile(loss='categorical_crossentropy', optimizer=self.optimizer) print("Compiling the decoder 
...") self.decoder.compile(loss='mse', optimizer=self.optimizer) print("Compiling the autoencoder ...") return self.autoencoder.compile(loss='mse', optimizer=self.optimizer) def fit(self, X_train, Y_train, batch_size, nb_epoch, verbose=1): result = self.autoencoder.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=verbose) # copy the weights to the encoder for i, l in enumerate(self.encoder.layers): l.set_weights(self.autoencoder.layers[i].get_weights()) for i in range(len(self.decoder.layers)): self.decoder.layers[-1-i].set_weights(self.autoencoder.layers[-1-i].get_weights()) return result def pretrain(self, X_train, batch_size, nb_epoch, verbose=1): for i in range(len(self.layers)-1): # Greedily train each layer print("Now pretraining layer {} [{}-->{}]".format(i+1, self.layers[i], self.layers[i+1])) ae = Sequential() self._add_layer(ae, i, True) #ae.add(Dropout(self.dropout)) self._add_layer(ae, i, False) ae.compile(loss='mse', optimizer=self.optimizer) ae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=verbose) # Then lift the training data up one layer print("Transforming data from", X_train.shape, "to", (X_train.shape[0], self.layers[i+1])) enc = Sequential() self._add_layer(enc, i, True) enc.compile(loss='mse', optimizer=self.optimizer) enc.layers[0].set_weights(ae.layers[0].get_weights()) enc.layers[1].set_weights(ae.layers[1].get_weights()) X_train = enc.predict(X_train, verbose=verbose) print("Shape check:", X_train.shape) # Then copy the learned weights self.encoder.layers[2*i].set_weights(ae.layers[0].get_weights()) self.encoder.layers[2*i+1].set_weights(ae.layers[1].get_weights()) self.autoencoder.layers[2*i].set_weights(ae.layers[0].get_weights()) self.autoencoder.layers[2*i+1].set_weights(ae.layers[1].get_weights()) self.decoder.layers[-1-(2*i)].set_weights(ae.layers[-1].get_weights()) self.decoder.layers[-1-(2*i+1)].set_weights(ae.layers[-2].get_weights()) self.autoencoder.layers[-1-(2*i)].set_weights(ae.layers[-1].get_weights()) self.autoencoder.layers[-1-(2*i+1)].set_weights(ae.layers[-2].get_weights()) def evaluate(self, X_test, Y_test, show_accuracy=False): return self.autoencoder.evaluate(X_test, Y_test, show_accuracy=show_accuracy) def predict(self, X, verbose=False): return self.autoencoder.predict(X, verbose=verbose) def _get_paths(self, name): model_path = "models/{}_model.yaml".format(name) weights_path = "models/{}_weights.hdf5".format(name) return model_path, weights_path def save(self, name='autoencoder'): model_path, weights_path = self._get_paths(name) open(model_path, 'w').write(self.autoencoder.to_yaml()) self.autoencoder.save_weights(weights_path, overwrite=True) def load(self, name='autoencoder'): model_path, weights_path = self._get_paths(name) self.autoencoder = keras.models.model_from_yaml(open(model_path)) self.autoencoder.load_weights(weights_path) sae = StackedAutoencoder(layers=[nb_input, 400, 100, 10], activation='sigmoid', final_activation='sigmoid', init='uniform', dropout=0.2, optimizer='adam') nb_epoch = 3 sae.pretrain(X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1) #sae.compile() sae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1) def draw_sae_prediction(j): X_plot = X_test[j:j+1] prediction = sae.predict(X_plot, verbose=False) plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0]) print(sae.encoder.predict(X_plot, verbose=False)[0]) interact(draw_sae_prediction, j=(0, len(X_test)-1)) plot_10_by_10_images(images_test, figsize=(10,10)) 
sae.evaluate(X_test, X_test, show_accuracy=True) def visualise_filter(model, layer_index, filter_index): from keras import backend as K # build a loss function that maximizes the activation # of the nth filter on the layer considered layer_output = model.layers[layer_index].get_output() loss = K.mean(layer_output[:, filter_index]) # compute the gradient of the input picture wrt this loss input_img = model.layers[0].input grads = K.gradients(loss, input_img)[0] # normalization trick: we normalize the gradient grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5) # this function returns the loss and grads given the input picture iterate = K.function([input_img], [loss, grads]) # we start from a gray image with some noise input_img_data = np.random.random((1,nb_input,)) # run gradient ascent for 20 steps step = 1 for i in range(100): loss_value, grads_value = iterate([input_img_data]) input_img_data += grads_value * step #print("Current loss value:", loss_value) if loss_value <= 0.: # some filters get stuck to 0, we can skip them break print("Current loss value:", loss_value) # decode the resulting input image if loss_value>0: #return input_img_data[0] return input_img_data else: raise ValueError(loss_value) def draw_filter(i): flt = visualise_filter(mlp, 3, 4) #print(flt) plot_mnist_digit(to_images(flt)[0]) interact(draw_filter, i=[0, 9]) Explanation: Stacked Autoencoder End of explanation
12,848
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Feature Engineering Learning Objectives * Improve the accuracy of a model by using feature engineering * Understand there's two places to do feature engineering in Tensorflow 1. Using the tf.feature_column module 2. In the input functions Introduction Up until now we've been focusing on Tensorflow mechanics to make sure our code works, we have neglected model performance, which at this point is 9.26 RMSE. In this notebook we'll attempt to improve on that using feature engineering. Step1: Load raw data These are the same files created in the create_datasets.ipynb notebook Step2: Train and Evaluate input functions These are the same as before with one additional line of code Step3: Feature Engineering Step4: Feature Engineering Step5: Gather list of feature columns Ultimately our estimator expects a list of feature columns, so let's gather all our engineered features into a single list. We cannot pass categorical or crossed feature columns directly into a DNN, Tensorflow will give us an error. We must first wrap them using either indicator_column() or embedding_column(). The former will pass through the one-hot encoded representation as is, the latter will embed the feature into a dense representation of specified dimensionality (the 4th root of the number of categories is a good starting point for number of dimensions). Read more about indicator and embedding columns here. Exercise 3 In the code cell below, create a list containing all the necessary feature columns for our model, including the new feature columns we created above. Hint Step6: Serving Input Receiver function The serving input receiver function will be the same as before except the received tensors are wrapped with add_engineered_features(). Exercise 4 When building the serving_input_receiver_fn() we need to add our engineered features to the dictionary of receiver_tensors we will recieve at inference time. Complete the code below to achieve this by using the add_engineered_features function from above. Note that features should be a dictionary. Have a look at the documentation if you get stuck. Step7: Train and Evaluate (500 train steps) The same as before, we'll train the model for 500 steps (sidenote Step8: Results Our RMSE is now 5.94, our first significant improvement! If we look at the RMSE trend in TensorBoard it appears the model is still learning, so training past 500 steps would likely lower the RMSE even more. Let's run again, this time for 10x as many steps. Train and Evaluate (5,000 train steps) Now, just as above, we'll execute a longer trianing job with 5,000 train steps using our engineered features and assess the performance.
Python Code: import tensorflow as tf import numpy as np import shutil print(tf.__version__) Explanation: Introduction to Feature Engineering Learning Objectives * Improve the accuracy of a model by using feature engineering * Understand there's two places to do feature engineering in Tensorflow 1. Using the tf.feature_column module 2. In the input functions Introduction Up until now we've been focusing on Tensorflow mechanics to make sure our code works, we have neglected model performance, which at this point is 9.26 RMSE. In this notebook we'll attempt to improve on that using feature engineering. End of explanation !gsutil cp gs://cloud-training-demos/taxifare/small/*.csv . !ls -l *.csv Explanation: Load raw data These are the same files created in the create_datasets.ipynb notebook End of explanation CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"] CSV_DEFAULTS = [[0.0],[1],[0],[-74.0],[40.0],[-74.0],[40.7]] def read_dataset(csv_path): def _parse_row(row): # Decode the CSV row into list of TF tensors fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS) # Pack the result into a dictionary features = dict(zip(CSV_COLUMN_NAMES, fields)) # NEW: Add engineered features features = add_engineered_features(features) # Separate the label from the features label = features.pop("fare_amount") # remove label from features and store return features, label # Create a dataset containing the text lines. dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv) dataset = dataset.flat_map(map_func = lambda filename:tf.data.TextLineDataset(filenames = filename).skip(count = 1)) # Parse each CSV row into correct (features,label) format for Estimator API dataset = dataset.map(map_func = _parse_row) return dataset def train_input_fn(csv_path, batch_size = 128): #1. Convert CSV into tf.data.Dataset with (features,label) format dataset = read_dataset(csv_path) #2. Shuffle, repeat, and batch the examples. dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size) return dataset def eval_input_fn(csv_path, batch_size = 128): #1. Convert CSV into tf.data.Dataset with (features,label) format dataset = read_dataset(csv_path) #2.Batch the examples. dataset = dataset.batch(batch_size = batch_size) return dataset Explanation: Train and Evaluate input functions These are the same as before with one additional line of code: a call to add_engineered_features() from within the _parse_row() function. End of explanation # 1. One hot encode dayofweek and hourofday fc_dayofweek = # TODO: Your code goes here fc_hourofday = # TODO: Your code goes here # 2. Bucketize latitudes and longitudes NBUCKETS = 16 latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist() lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist() fc_bucketized_plat = # TODO: Your code goes here fc_bucketized_plon = # TODO: Your code goes here fc_bucketized_dlat = # TODO: Your code goes here fc_bucketized_dlon = # TODO: Your code goes here # 3. Cross features to get combination of day and hour fc_crossed_day_hr = # TODO: Your code goes here Explanation: Feature Engineering: feature columns There are two places in Tensorflow where we can do feature engineering. The first is using the tf.feature_column package. 
This allows us to easily bucketize continuous features, one-hot encode categorical features, and create feature crosses. For details on the possible tf.feature_column transformations and when to use each see the official guide. Let's use tf.feature_column to create a feature that shows the combination of day of week and hour of day. This will allow our model to easily learn the difference between, say, Wednesday at 5pm (rush hour, expect higher fares) and Sunday at 5pm (light traffic, expect lower fares). Let's also use it to bucketize our latitudes and longitudes because treating them as continuous numbers is misleading to the model. Exercise 1 Complete the code in the cell below by adding the necessary tf.feature_columns. There are many different feature columns you can use. Have a look at the various feature columns and follow the links to the relevant documentation. Hint: For the dayofweek and hourofday you'll want to use categorical features. Then have a look at how categorical features can be combined to create a crossed column for fc_day_hr. Lastly, look at the implementation of tf.feature_column.bucketized_column to apply to the pickup and dropoff latitude and longitude. End of explanation def add_engineered_features(features): features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6 features["latdiff"] = # TODO: Your code goes here features["londiff"] = # TODO: Your code goes here features["euclidean_dist"] = # TODO: Your code goes here return features Explanation: Feature Engineering: input functions While feature columns are very powerful, what happens when we want to do something for which there isn't a feature column? Recall the input functions receive CSV data, format it, then pass it batch by batch to the model. We can also use input functions to inject arbitrary TensorFlow code to manipulate the data. However, we need to be careful that any transformations we do in one input function, we do for all, otherwise we'll have training-serving skew. To guard against this we encapsulate all input function feature engineering in a single function, add_engineered_features(), and call this function from every input function. Let's calculate the Euclidean distance between the pickup and dropoff points and feed that as a new feature to our model. Also it may be useful to know which cardinal direction that distance is in. I suspect that distance is cheaper to travel North/South because in Manhattan, streets that run North/South have fewer stops than streets that run East/West. Exercise 2 In the next cell, you're asked to create some new engineered features using the add_engineered_features function below. This function takes a dictionary of features and returns the dictionary with some additional features added. We want to engineer a new feature which captures the Euclidean distance (the straight-line distance) between the pickup and dropoff. Complete the code below to compute the difference in the latitude and the difference in the longitude. Then, use these to compute the Euclidean distance and add that to the features dictionary as well. End of explanation feature_cols = [ #1. Engineered using tf.feature_column module # TODO: Your code goes here #2. Engineered in input functions # TODO: Your code goes here ] Explanation: Gather list of feature columns Ultimately our estimator expects a list of feature columns, so let's gather all our engineered features into a single list.
We cannot pass categorical or crossed feature columns directly into a DNN; TensorFlow will give us an error. We must first wrap them using either indicator_column() or embedding_column(). The former will pass through the one-hot encoded representation as is, while the latter will embed the feature into a dense representation of specified dimensionality (the 4th root of the number of categories is a good starting point for the number of dimensions). Read more about indicator and embedding columns here. Exercise 3 In the code cell below, create a list containing all the necessary feature columns for our model, including the new feature columns we created above. Hint: You will need to use an indicator_column() to wrap any categorical or crossed feature columns. Take a look at the documentation for tf.feature_column.indicator_column. End of explanation def serving_input_receiver_fn(): receiver_tensors = { 'dayofweek' : tf.placeholder(dtype = tf.int32, shape = [None]), # shape is vector to allow batch of requests 'hourofday' : tf.placeholder(dtype = tf.int32, shape = [None]), 'pickuplon' : tf.placeholder(dtype = tf.float32, shape = [None]), 'pickuplat' : tf.placeholder(dtype = tf.float32, shape = [None]), 'dropofflat' : tf.placeholder(dtype = tf.float32, shape = [None]), 'dropofflon' : tf.placeholder(dtype = tf.float32, shape = [None]), } features = # TODO: Your code goes here return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = receiver_tensors) Explanation: Serving Input Receiver function The serving input receiver function will be the same as before except the received tensors are wrapped with add_engineered_features(). Exercise 4 When building the serving_input_receiver_fn() we need to add our engineered features to the dictionary of receiver_tensors we will receive at inference time. Complete the code below to achieve this by using the add_engineered_features function from above. Note that features should be a dictionary. Have a look at the documentation if you get stuck.
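One possible way to complete Exercise 4 (a sketch of the intent rather than the official solution) is to run the received tensors through the same add_engineered_features() used by the input functions, so that serving-time requests see exactly the same transformations as the training data:
# hypothetical completion of Exercise 4: reuse the training-time feature engineering at serving time
features = add_engineered_features(receiver_tensors.copy())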
End of explanation %%time OUTDIR = "taxi_trained_dnn/500" shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training model = tf.estimator.DNNRegressor( hidden_units = [10,10], # specify neural architecture feature_columns = feature_cols, model_dir = OUTDIR, config = tf.estimator.RunConfig( tf_random_seed = 1, # for reproducibility save_checkpoints_steps = 100 # checkpoint every N steps ) ) # Add custom evaluation metric def my_rmse(labels, predictions): pred_values = tf.squeeze(input = predictions["predictions"], axis = -1) return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)} model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse) train_spec = tf.estimator.TrainSpec( input_fn = lambda: train_input_fn("./taxi-train.csv"), max_steps = 500) exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training # Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint eval_spec = tf.estimator.EvalSpec( input_fn = lambda: eval_input_fn("./taxi-valid.csv"), steps = None, start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120) throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600) exporters = exporter) # export SavedModel once at the end of training tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec) Explanation: Train and Evaluate (500 train steps) The same as before, we'll train the model for 500 steps (sidenote: how many epochs do 500 trains steps represent?). Let's see how the engineered features we've added affect the performance. 
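To answer the sidenote: one training step consumes one batch, so 500 steps with batch_size = 128 touch 500 * 128 = 64,000 examples, and dividing that by the number of rows in taxi-train.csv gives the number of epochs. A quick back-of-envelope check (the row count is read from the file itself, so no value is assumed here):
# rough epoch count for the 500-step run; subtract 1 for the CSV header row
num_train_examples = sum(1 for _ in open("taxi-train.csv")) - 1
print("Approximately {:.2f} epochs".format(500 * 128 / float(num_train_examples)))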
End of explanation %%time OUTDIR = "taxi_trained_dnn/5000" shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training model = tf.estimator.DNNRegressor( hidden_units = [10,10], # specify neural architecture feature_columns = feature_cols, model_dir = OUTDIR, config = tf.estimator.RunConfig( tf_random_seed = 1, # for reproducibility save_checkpoints_steps = 100 # checkpoint every N steps ) ) # Add custom evaluation metric def my_rmse(labels, predictions): pred_values = tf.squeeze(input = predictions["predictions"], axis = -1) return {"rmse": tf.metrics.root_mean_squared_error(labels = labels, predictions = pred_values)} model = tf.contrib.estimator.add_metrics(estimator = model, metric_fn = my_rmse) train_spec = tf.estimator.TrainSpec( input_fn = lambda: train_input_fn("./taxi-train.csv"), max_steps = 5000) exporter = tf.estimator.FinalExporter(name = "exporter", serving_input_receiver_fn = serving_input_receiver_fn) # export SavedModel once at the end of training # Note: alternatively use tf.estimator.BestExporter to export at every checkpoint that has lower loss than the previous checkpoint eval_spec = tf.estimator.EvalSpec( input_fn = lambda: eval_input_fn("./taxi-valid.csv"), steps = None, start_delay_secs = 1, # wait at least N seconds before first evaluation (default 120) throttle_secs = 1, # wait at least N seconds before each subsequent evaluation (default 600) exporters = exporter) # export SavedModel once at the end of training tf.estimator.train_and_evaluate(estimator = model, train_spec = train_spec, eval_spec = eval_spec) Explanation: Results Our RMSE is now 5.94, our first significant improvement! If we look at the RMSE trend in TensorBoard it appears the model is still learning, so training past 500 steps would likely lower the RMSE even more. Let's run again, this time for 10x as many steps. Train and Evaluate (5,000 train steps) Now, just as above, we'll execute a longer trianing job with 5,000 train steps using our engineered features and assess the performance. End of explanation
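If you prefer to read the final validation RMSE directly rather than scanning the training log, the trained estimator can be evaluated once more after train_and_evaluate finishes; this reuses the my_rmse metric attached above and is only a convenience sketch, not part of the original notebook flow.
metrics = model.evaluate(input_fn = lambda: eval_input_fn("./taxi-valid.csv"))
print("RMSE on the validation set = {:.2f}".format(metrics["rmse"]))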
12,849
Given the following text description, write Python code to implement the functionality described below step by step Description: 7 Clustering Goal Step1: Efficiency The algorithm is $O(n^3) = \sum_{i=n}^{2} C_n^2$, since it computes the distances between each pair of clusters in iteration. Optimize Step2: 7.3.2 Initializing Clusters for K-Means We want to pick points that have a good chance of lying in different clusters. two approaches Step3: We can use a binary search to find the best values for $k$. Step4: 7.3.4 The Algorithm of Bradley, Fayyad, and Reina designed to cluster data in a high-dimensional Euclidean space. strong assumption about the shape of clusters Step5: The points of the data file are read in chunks. The main-memory data other than the chunk consists of three types of objects Step6: The discard and compressed sets are represented by $2d + 1$ values, if the data is $d$-dimensional. These numbers are
Python Code: # Example 7.2 logger.setLevel('WARN') points = np.array([ [4, 10], [7, 10], [4, 8], [6, 8], [3, 4], [10, 5], [12, 6], [11, 4], [2, 2], [5, 2], [9, 3], [12, 3] ], dtype=np.float ) x, y = points[:,0], points[:,1] cluster = range(len(x)) #cluster_colors = plt.get_cmap('hsv')(np.linspace(0, 1.0, len(cluster))) cluster_colors = sns.color_palette("hls", len(cluster)) plt.scatter(x, y, c=map(lambda x: cluster_colors[x], cluster)) df_points = pd.DataFrame({ 'x': x, 'y': y, 'cluster': cluster } ) df_points logger.setLevel('WARN') class Hierarchical_cluster(): def __init__(self): pass def clustroid_calc(self, df_points, calc_func=np.mean): clustroid = df_points.groupby('cluster').aggregate(calc_func) logger.info('\n clustroid:{}'.format(clustroid)) return clustroid def candidate_merge(self, clustroid): from scipy.spatial.distance import pdist, squareform clustroid_array = clustroid.loc[:,['x','y']].as_matrix() dist = squareform(pdist(clustroid_array, 'euclidean')) cluster = clustroid.index df_dist = pd.DataFrame(dist, index=cluster, columns=cluster) df_dist.replace(0, np.nan, inplace=True) logger.info('\n dist:{}'.format(df_dist)) flat_index = np.nanargmin(df_dist.as_matrix()) candidate_iloc = np.unravel_index(flat_index, df_dist.shape) candidate_loc = [cluster[x] for x in candidate_iloc] logger.info('candidate cluster:{}'.format(candidate_loc)) new_cluster, old_cluster = candidate_loc return new_cluster, old_cluster def combine(self, df_points, show=False): clustroid = self.clustroid_calc(df_points) new_cluster, old_cluster = self.candidate_merge(clustroid) df_points.cluster.replace(old_cluster, new_cluster, inplace=True) new_order, old_order = df_points.merge_order[[new_cluster, old_cluster]] df_points.merge_order[new_cluster] = {'l': new_order, 'r': old_order} if show: plt.figure() plt.scatter(df_points.x, df_points.y, c=map(lambda x: cluster_colors[x], df_points.cluster)) return df_points def cluster(self, df_points, cluster_nums=1, show=False): assert cluster_nums > 0, 'The number of cluster should be positive.' df_points['merge_order'] = [[x] for x in range(len(df_points.x))] while len(set(df_points.cluster)) > cluster_nums: df_points = self.combine(df_points, show) logger.setLevel('WARN') df_p = df_points.copy() test = Hierarchical_cluster() test.cluster(df_p, 1, show=True) import json print json.dumps(df_p.merge_order[0], sort_keys=True, indent=4) Explanation: 7 Clustering Goal: points in the same cluster have a small distance from one other, while points in different clusters are at a large distance from one another. 7.1 Introduction to Clustering Techniques 7.1.1 Points, Spaces, Distances A dataset suitable for clustering is a collection of points, which are objects belonging to some space. distance measure: 1. nonnegative. 2. symmetric. 3. obey the triangle inequality. 7.1.2 Clustering Strategies two groups: 1. Hierarchinal or agglomerative algorithms. Combine, bottom-to-top. Point assignment. iteration A key distinction: Euclidean space can summarize a collection of points by their centroid. 7.1.3 The Curse of Dimensionality It refers that a number of unintuitive properties of high-dimensional Euclidean space. Almost all pairs of points are equally far away from one another. Almost any two vectors are almost orthogomal. 
%todo: Proof 7.1.4 Exercises for Section 7.1 7.1.1 \begin{align} E[d(x,y)] &= \int_{y=0}^1 \int_{x=0}^1 |x - y| \, \mathrm{d}x \, \mathrm{d}y \ &= \int_{y=0}^1 \int_{x=0}^{y} (y-x) \, \mathrm{d}x + \int_{x=y}^{1} (x-y) \, \mathrm{d}x \, \mathrm{d}y \ &= \int_{y=0}^1 \frac{1}{2} y^2 + \frac{1}{2} (1-y)^2 \, \mathrm{d}y \ &= \frac{1}{3} \end{align} 7.1.2 Because: $$\sqrt{\frac{{|x_1|}^2+{|x_2|}^2}{2}} \geq \frac{|x_1|+|x_2|}{2}$$ We have: $$\sqrt{\frac{{|x_1 - x_2|}^2+{|y_1 - y_2|}^2}{2}} \geq \frac{|x_1 - x_2|+|y_1 - y_2|}{2}$$ So: $$E[d(\mathbf{x}, \mathbf{y})] \geq \frac{\sqrt{2}}{3}$$ While: $$\sqrt{{|x_1 - x_2|}^2+{|y_1 - y_2|}^2} \leq |x_1 - x_2|+|y_1 - y_2|$$ So: $$E[d(\mathbf{x}, \mathbf{y})] \leq \frac{2}{3}$$ Above all: $$\frac{\sqrt{2}}{3} \leq E[d(\mathbf{x}, \mathbf{y})] \leq \frac{2}{3}$$ 7.1.3 for $x_i y_i$ of numerator, there are four cases: $1=1\times1, 1=-1\times-1$ and $-1=1\times-1, -1=-1\times1$. So both 1 and -1 are $\frac{1}{2}$ probility. So the expected value of their sum is 0. Hence, the expected value of cosine is 0, as $d$ grows large. 7.2 Hierarchinal Clustering This algorithm can only be used for relatively small datasets. procedure: We begin with every point in its own cluster. As time goes on, larger clusters will be constructed by combining two smaller clusters. Hence we have to decide in advance: How to represent cluster? For Euclidean space, use centriod. For Non-Euclidean space, use clustroid. clustroid: the point is close to all the points of the cluster. minimizes the sum of the distance to the other points. minimizes the maximum distance to another point. minimizes the sum of the squares of the distances to the other points. How to choose clusters to merge? shortest distance between clusters. the minimum of the distance between any two points. the average distance of all pairs of points. Combine the two clusters whose resulting cluster has the lowerst radius(the maximum distance between all the points and the centriod). modification: lowest average distance between a point and the centriod. the sum of the squares of the distances between the points and the centriod. Cobine the two clusters whose resulting cluster has the smallest diameter(the maximum distance between any two points of the cluster). The radius and diameter are not related directly, but there is a tendecy for them to be proportional. When to stop? how many clusters expected? When at some point the best combination of existing clusters produces a cluster that is inadequate. threshold of average distance of points to its centriod. threshold of the diameter of the new cluster. threshold of the density of the new cluster. track the average diameter of all the current clusters. stop if take a sudden jump. reach one cluster. $\to$ tree. eg. genome $\to$ common ancestor. There is no substantial change in the option for stopping citeria and combining citeria when we move from Euclidean to Non-Euclidean spaces. End of explanation plt.figure(figsize=(10,16)) plt.imshow(plt.imread('./res/fig7_7.png')) Explanation: Efficiency The algorithm is $O(n^3) = \sum_{i=n}^{2} C_n^2$, since it computes the distances between each pair of clusters in iteration. Optimize: 1. At first, computing the distance between all pairs. $O(n^2)$. Save the distances information into a priority queue, in order to get the smallest distance in one step. $O(n^2)$. When merging two clusters, we remove all entries involving them in the priority queue. $O(n \lg n) = 2n \times O(\lg n)$. 
Compute all the distances between the new cluster and the remaining clusters. 7.3 K-means Algorithms Assumptions: 1. Euclidean space; 2. $k$ is known in advance. The heart of the algortim is the for-loop, in which we consider each point and assign it to the "closest" cluster. End of explanation plt.figure(figsize=(10,16)) plt.imshow(plt.imread('./res/fig7_9.png')) Explanation: 7.3.2 Initializing Clusters for K-Means We want to pick points that have a good chance of lying in different clusters. two approaches: Cluster a sample of the data, and pick a point from the $k$ clusters. Pick points that are as far away from one another as possible. Pick the first point at random; WHILE there are fewer than k points DO ADD the point whose minimum disance from the selected points is as large as possible; END; 7.3.3 Picking the Right Value of k If we take a measure of appropriateness for clusters, then we can use it to measure the quality of the clustering for various values of $k$ and so the right value of $k$ is guessed. End of explanation plt.scatter(df_points.x, df_points.y) df_points['cluster'] = 0 df_points logger.setLevel('WARN') class k_means_cluster(): def __init__(self, max_itier=15): self.max_itier = max_itier def pick_init_points(self, df_points, k): df_clusters = df_points.sample(k, axis=0) df_clusters.reset_index(drop=True, inplace=True) df_clusters['cluster'] = df_clusters.index return df_clusters def assign_point_to_cluster(self, point, df_clusters): from scipy.spatial.distance import cdist logger.info('\n point:{}\n df_clusters:{}'.format(point, df_clusters)) p = point[['x','y']].to_frame().T c = df_clusters.loc[:,['x','y']] logger.info('\n p:{}\n c:{}'.format(p, c)) dist = cdist(p, c) logger.info('dist:{}'.format(dist)) cluster = np.argmin(dist) logger.info('cluster:{}'.format(cluster)) return pd.Series([cluster, point.x, point.y], index=['cluster', 'x', 'y']) def calc_centriod(self, df_points): centroid = df_points.groupby('cluster').mean() logger.info('\n centroid:\n{}'.format(centroid)) return centroid def cluster(self, df_points, k): df_clusters = self.pick_init_points(df_points, k) for _ in xrange(self.max_itier): df_points = df_points.apply(self.assign_point_to_cluster, args=(df_clusters,), axis=1) logger.info('iter: \n df_points:\n{}'.format(df_points)) clusters = self.calc_centriod(df_points) #todo: stop condition df_clusters = clusters return df_points, df_clusters test = k_means_cluster() k = 3 cluster_colors = sns.color_palette("hls", k) df_points_res, df_clusters_res = test.cluster(df_points, k) plt.scatter(df_points_res.x, df_points_res.y, c=map(lambda x: cluster_colors[x], df_points_res.cluster.astype(np.int))) Explanation: We can use a binary search to find the best values for $k$. End of explanation plt.imshow(plt.imread('./res/fig7_10.png')) Explanation: 7.3.4 The Algorithm of Bradley, Fayyad, and Reina designed to cluster data in a high-dimensional Euclidean space. strong assumption about the shape of clusters: normally distributed about a centriod. the dimensions must be independent. End of explanation plt.imshow(plt.imread('./res/fig7_11.png')) Explanation: The points of the data file are read in chunks. The main-memory data other than the chunk consists of three types of objects: The Discard Set: simple summaries of the clusters themselves. The Compressed Set: summaries of the points that have been found close to one another, but not close to any cluster. Each represented set of points is called a minicluster. The Retained Set: remaining points. 
End of explanation plt.figure(figsize=(10,5)) plt.imshow(plt.imread('./res/fig7_12.png')) plt.figure(figsize=(12,5)) plt.imshow(plt.imread('./res/fig7_13.png')) plt.figure(figsize=(12,5)) plt.imshow(plt.imread('./res/fig7_14.png')) Explanation: The discard and compressed sets are represented by $2d + 1$ values, if the data is $d$-dimensional. These numbers are: The number of points represented, $N$. The sum of the components of all the points in each dimension. a vector $SUM$ of length $d$. The sum of the squares of the components of all the points in each dimension. a vector $SUMSQ$ of length $d$. Our real goal is to represent a set of points by their count, their centroid and the standard deviation in each dimension. count: $N$. centriod: $SUM_i / N$. standard deviation: $SUMSQ_i / N - (SUM_i / N)^2$ 7.3.5 Processing Data in the BFR algorithm First, all points that are sufficiently close to the centriod of a cluster are added to that cluster, Discard Set. For the points that are not sufficiently close to any centriod, we cluster them, along with the points in the retained set. Clusters of more than one point are summarized and added to the Compressed Set. Singleton clusters become the Retained Set of points. Merge minicusters of compressed set if possible. Points of discard and compressed set are written out to secondary memory. Finally, if this is the last chunk of input data, we need do something with the compressed and retained set. Treat them as outliers. Assign them to the nearest cluster. Combine minclusters of compressed set. How to decide whether a new point $p$ is close enough to a cluster? Add $p$ to a cluster if it not only has the centriod closest to $p$, but it is very unlikely that, after all the points have been processed, some other cluster centriod will be found to be nearer to $p$. complex statiscal calculation. We can measure the probability that, if $p$ belongs to a cluster, it would be found as far as it is from the centriod of that cluster. normally distributed, independent $\to$ Mahalanobis distance: Let $p = [p_1, p_2, \dotsc, p_d]$ be a point and $c = [c_1, c_2, \dotsc, c_d]$ be the centriod of a cluster. $$\sqrt{\sum_{i=1}^{d} ( \frac{p_i - c_i}{\sigma_i} )^2 }$$ We choose that cluster whose centriod has the least Mahalanobis distance, and we add $p$ to that cluster provided the Mahalanobis distance is less than a threshold. 概率论上足够接近。 7.3.6 Exercises for Section 7.3 #todo 7.4 The CURE Algorithm CURE(Clustering Using REpresentatives) assumes a Euclidean space it does not assume anything about the shape of clusters. Process:(key factor: fixed fraction for moving and how close is sufficient to merge). Initialization Sample, and then cluster. Hierarchial clustering is advisable. pick a subset as representative points(as far from one another as possible in the same cluster). Move each of the representative poins a fixed fraction of the distance between its location and the centriod of its cluster. (Euclidean space). Merge: if two clusters have a pair of representative points that are sufficiently close. Repeat until convergence. Point assignment: We assign $p$ to the cluster of the representative point that is closest to $p$. End of explanation
12,850
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook shows how to use an index file.<br/> This example uses the index file from the Mediterranean Sea region (INSITU_MED_NRT_OBSERVATIONS_013_035) corresponding to the latest data.<br/> If you download the same file, the results will be slightly different from what is shown here. Step1: To read the index file (comma separated values), we will try with the genfromtxt function. Step2: Map of observations Step3: We import the modules necessary for the plot. Step4: We create the projection, centered on the Mediterranean Sea in this case. Step5: And we create a plot showing all the data locations. Step6: Selection of a data file based on coordinates Let's assume we want to have the list of files corresponding to measurements off the northern of Lybia.<br/> We define a rectangular box containg the data Step7: then we look for the observations within this box Step8: The generation of the file list is direct Step9: According to the file names, we have 7 profiling drifters available in the area. <br/> To check, we replot the data only in the selected box
Python Code: indexfile = "./datafiles/index_latest.txt" Explanation: This notebook shows how to use an index file.<br/> This example uses the index file from the Mediterranean Sea region (INSITU_MED_NRT_OBSERVATIONS_013_035) corresponding to the latest data.<br/> If you download the same file, the results will be slightly different from what is shown here. End of explanation import numpy as np dataindex = np.genfromtxt(indexfile, skip_header=6, unpack=True, delimiter=',', dtype=None, \ names=['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max', 'geospatial_lon_min', 'geospatial_lon_max', 'time_coverage_start', 'time_coverage_end', 'provider', 'date_update', 'data_mode', 'parameters']) Explanation: To read the index file (comma separated values), we will try with the genfromtxt function. End of explanation lon_min = dataindex['geospatial_lon_min'] lon_max = dataindex['geospatial_lon_max'] lat_min = dataindex['geospatial_lat_min'] lat_max = dataindex['geospatial_lat_max'] Explanation: Map of observations End of explanation %matplotlib inline import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap Explanation: We import the modules necessary for the plot. End of explanation m = Basemap(projection='merc', llcrnrlat=30., urcrnrlat=46., llcrnrlon=-10, urcrnrlon=40., lat_ts=38., resolution='i') lonmean, latmean = 0.5*(lon_min + lon_max), 0.5*(lat_min + lat_max) lon2plot, lat2plot = m(lonmean, latmean) Explanation: We create the projection, centered on the Mediterranean Sea in this case. End of explanation fig = plt.figure(figsize=(10,8)) m.plot(lon2plot, lat2plot, 'ko', markersize=2) m.drawcoastlines(linewidth=0.5, zorder=3) m.fillcontinents(zorder=2) m.drawparallels(np.arange(-90.,91.,2.), labels=[1,0,0,0], linewidth=0.5, zorder=1) m.drawmeridians(np.arange(-180.,181.,3.), labels=[0,0,1,0], linewidth=0.5, zorder=1) plt.show() Explanation: And we create a plot showing all the data locations. End of explanation box = [12, 15, 32, 34] Explanation: Selection of a data file based on coordinates Let's assume we want to have the list of files corresponding to measurements off the northern of Lybia.<br/> We define a rectangular box containg the data: End of explanation import numpy as np goodcoordinates = np.where( (lonmean>=box[0]) & (lonmean<=box[1]) & (latmean>=box[2]) & (latmean<=box[3])) print goodcoordinates Explanation: then we look for the observations within this box: End of explanation goodfilelist = dataindex['file_name'][goodcoordinates] print goodfilelist Explanation: The generation of the file list is direct: End of explanation m2 = Basemap(projection='merc', llcrnrlat=32., urcrnrlat=34., llcrnrlon=12, urcrnrlon=15., lat_ts=38., resolution='h') lon2plot, lat2plot = m2(lonmean[goodcoordinates], latmean[goodcoordinates]) fig = plt.figure(figsize=(10,8)) m2.plot(lon2plot, lat2plot, 'ko', markersize=4) m2.drawcoastlines(linewidth=0.5, zorder=3) m2.fillcontinents(zorder=2) m2.drawparallels(np.arange(-90.,91.,0.5), labels=[1,0,0,0], linewidth=0.5, zorder=1) m2.drawmeridians(np.arange(-180.,181.,0.5), labels=[0,0,1,0], linewidth=0.5, zorder=1) plt.show() Explanation: According to the file names, we have 7 profiling drifters available in the area. <br/> To check, we replot the data only in the selected box: End of explanation
12,851
Given the following text description, write Python code to implement the functionality described below step by step Description: Product SVD in Python In this NoteBook, the reader will find code to load GeoTiff files, single- or multi-band, from HDFS. It reads the GeoTiffs as a ByteArrays and then stores the GeoTiffs in memory using MemFile from the RasterIO Python package. Subsequently, a statistical analysis is performed on each pair of datasets. In particular, the Python module productsvd is used to determine the SVD of the product of the two phenology datasets. Initialization This section initializes the notebook. Dependencies Here, all necessary libraries are imported. Step1: Configuration This configuration determines whether functions print logs during the execution. Step2: Connect to Spark Here, the Spark context is loaded, which allows for a connection to HDFS. Step3: Functions This section defines various functions used in the analysis. Support functions These functions support other functions. Step4: Read functions These functions allow for the reading of data. Step5: Utility functions These functions analyse and manipulate data. Step6: Write functions These functions write data and plots. Step7: Analysis function This function combines all the necessary steps for the analysis. Step8: Analyses In this section, the various analyses are initiated. Each analysis uses a different pair of datasets. Analysis 0 Step9: Analysis 1 This analysis focusses on Bloom and Leaf data from the USA from 1980 to 2016 at a 4K spatial resolution. Step10: Analysis 2 This analysis focusses on Bloom and SOS data from the USA from 1980 to 2016 at a 4K spatial resolution. Step11: Analysis 3 This analysis focusses on Leaf and SOS data from the USA from 1980 to 2016 at a 4K spatial resolution. Step12: Analysis 4 This analysis focusses on BloomFinalLowPR and SOSTLowPR data from the USA from 1989 to 2014 1Km resolution. Step13: Analysis 5 This analysis focusses on LeafFinalLowPR and SOSTLowPR data from the USA from 1989 to 2014 1Km resolution. Step14: Analysis 6 This analysis focusses on BloomFinalLowPR and LeafFinalLowPR data from the USA from 1989 to 2014 1Km resolution.
Python Code: #Add all dependencies to PYTHON_PATH import sys sys.path.append("/usr/lib/spark/python") sys.path.append("/usr/lib/spark/python/lib/py4j-0.10.4-src.zip") sys.path.append("/usr/lib/python3/dist-packages") sys.path.append("/data/local/jupyterhub/modules/python") #Define environment variables import os os.environ["HADOOP_CONF_DIR"] = "/etc/hadoop/conf" os.environ["PYSPARK_PYTHON"] = "python3" os.environ["PYSPARK_DRIVER_PYTHON"] = "ipython" import subprocess #Load PySpark to connect to a Spark cluster from pyspark import SparkConf, SparkContext from hdfs import InsecureClient from tempfile import TemporaryFile #from osgeo import gdal #To read GeoTiffs as a ByteArray from io import BytesIO from rasterio.io import MemoryFile import numpy as np import pandas import datetime import matplotlib.pyplot as plt import rasterio from rasterio import plot from os import listdir from os.path import isfile, join from numpy import exp, log from numpy.random import standard_normal import scipy.linalg from productsvd import qrproductsvd from sklearn.utils.extmath import randomized_svd Explanation: Product SVD in Python In this NoteBook, the reader will find code to load GeoTiff files, single- or multi-band, from HDFS. It reads the GeoTiffs as a ByteArrays and then stores the GeoTiffs in memory using MemFile from the RasterIO Python package. Subsequently, a statistical analysis is performed on each pair of datasets. In particular, the Python module productsvd is used to determine the SVD of the product of the two phenology datasets. Initialization This section initializes the notebook. Dependencies Here, all necessary libraries are imported. End of explanation debugMode = True maxModes = 26 Explanation: Configuration This configuration determines whether functions print logs during the execution. End of explanation appName = "plot_GeoTiff" masterURL = "spark://pheno0.phenovari-utwente.surf-hosted.nl:7077" #A context needs to be created if it does not already exist try: sc.stop() except NameError: print("A new Spark Context will be created.") sc = SparkContext(conf = SparkConf().setAppName(appName).setMaster(masterURL)) conf = sc.getConf() Explanation: Connect to Spark Here, the Spark context is loaded, which allows for a connection to HDFS. End of explanation def dprint(msg): if (debugMode): print(str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")) + " | " + msg) def progressBar(message, value, endvalue, bar_length = 20): if (debugMode): percent = float(value) / endvalue arrow = '-' * int(round(percent * bar_length)-1) + '>' spaces = ' ' * (bar_length - len(arrow)) sys.stdout.write("\r" + str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")) + " | " + message + ": [{0}] {1}%".format(arrow + spaces, int(round(percent * 100))) ) if value == endvalue: sys.stdout.write("\n") sys.stdout.flush() def get_hdfs_client(): return InsecureClient("emma0.emma.nlesc.nl:50070", user="pheno", root="/") Explanation: Functions This section defines various functions used in the analysis. Support functions These functions support other functions. 
End of explanation def getDataSet(directoryPath, bandNum): dprint("Running getDataSet(directoryPath)") files = sc.binaryFiles(directoryPath + "/*.tif") fileList = files.keys().collect() dprint("Number of files: " + str(len(fileList))) dataSet = [] plotShapes = [] flattenedShapes = [] for i, f in enumerate(sorted(fileList)): print(f) #progressBar("Reading files", i + 1, len(fileList)) data = files.lookup(f) dataByteArray = bytearray(data[0]) memfile = MemoryFile(dataByteArray) dataset = memfile.open() relevantBand = np.array(dataset.read()[bandNum]) memfile.close() plotShapes.append(relevantBand.shape) flattenedDataSet = relevantBand.flatten() flattenedShapes.append(flattenedDataSet.shape) dataSet.append(flattenedDataSet) dataSet = np.array(dataSet).T dprint("dataSet.shape: " + str(dataSet.shape)) dprint("Ending getDataSet(directoryPath)") return dataSet def getMask(filePath): dprint("Running getMask(filePath)") mask_data = sc.binaryFiles(filePath).take(1) mask_byteArray = bytearray(mask_data[0][1]) mask_memfile = MemoryFile(mask_byteArray) mask_dataset = mask_memfile.open() maskTransform = mask_dataset.transform mask_data = np.array(mask_dataset.read()[0]) mask_memfile.close() dprint("mask_data.shape: " + str(mask_data.shape)) dprint("Ending getMask(filePath)") return mask_data, maskTransform Explanation: Read functions These functions allow for the reading of data. End of explanation def filterDataSet(dataSet, maskData): dprint("Running filterDataSet(dataSet, maskIndex)") maskIndex = np.nonzero(np.nan_to_num(maskData.flatten()))[0] dataSetFiltered = dataSet[maskIndex] dprint("dataSetFiltered.shape: " + str(dataSetFiltered.shape)) dprint("Ending filterDataSet(dataSet, maskIndex)") return dataSetFiltered def validateNorms(dataSet1, dataSet2, U, s, V): dprint("Running validateNorms(dataSet1, dataSet2, U, s, V)") length = len(s) norms = [] for i in range(length): progressBar("Validating norms", i + 1, length) u = dataSet1 @ (dataSet2.T @ V.T[i]) / s[i] v = dataSet2 @ (dataSet1.T @ U.T[i]) / s[i] norms.append(scipy.linalg.norm(U.T[i] - u)) norms.append(scipy.linalg.norm(V.T[i] - v)) dprint("Largest norm difference: " + str(max(norms))) dprint("Ending validateNorms(dataSet1, dataSet2, U, s, V)") Explanation: Utility functions These functions analyse and manipulate data. 
End of explanation def writeCSVs(resultDirectory, U, s, V): dprint("Running writeCSV(resultDirectory, U, s, V)") for i, vectorData in enumerate([U, s, V]): progressBar("Writing CSV", i + 1, 3) fileName = ["U", "s", "V"][i] + ".csv" inFile = "/tmp/" + fileName outFile = resultDirectory + fileName #decompositionFile = open(inFile, "w") #vectorData.T.tofile(decompositionFile, sep = ",") #decompositionFile.close() #np.savetxt(inFile, vectorData.T, fmt='%.12f', delimiter=',') np.savetxt(inFile, vectorData.T, delimiter=',') #Upload to HDFS subprocess.run(['hadoop', 'dfs', '-copyFromLocal', '-f', inFile, outFile]) #Remove from /tmp/ subprocess.run(['rm', '-fr', inFile]) dprint("Ending writeCSV(resultDirectory, U, s, V)") def plotSingularValues(resultDirectory, s): dprint("Running plotSingularValues(resultDirectory, s)") fileName = "s.pdf" inFile = "/tmp/" + fileName outFile = resultDirectory + fileName x = range(len(s)) total = s.T @ s cumulativeValue = 0 valueList = [] cumulativeList = [] for i in x: value = np.square(s[i]) / total valueList.append(value) cumulativeValue = cumulativeValue + value cumulativeList.append(cumulativeValue) fig, ax1 = plt.subplots() ax2 = ax1.twinx() ax1.plot(x, valueList, "g^") ax2.plot(x, cumulativeList, "ro") ax1.set_xlabel("Singular values") ax1.set_ylabel("Variance explained", color = "g") ax2.set_ylabel("Cumulative variance explained", color = "r") plt.savefig(inFile) plt.clf() #Upload to HDFS subprocess.run(['hadoop', 'dfs', '-copyFromLocal', '-f', inFile, outFile]) #Remove from /tmp/ subprocess.run(['rm', '-fr', inFile]) dprint("Ending plotSingularValues(resultDirectory, s)") def writeModes(resultDirectory, U, s, V): dprint("Running writeModes(resultDirectory, U, s, V)") for i in range(len(s)): progressBar("Writing modes", i + 1, len(s)) fileName = "Mode" + str(i + 1).zfill(2) + ".txt" inFile = "/tmp/" + fileName outFile = resultDirectory + fileName decompositionFile = open(inFile, "w") U.T[i].tofile(decompositionFile, sep = ",") decompositionFile.close() decompositionFile = open(inFile, "a") decompositionFile.write("\n") s[i].tofile(decompositionFile, sep = ",") decompositionFile.write("\n") V.T[i].tofile(decompositionFile, sep = ",") decompositionFile.close() #Upload to HDFS subprocess.run(['hadoop', 'dfs', '-copyFromLocal', '-f', inFile, outFile]) #Remove from /tmp/ subprocess.run(['rm', '-fr', inFile]) dprint("Ending writeModes(resultDirectory, U, s, V)") def plotModes(resultDirectory, U, s, V, maskData, maskTransform): dprint("Running plotModes(resultDirectory, U, s, V, maskData, maskTransform)") plotTemplate = np.full(maskData.shape[0] * maskData.shape[1], np.nan, dtype=np.float64) maskIndex = np.nonzero(np.nan_to_num(maskData.flatten()))[0] for i in range(min(maxModes, len(s))): progressBar("Plotting modes", i + 1, min(maxModes, len(s))) for vectorData, vectorName in zip([U, V], ["U", "V"]): data = np.copy(plotTemplate) np.put(data, maskIndex, vectorData.T[i]) data = np.reshape(data, maskData.shape, ) fileName = "Mode" + vectorName + str(i + 1).zfill(2) + ".tif" inFile = "/tmp/" + fileName outFile = resultDirectory + fileName rasterioPlot = rasterio.open(inFile, "w", driver = "GTiff", width = data.shape[1], height = data.shape[0], count = 1, dtype = data.dtype, crs = "EPSG:4326", transform = maskTransform) #, compress="deflate") rasterioPlot.write(data, 1) rasterioPlot.close() #Upload to HDFS subprocess.run(['hadoop', 'dfs', '-copyFromLocal', '-f', inFile, outFile]) #Remove from /tmp/ subprocess.run(['rm', '-fr', inFile]) dprint("Ending 
plotModes(resultDirectory, U, s, V, maskData, maskTransform)") Explanation: Write functions These functions write data and plots. End of explanation import scipy.linalg import numpy as np from numpy import linalg as LA from sklearn.decomposition import PCA def qrproductsvdRG(A, B): QA, RA = scipy.linalg.qr(A, mode = "economic") dprint("QB.shape: " + str(QA.shape)) dprint("RB.shape: " + str(RA.shape)) QB, RB = scipy.linalg.qr(B, mode = "economic") dprint("QB.shape: " + str(QB.shape)) dprint("RB.shape: " + str(RB.shape)) #C = RA @ RB.T C = A @ B.T dprint("C.shape: " + str(C.shape)) #UC, s, VCt = scipy.linalg.svd(C, full_matrices = False) U, s, Vt = scipy.linalg.svd(C, full_matrices = True) #U = QA @ UC #Vt = VCt @ QB.T return U, s, Vt def runAnalysis(dataDirectory1, dataDirectory2, bandNum1, bandNum2, maskFile, resultDirectory): dprint("Running runAnalysis(dataDirectory1, dataDirectory2, maskFile, resultDirectory)") dataSet1 = getDataSet(dataDirectory1, bandNum1) dataSet2 = getDataSet(dataDirectory2, bandNum2) if (dataSet2.shape[1] == 26 and dataSet1.shape[1] != 26): # Hack to align time-dimension of SOS with Bloom and Leaf dataSet1 = dataSet1[:, 9:35] maskData, maskTransform = getMask(maskFile) dataSetFiltered1 = filterDataSet(dataSet1, maskData) dataSetFiltered2 = filterDataSet(dataSet2, maskData) U, s, Vt = qrproductsvd(dataSetFiltered1, dataSetFiltered2) V = Vt.T dprint("U.shape: " + str(U.shape)) dprint("s.shape: " + str(s.shape)) dprint("V.shape: " + str(V.shape)) dprint("Singular values of product: ") dprint(str(s)) validateNorms(dataSetFiltered1, dataSetFiltered2, U, s, V) plotSingularValues(resultDirectory, s) #writeModes(resultDirectory, U, s, V) plotModes(resultDirectory, U, s, V, maskData, maskTransform) writeCSVs(resultDirectory, U, s, V) dprint("Ending runAnalysis(dataDirectory1, dataDirectory2, maskFile, resultDirectory)") Explanation: Analysis function This function combines all the necessary steps for the analysis. End of explanation dprint("-------------------------------") dprint("Running analysis 0") dprint("-------------------------------") dataDirectory1 = "hdfs:///user/hadoop/spring-index/BloomGridmet/" bandNum1 = 3 dataDirectory2 = "hdfs:///user/hadoop/spring-index/LeafGridmet/" bandNum2 = 3 maskFile = "hdfs:///user/hadoop/usa_state_masks/california_4km.tif" resultDirectory = "hdfs:///user/emma/svd/BloomGridmetLeafGridmetCali/" #Create Result dir subprocess.run(['hadoop', 'dfs', '-mkdir', resultDirectory]) runAnalysis(dataDirectory1, dataDirectory2, bandNum1, bandNum2, maskFile, resultDirectory) dprint("-------------------------------") dprint("Ending analysis 0") dprint("-------------------------------") Explanation: Analyses In this section, the various analyses are initiated. Each analysis uses a different pair of datasets. 
Analysis 0 End of explanation dprint("-------------------------------") dprint("Running analysis 1") dprint("-------------------------------") dataDirectory1 = "hdfs:///user/hadoop/spring-index/BloomGridmet/" bandNum1 = 3 dataDirectory2 = "hdfs:///user/hadoop/spring-index/LeafGridmet/" bandNum2 = 3 maskFile = "hdfs:///user/hadoop/usa_mask_gridmet.tif" resultDirectory = "hdfs:///user/emma/svd/BloomGridmetLeafGridmet/" #Create Result dir subprocess.run(['hadoop', 'dfs', '-mkdir', resultDirectory]) runAnalysis(dataDirectory1, dataDirectory2, bandNum1, bandNum2, maskFile, resultDirectory) dprint("-------------------------------") dprint("Ending analysis 1") dprint("-------------------------------") Explanation: Analysis 1 This analysis focusses on Bloom and Leaf data from the USA from 1980 to 2016 at a 4K spatial resolution. End of explanation dprint("-------------------------------") dprint("Running analysis 2") dprint("-------------------------------") dataDirectory1 = "hdfs:///user/hadoop/spring-index/BloomGridmet/" bandNum1 = 3 dataDirectory2 = "hdfs:///user/hadoop/avhrr/SOST4Km/" bandNum2 = 0 maskFile = "hdfs:///user/hadoop/usa_mask_gridmet.tif" resultDirectory = "hdfs:///user/emma/svd/BloomGridmetSOST4Km/" #Create Result dir subprocess.run(['hadoop', 'dfs', '-mkdir', resultDirectory]) runAnalysis(dataDirectory1, dataDirectory2, bandNum1, bandNum2, maskFile, resultDirectory) dprint("-------------------------------") dprint("Ending analysis 2") dprint("-------------------------------") Explanation: Analysis 2 This analysis focusses on Bloom and SOS data from the USA from 1980 to 2016 at a 4K spatial resolution. End of explanation dprint("-------------------------------") dprint("Running analysis 3") dprint("-------------------------------") dataDirectory1 = "hdfs:///user/hadoop/spring-index/LeafGridmet/" bandNum1 = 3 dataDirectory2 = "hdfs:///user/hadoop/avhrr/SOST4Km/" bandNum2 = 0 maskFile = "hdfs:///user/hadoop/usa_mask_gridmet.tif" resultDirectory = "hdfs:///user/emma/svd/LeafGridmetSOST4Km/" #Create Result dir subprocess.run(['hadoop', 'dfs', '-mkdir', resultDirectory]) runAnalysis(dataDirectory1, dataDirectory2, bandNum1, bandNum2, maskFile, resultDirectory) dprint("-------------------------------") dprint("Ending analysis 3") dprint("-------------------------------") Explanation: Analysis 3 This analysis focusses on Leaf and SOS data from the USA from 1980 to 2016 at a 4K spatial resolution. End of explanation dprint("-------------------------------") dprint("Running analysis 4") dprint("-------------------------------") dataDirectory1 = "hdfs:///user/hadoop/spring-index/BloomFinalLowPR/" bandNum1 = 0 dataDirectory2 = "hdfs:///user/hadoop/avhrr/SOSTLowPR/" bandNum2 = 0 maskFile = "hdfs:///user/hadoop/spring-index/BloomFinalLowPR/1989.tif" resultDirectory = "hdfs:///user/emma/svd/BloomFinalLowPRSOSTLowPR/" #Create Result dir subprocess.run(['hadoop', 'dfs', '-mkdir', resultDirectory]) runAnalysis(dataDirectory1, dataDirectory2, bandNum1, bandNum2, maskFile, resultDirectory) dprint("-------------------------------") dprint("Ending analysis 4") dprint("-------------------------------") Explanation: Analysis 4 This analysis focusses on BloomFinalLowPR and SOSTLowPR data from the USA from 1989 to 2014 1Km resolution. 
End of explanation dprint("-------------------------------") dprint("Running analysis 5") dprint("-------------------------------") dataDirectory1 = "hdfs:///user/hadoop/spring-index/LeafFinalLowPR/" bandNum1 = 0 dataDirectory2 = "hdfs:///user/hadoop/avhrr/SOSTLowPR/" bandNum2 = 0 maskFile = "hdfs:///user/hadoop/spring-index/LeafFinalLowPR/1989.tif" resultDirectory = "hdfs:///user/emma/svd/LeafFinalLowPRSOSTLowPR/" #Create Result dir subprocess.run(['hadoop', 'dfs', '-mkdir', resultDirectory]) runAnalysis(dataDirectory1, dataDirectory2, bandNum1, bandNum2, maskFile, resultDirectory) dprint("-------------------------------") dprint("Ending analysis 5") dprint("-------------------------------") Explanation: Analysis 5 This analysis focusses on LeafFinalLowPR and SOSTLowPR data from the USA from 1989 to 2014 1Km resolution. End of explanation dprint("-------------------------------") dprint("Running analysis 6") dprint("-------------------------------") dataDirectory1 = "hdfs:///user/hadoop/spring-index/BloomFinalLowPR/" bandNum1 = 0 dataDirectory2 = "hdfs:///user/hadoop/spring-index/LeafFinalLowPR/" bandNum2 = 0 maskFile = "hdfs:///user/hadoop/spring-index/BloomFinalLowPR/1989.tif" resultDirectory = "hdfs:///user/emma/svd/BloomFinalLowPRLeafFinalLowPR/" #Create Result dir subprocess.run(['hadoop', 'dfs', '-mkdir', resultDirectory]) runAnalysis(dataDirectory1, dataDirectory2, bandNum1, bandNum2, maskFile, resultDirectory) dprint("-------------------------------") dprint("Ending analysis 6") dprint("-------------------------------") Explanation: Analysis 6 This analysis focusses on BloomFinalLowPR and LeafFinalLowPR data from the USA from 1989 to 2014 1Km resolution. End of explanation
12,852
Given the following text description, write Python code to implement the functionality described below step by step Description: Using the trained weights in an ensemble of neurons On the function points branch of nengo On the vision branch of nengo_extras Step1: Load the MNIST database Step2: Each digit is represented by a one hot vector where the index of the 1 represents the number Step3: Load the saved weight matrices that were created by trainging the model Step4: The network where the mental imagery and scaling occurs The state, seed and ensemble parameters (including encoders) must all be the same for the saved weight matrices to work The number of neurons (n_hid) must be the same as was used for training The input must be shown for a short period of time to be able to view the scaling The recurrent connection must be from the neurons because the weight matices were trained on the neuron activities Step5: The following is not part of the brain model, it is used to view the output for the ensemble Since it's probing the neurons themselves, the output must be transformed from neuron activity to visual image Step6: Pickle the probe's output if it takes a long time to run Step7: Testing
Python Code: import nengo import numpy as np import cPickle from nengo_extras.data import load_mnist from nengo_extras.vision import Gabor, Mask from matplotlib import pylab import matplotlib.pyplot as plt import matplotlib.animation as animation Explanation: Using the trained weights in an ensemble of neurons On the function points branch of nengo On the vision branch of nengo_extras End of explanation # --- load the data img_rows, img_cols = 28, 28 (X_train, y_train), (X_test, y_test) = load_mnist() X_train = 2 * X_train - 1 # normalize to -1 to 1 X_test = 2 * X_test - 1 # normalize to -1 to 1 Explanation: Load the MNIST database End of explanation temp = np.diag([1]*10) ZERO = temp[0] ONE = temp[1] TWO = temp[2] THREE= temp[3] FOUR = temp[4] FIVE = temp[5] SIX = temp[6] SEVEN =temp[7] EIGHT= temp[8] NINE = temp[9] labels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE] dim =28 Explanation: Each digit is represented by a one hot vector where the index of the 1 represents the number End of explanation label_weights = cPickle.load(open("label_weights1000.p", "rb")) activity_to_img_weights = cPickle.load(open("activity_to_img_weights_scale1000.p", "rb")) scale_up_after_encoder_weights = cPickle.load(open("scale_up_after_encoder_weights1000.p", "r")) scale_down_after_encoder_weights = cPickle.load(open("scale_down_after_encoder_weights1000.p", "r")) scale_up_weights = cPickle.load(open("scale_up_weights1000.p","rb")) scale_down_weights = cPickle.load(open("scale_down_weights1000.p","rb")) Explanation: Load the saved weight matrices that were created by trainging the model End of explanation rng = np.random.RandomState(9) n_hid = 1000 model = nengo.Network(seed=3) with model: #Stimulus only shows for brief period of time stim = nengo.Node(lambda t: ZERO if t < 0.1 else 0) #nengo.processes.PresentInput(labels,1))# ens_params = dict( eval_points=X_train, neuron_type=nengo.LIF(), #Why not use LIF? 
intercepts=nengo.dists.Choice([-0.5]), max_rates=nengo.dists.Choice([100]), ) # linear filter used for edge detection as encoders, more plausible for human visual system encoders = Gabor().generate(n_hid, (11, 11), rng=rng) encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True) ens = nengo.Ensemble(n_hid, dim**2, seed=3, encoders=encoders, **ens_params) #Recurrent connection on the neurons of the ensemble to perform the rotation nengo.Connection(ens.neurons, ens.neurons, transform = scale_down_after_encoder_weights.T, synapse=0.1) #Connect stimulus to ensemble, transform using learned weight matrices nengo.Connection(stim, ens, transform = np.dot(label_weights,activity_to_img_weights).T, synapse=0.1) #Collect output, use synapse for smoothing probe = nengo.Probe(ens.neurons,synapse=0.1) sim = nengo.Simulator(model) sim.run(5) Explanation: The network where the mental imagery and scaling occurs The state, seed and ensemble parameters (including encoders) must all be the same for the saved weight matrices to work The number of neurons (n_hid) must be the same as was used for training The input must be shown for a short period of time to be able to view the scaling The recurrent connection must be from the neurons because the weight matices were trained on the neuron activities End of explanation '''Animation for Probe output''' fig = plt.figure() output_acts = [] for act in sim.data[probe]: output_acts.append(np.dot(act,activity_to_img_weights)) def updatefig(i): im = pylab.imshow(np.reshape(output_acts[i],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True) return im, ani = animation.FuncAnimation(fig, updatefig, interval=0.1, blit=True) plt.show() Explanation: The following is not part of the brain model, it is used to view the output for the ensemble Since it's probing the neurons themselves, the output must be transformed from neuron activity to visual image End of explanation #The filename includes the number of neurons and which digit is being rotated filename = "mental_scaling_output_ZERO_" + str(n_hid) + ".p" cPickle.dump(sim.data[probe], open( filename , "wb" ) ) Explanation: Pickle the probe's output if it takes a long time to run End of explanation testing = np.dot(ZERO,np.dot(label_weights,activity_to_img_weights)) plt.subplot(121) pylab.imshow(np.reshape(testing,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) #Get image testing = np.dot(ZERO,np.dot(label_weights,activity_to_img_weights)) #Get activity of image _, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=testing) #Get rotated encoder outputs testing_scale = np.dot(testing_act,scale_down_after_encoder_weights) #Get activities testing_scale = ens.neuron_type.rates(testing_scale, sim.data[ens].gain, sim.data[ens].bias) for i in range(2): testing_scale = np.dot(testing_scale,scale_down_after_encoder_weights) testing_scale = ens.neuron_type.rates(testing_scale, sim.data[ens].gain, sim.data[ens].bias) #testing_rotate = np.dot(testing_rotate,rotation_weights) testing_scale = np.dot(testing_scale,activity_to_img_weights) plt.subplot(122) pylab.imshow(np.reshape(testing_scale,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r')) plt.show() Explanation: Testing End of explanation
12,853
Given the following text description, write Python code to implement the functionality described below step by step Description: Author Step1: Simulated FastQ data Installation Step2: Creating the BAM (mapping) and BED files Step3: This uses bwa and samtools behind the scene. Then, we will convert the resulting BAM file (FN433596.fasta.sorted.bam) into a BED file once for all. To do so, we use bioconvert (http Step4: sequana_coverage We execute sequana_coverage to find the ROI (region of interest). We should a few detections (depends on the threshold and length of the genome of course). Later, we will inject events as long as 8000 bases. So, we should use at least 16000 bases for the window parameter length. As shown in the window_impact notebook a 20,000 bases is a good choice to keep false detection rate low. Step5: The false positives Step7: Most of the detected events have a zscore close to the chosen thresholds (-4 and 4). Moreover, most events have a size below 50. So for the detection of CNVs with size above let us say 2000, the False positives is (FP = 0). More simulations would be required to get a more precise idea of the FP for short CNVs but the FP would remain small. For instance on this example, FP=1 for CNV size >100, FP=2 for CNV size >40, which remains pretty small given the length of the genome (3Mbp). Checking CNV detection Event injections (deletion, duplication, or mix of both) Step8: Deleted regions are all detected Step9: duplicated regions Step10: Same results with W=20000,40000,60000,100000 but recovered CN is better with larger W Step11: Note that you may see events with negative zscore. Those are false detection due to the presence of two CNVs close to each other. This can be avoided by increasing the window size e.g. to 40000 Step12: Mixes of duplicated and deleted regions Step13: Some events (about 1%) may be labelled as not found but visual inspection will show that there are actually detected. This is due to a starting position being offset due to noise data set that interfer with the injected CNVs. Conclusions with simulated data and no CNV injections, sequana coverage detects some events that cross the threshold. However, there all have low zscores (close to the chosen threshold) and exhibit short lengths (below 100 bases). Simulated CNVs Step14: With 50 simulations, we get 826 events. (100 are removed because on the edge of the origin of replication), which means about 16 events per simulation. The max length is 90. None of the long events (above 50) appear at the same position (distance by more than 500 bases at least) so long events are genuine false positives.
Python Code: !sequana_coverage --download-reference FN433596 Explanation: Author: Thomas Cokelaer Jan 2018 Local time execution: about 10 minutes In this notebook, we will simulate fastq reads and inject CNVs. We will then look at the sensitivity (proportion of true positive by the sum of positives) of sequana_coverage. We use the data and strategy described in section 3.2 of "CNOGpro: detection and quantification of CNVs in prokaryotic whole-genome sequencing data, bioinformatics 31(11), 2015 (Brynildsrud et al)" Here, we will use the same reference: FN433596 (staphylococcus aureus) as in the paper above, which is also used in the manuscript The main goal is to generate simulated data, and check that the sensitivity is high (keeping specificity low) by injecting various CNVs. Requirements sequana version 0.7.0 was used art_illumina Get the reference There are many ways to download the reference (FN433596). Here below we use sequana_coverage tool but of course, you can use your own tool, or simply go to http://github.com/sequana/resources/coverage (look for FN433596.fasta.bz2). End of explanation ! art_illumina -sam -i FN433596.fa -p -l 100 -ss HS20 -f 20 -m 500 -s 40 -o paired_dat -f 100 Explanation: Simulated FastQ data Installation: conda install art Simulation of data coverage 100X -l: length of the reads -f: coverage -m: mean size of fragments -s: standard deviation of fragment size -ss: type of hiseq This takes a few minutes to produce End of explanation # no need for the *aln and *sam, let us remove them to save space !rm -f paired*.aln paired_dat.sam !sequana_mapping --reference FN433596.fa --file1 paired_dat1.fq --file2 paired_dat2.fq 1>out 2>err Explanation: Creating the BAM (mapping) and BED files End of explanation # bioconvert FN433596.fa.sorted.bam simulated.bed -f # or use e.g. bedtools: !bedtools genomecov -d -ibam FN433596.fa.sorted.bam > simulated.bed Explanation: This uses bwa and samtools behind the scene. Then, we will convert the resulting BAM file (FN433596.fasta.sorted.bam) into a BED file once for all. To do so, we use bioconvert (http://bioconvert.readthedocs.io) that uses bedtools behind the scene: End of explanation !sequana_coverage --input simulated.bed --reference FN433596.fa -w 20001 -o --level WARNING -C .5 !cp report/*/*/rois.csv rois_noise_20001.csv # An instance of coverage signal (yours may be slightly different) from IPython.display import Image Image("coverage.png") Explanation: sequana_coverage We execute sequana_coverage to find the ROI (region of interest). We should a few detections (depends on the threshold and length of the genome of course). Later, we will inject events as long as 8000 bases. So, we should use at least 16000 bases for the window parameter length. As shown in the window_impact notebook a 20,000 bases is a good choice to keep false detection rate low. 
End of explanation %pylab inline # Here is a convenient function to plot the ROIs in terms of sizes # and max zscore def plot_results(file_roi, choice="max"): import pandas as pd roi = pd.read_csv(file_roi) #"rois_cnv_deletion.csv") roi = roi.query("start>100 and end<3043210") plot(roi["size"], roi["{}_zscore".format(choice)], "or", label="candidate ROIs") for this in [3,4,5,-3,-4,-5]: if this == 3: label = "thresholds" else: label="_nolegend_" axhline(this, ls="--", label=label) print("{} ROIs found".format(len(roi))) xlabel("length of the ROIs") ylabel("z-scores") legend() return roi roi = plot_results("rois_noise_20001.csv", "max") Explanation: The false positives End of explanation import random import pandas as pd def create_deletion(): df = pd.read_csv("simulated.bed", sep="\t", header=None) positions = [] sizes = [] for i in range(80): # the + and -4000 shift are there to guarantee the next # CNV does not overlap with the previous one since # CNV length can be as much as 8000 pos = random.randint(37000*i+4000, 37000*(i+1)-4000) size = random.randint(1,8) * 1000 positions.append(pos) #size = 2000 df.loc[pos:pos+size,2] = 0 #deletion sizes.append(size) df.to_csv("cnv_deletion.bed", sep="\t", header=None, index=None) return positions, sizes def create_duplicated(): df = pd.read_csv("simulated.bed", sep="\t", header=None) positions = [] sizes = [] for i in range(80): pos = random.randint(37000*i+4000, 37000*(i+1)-4000) size = random.randint(1,8) * 1000 positions.append(pos) df.loc[pos:pos+size,2] += 100 #duplicated sizes.append(size) df.to_csv("cnv_duplicated.bed", sep="\t", header=None, index=None) return positions, sizes def create_cnvs_mixed(): df = pd.read_csv("simulated.bed", sep="\t", header=None) # we will place 10% of CNV of size from 1000 to 8000 import random positions = [] sizes = [] for i in range(80): pos = random.randint(37000*i+4000, 37000*(i+1)-4000) size = random.randint(1,8) * 1000 positions.append(pos) status = random.randint(0,1) if status == 0: df.loc[pos:pos+size,2] -= 50 elif status == 1: df.loc[pos:pos+size,2] += 50 sizes.append(size) df.to_csv("cnv_mixed.bed", sep="\t", header=None, index=None) return positions, sizes def check_found(positions, sizes, roi, precision=200, min_size=150): A simple function to check given the position and size that the injected CNVs are detected in the ROIs We check that the starting or ending position of at least one ROI coincide with one ROI and that this ROI has at least a length of 200. Indeed, injections are at least 1000 bases and noise are generally below 100 bases as shown above. found = [False] * len(positions) i = 0 zscores = [] for position,size in zip(positions, sizes): for this in roi.iterrows(): this = this[1] if (abs(this.start-position)<precision or abs(this.end-position-size)<precision )and this['size'] > min_size: #print(this.start, this.end, position, size) found[i] = True zscores.append(this.mean_zscore) continue if found[i] is False: print("position not found {} size={}".format(position, size)) i+=1 print("Found {}".format(sum(found))) return zscores Explanation: Most of the detected events have a zscore close to the chosen thresholds (-4 and 4). Moreover, most events have a size below 50. So for the detection of CNVs with size above let us say 2000, the False positives is (FP = 0). More simulations would be required to get a more precise idea of the FP for short CNVs but the FP would remain small. 
For instance on this example, FP=1 for CNV size >100, FP=2 for CNV size >40, which remains pretty small given the length of the genome (3Mbp). Checking CNV detection Event injections (deletion, duplication, or mix of both) End of explanation # call this only once !!!! positions_deletion, sizes_deletion = create_deletion() !sequana_coverage --input cnv_deletion.bed -o -w 20001 --level WARNING !cp report/*/*/rois.csv rois_cnv_deleted.csv rois_deletion = plot_results("rois_cnv_deleted.csv") # as precise as 2 base positions but for safety, we put precision of 10 and we can check that the detection rate is 100% zscores = check_found(positions_deletion, sizes_deletion, rois_deletion, precision=5) Explanation: Deleted regions are all detected End of explanation positions_duplicated, sizes_duplicated = create_duplicated() !sequana_coverage --input cnv_duplicated.bed -o -w 40001 --level ERROR -C .3 --no-html --no-multiqc !cp report/*/*/rois.csv rois_cnv_duplicated_40001.csv rois_duplicated = plot_results("rois_cnv_duplicated_40001.csv", choice="max") Explanation: duplicated regions End of explanation rois_duplicated = plot_results("rois_cnv_duplicated_20000.csv", choice="max") Explanation: Same results with W=20000,40000,60000,100000 but recovered CN is better with larger W End of explanation check_found(positions_duplicated, sizes_duplicated, rois_duplicated, precision=5) Explanation: Note that you may see events with negative zscore. Those are false detection due to the presence of two CNVs close to each other. This can be avoided by increasing the window size e.g. to 40000 End of explanation positions_mix, sizes_mix = create_cnvs_mixed() !sequana_coverage --input cnv_mixed.bed -o -w 40001 --level ERROR --no-multiqc --no-html --cnv-clustering 1000 !cp report/*/*/rois.csv rois_cnv_mixed.csv Image("coverage_with_cnvs.png") rois_mixed = plot_results("rois_cnv_mixed.csv", choice="max") # note that here we increase the precision to 100 bases. The positions # are not as precise as in the duplication or deletion cases. check_found(positions_mix, sizes_mix, rois_mixed, precision=20) Explanation: Mixes of duplicated and deleted regions End of explanation roi = plot_results("rois_noise_20001.csv") what is happening here is that we detect many events close to the threshold. So for instance all short events on the left hand side have z-score close to 4, which is our threshold. By pure chance, we get longer events of 40 or 50bp. This is quite surprinsing and wanted to know whether those are real false positives or due to a genuine feature in the genome (e.g. repeated regions that prevent a good mapping) What is not shown in this plot is the position of the event. We can simulate the same data again (different seed). If those long events appear at the same place, they ca be considered as genuine, otherwise, they should be considered as potential background appearing just by chance. so, we generated 50 simulated data set and reproduce the image above. 
We store the data in 50_rois.csv from easydev import execute as shell def create_data(start=0,end=10): for i in range(start, end): print("---------------- {}".format(i)) cmd = "art_illumina -sam -i FN433596.fa -p -l 100 -ss HS20 -f 20 -m 500 -s 40 -o paired_dat -f 100" shell(cmd) cmd = "rm -f paired*.aln paired_dat.sam" shell(cmd) cmd = "sequana_mapping --reference FN433596.fa --file1 paired_dat1.fq --file2 paired_dat2.fq 1>out 2>err" shell(cmd) cmd = "bedtools genomecov -d -ibam FN433596.fa.sorted.bam > simulated.bed" shell(cmd) cmd = "sequana_coverage --input simulated.bed --reference FN433596.fa -w 20001 -o --no-html --no-multiqc" shell(cmd) cmd = "cp report/*/*/rois.csv rois_{}.csv".format(i) shell(cmd) #create_data(0,50) import pandas as pd rois = pd.read_csv("50_simulated_rois.csv") rois = rois.query("start>100 and end <3043210") roi = plot_results("50_simulated_rois.csv", choice="max") Explanation: Some events (about 1%) may be labelled as not found but visual inspection will show that there are actually detected. This is due to a starting position being offset due to noise data set that interfer with the injected CNVs. Conclusions with simulated data and no CNV injections, sequana coverage detects some events that cross the threshold. However, there all have low zscores (close to the chosen threshold) and exhibit short lengths (below 100 bases). Simulated CNVs: the 80 deletions are all detected with the correct position (+/- 1 base) and sizes (+/- 1 base) the 80 duplications are all detected with the correct position (+/- 1 base) and sizes (+/- 1 base !) the mix of 80 detection with coverage at 50 and 150 are all detected. Note, however, that some CNVs detections are split in several events. We implemented an additional clustering for CNVs (use --cnv-clustering 1000). This solve this issue. As for the position, there are usually correct (+-20 bases). Visual inspection of the ROIs files show that the events are all detected. However, they may be split or the actual starting point of the event is not precise. So, for those simulated data and type of CNVs injection (CN 0, 2, 0.5, 1.5), we get a sensitivity close to 100%. Extra notes about False positives when we applied sequana coverage on the simulated mapped reads to estimate the rate of False Positives, we got about 20 ROIs with events having (max) zscore below 5 but up to 50 bases. One question we had is Does ROIs shorter than 50bp and with z-scores below 5 should be ignored in CNV or other analyses, or are these bases identifying genuine features in the genome (such as unmappable sequence)? In brief, we think that those events are part of the background noise. So, such events should not be interpreted as genuine features. Here is why End of explanation roi = plot_results("100_simulated_rois.csv", choice="mean") roi = plot_results("100_simulated_rois.csv", choice="max") Explanation: With 50 simulations, we get 826 events. (100 are removed because on the edge of the origin of replication), which means about 16 events per simulation. The max length is 90. None of the long events (above 50) appear at the same position (distance by more than 500 bases at least) so long events are genuine false positives. End of explanation
12,854
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Advanced Step3: Let's also combine our plotting code into a cohesive function Step4: Now we can tie our plot function, plot_planck, to the interact function from ipywidgets
Python Code: # Import numpy and alias to "np" import numpy as np # Import and alias to "plt" import matplotlib.pyplot as plt def planck(wavelength, temp): Return the emitted radiation from a blackbody of a given temp and wavelength Args: wavelength (float): wavelength (m) temp (float): temperature of black body (Kelvin) Returns: float: spectral radiance (W / (sr m^3)) k_b = 1.3806488e-23 # J/K Boltzmann constant h = 6.626070040e-34 # J s - Planck's constant c = 3e8 # m/s - speed of light return ((2 * h * c ** 2) / wavelength ** 5 * 1 / (np.exp(h * c / (wavelength * k_b * temp)) - 1)) Explanation: Advanced: ipywidgets Hate manually parameters and re-running your code for a parameter change? Ever want to interact with your data using a GUI? ipywidgets allows you to easily link up Python code with user interface widgets. They provide documentation in the form of tutorials and examples on their Github site. Let's continue the demonstration of Planck's equation by incorporating some interactivity: Installation First, install ipywidgets using either pip or conda: pip: bash pip install ipywidgets conda: bash conda install ipywidgets Example First we'll get our code from our basic demonstration: End of explanation def plot_planck(temp): Plot the spectral radiance for a blackbody of a given temperature Args: temp (float): temperature of body wavelength = np.linspace(1e-8, 10e-6, 1000) rad = planck(wavelength, temp) text_x = wavelength[rad.argmax()] * 1e6 text_y = rad.max() / 1e3 / 1e9 temp_str = '%.2f K' % temp fig, ax = plt.subplots() ax.plot(wavelength * 1e6, rad / 1e3 / 1e9) ax.text(text_x, text_y, temp_str, ha='center') ax.set_xlabel(r'Wavelength ($\mu m$)') ax.set_ylabel(r'Spectral radiance ($kW \cdot sr^{-1} \cdot m^{-2} \cdot nm^{-1}$)') ax.set_xlim([1e-8 * 1e6, 10e-6 * 1e6]) Explanation: Let's also combine our plotting code into a cohesive function: End of explanation %matplotlib nbagg from ipywidgets import interactive from IPython.core.display import display vis = interactive(plot_planck, temp=(250, 10e3, 100)) display(vis) Explanation: Now we can tie our plot function, plot_planck, to the interact function from ipywidgets: End of explanation
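A small optional variation on the cell above: if finer control over the widget is needed, an explicit FloatSlider can be passed to interactive instead of the (min, max, step) tuple. This is only a sketch reusing the plot_planck function defined earlier; the slider bounds are arbitrary illustration values.

from ipywidgets import FloatSlider, interactive
from IPython.display import display

temp_slider = FloatSlider(min=250, max=10000, step=50, value=5800,
                          description='T (K)', continuous_update=False)
display(interactive(plot_planck, temp=temp_slider))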
12,855
Given the following text description, write Python code to implement the functionality described below step by step Description: TOC Thematic Report - February 2019 (Part 1 Step2: 1.2. Finland Jussi has supplied an entirely new dataset for all Finnish stations covering the period from 1990 to 2017. This supersedes the data already in the database. I have therefore removed all post-1990 data for both "core" and "TOC_TRENDS_2015" stations, and replaced the values with the most recent dataset. I have also updated the station properties (NFC codes etc.) based on the latest information provided. One of the "core" Finnish stations, FI13, is no longer monitored. Jussi has proposed that this station be removed from the "core" and replaced with Nat_FI_27. This is a nearby station with similar characteristics and ongoing monitoring. It is also one of the "trends" stations, with code 'Nat_FI_27' in the 2015 trends dataset and code 'Tr18_FI_39478' in the latest trends work. Based on this, I have removed FI13 from the "core" and substituted Nat_FI_27 instead. 1.3. Czech Republic The Czech data was reviewed and updated during 2016. The latest data provided during 2017 has now also been added. 1.4. Germany It looks as though there has been a long-running confusion regarding site names and codes for the German stations. As far as I can make out, the following changes are required Step3: 1.11. USA As described in the trends notebook John has supplied an entirely new set of data for the US, but some of the NFC codes have changed. I have therefore updated the codes in our database to reflect these changes. The latest station codes for the 95 "core" US sites are given in .\ICP_Waters\Call_For_data_2018\usa_trends_and_core_sites.xlsx Note that John's most recent dataset does not include any data for 4 of the "core" stations Step4: 1.12. UK There are 6 UK stations within the "core" programme. As described in the "trends" notebook, some issues have been identified with the historic data already in the database. I will therefore delete all the old data and upload corrected values compatible with the most recent trends work. For the UK, the "core" stations are a subset of the "trends" stations, so it's just a question of uploading a cut-down version of the cleaned "trends" dataset. I have created a new Excel file for just the "core" stations here .\ICP_Waters\Call_For_data_2018\replies\uk\icpw_uk_core_all.xlsx This has been uploaded to the database. Step5: 1.13. Ireland The existing "core" project includes 10 stations in Ireland, but 7 of these (all the river sites) are no longer monitored. Julian has proposed a further 11 lake sites as replacements, which gives 21 Irish sites in total, but only 14 of them with active monitoring. All the station locations and metadata for the Irish sites are given in .\ICP_Waters\Call_for_Data_2017\Replies\ireland\ireland_20170929.xlsx Note that for many of the sites there are several possible monitoring points (e.g. on the lake shore or out in open water), but all of them are given the same geographic co-ordinates in Julian's spreadsheet (i.e. a "station" comprises several nearby monitoring "points"). For ICPW, we have historically focused on the open water sites, so I have filtered the shoreline samples out from Julian's dataset. In some cases, there are several open water sampling points per station as well. This leads to duplicates, because the same site has been monitoried more than once on the same date, but at different points. 
These duplicated values are usually very similar, and differences will therefore be ignored. I have created 11 new sites in the database based on the details in the Excel file above. This Excel file below contains the aggregated data supplied by Julian for the period from 2007 to 2017 .\ICP_Waters\Call_For_data_2018\replies\ireland\ireland_icpw_2007-2017_jes_tidied.xls Note Step6: Note Step7: 2.3. Select parameters Step8: 2.4. Select chemistry data Step9: 2.5. Basic quality checking 2.5.1. Duplicates The code above produces a warning about duplicated values, which should be investigated. Step10: There are duplicated values for stations in Norway, Canada, Germany, Estonia and Latvia. The duplicates in Norway are due to the way the NIVA lab handles reanalyses, and my code is designed to query these correctly. For the other countries, the problem is due to submitting both DOC and TOC data for the same water sample. These are added to our database under separate methods, but for ICPW both TOC and DOC get mapped to TOC in the output, because ICPW does not distinguish between the two. My code will currently pick one of the duplicates at random, which is not ideal. However, as long as the ICPW assumption that TOC ~ DOC is correct, this shouldn't actually cause problems. (And if this assumption is not correct, we have broader issues to deal with in the ICPW database). 2.5.2. Number of stations with data From above, there are 261 stations associated with the selected projects. Do they all have at least some relevant data? Step11: So, the only station with no data is IE20 - one of the new Irish lakes. I have checked Julian's original data submission and this looks OK Step12: The plots above illustrate the wide variability in monitoring across the ICPW programme. Some points to note Step13: Note the following
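As a side note on the duplicate handling mentioned above (the Irish multi-point samples and the TOC/DOC pairs), the basic clean-up amounts to averaging repeat samples per station and date. A minimal pandas sketch, with placeholder frame and column names rather than the exact database fields:

import pandas as pd

# 'df' stands for a long-format chemistry table with one row per sample;
# 'station_code' and 'sample_date' are placeholder column names.
dedup = (df.groupby(['station_code', 'sample_date'], as_index=False)
           .mean(numeric_only=True))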
Python Code: ## Switch TOC/DOC pre-March 1995 to method with correction factor of 1.28 #with eng.begin() as conn: # sql = ("UPDATE resa2.water_chemistry_values2 " # "SET method_id = 10823 " # "WHERE sample_id IN ( " # " SELECT water_sample_id " # " FROM resa2.water_samples " # " WHERE station_id IN (23461, 23462, 23463, 23464, 23465) " # " AND sample_date < DATE '1995-03-01') " # "AND method_id IN (10294, 10273)") # conn.execute(sql) Explanation: TOC Thematic Report - February 2019 (Part 1: Data clean-up) This notebook describes cleaning and updating data for the ~260 "core" ICPW stations (as opposed to the broader network of 430 "trends" stations). Note that there is substantial overlap between the "core" and "trends" sites, but the data are stored separately in the database: the most recent data for the "trends" work is associated with a project named 'ICPW_TOCTRENDS_2018' (project_id=4390), whereas the "core" stations are split according to country with projects named 'ICPWaters CA', 'ICPWaters NO' etc. In theory, where a station is used in both the "core" and "trends" work, the associated chemistry data should be the same, but in practice this is not always be the case. Note also that, in 2015, Tore created a set of country-specific "TOC trends" projects, named e.g. 'ICPW_TOCTRENDS_2015_NO'. These include yet more duplicated data, but they should now be considered deprecated as they have been superseded by 'ICPW_TOCTRENDS_2018'. The purpose of the work presented here is to (i) quality assess and upload the latest data for the "core" stations; (ii) review - and where necessary update - the sites used in each country (e.g. changes to monitoring mean that some locations are no longer suitable for ICPW); and (iii) to check that the "core" and "trends" datasets are broadly in agreement. 1. Update and restructure data from Focal Centres 1.1. Canada There are 18 Canadian stations in the "core" ICPW programme and I have added the most recent templates to the database. The code below applies a correction factor of 1.28 to the records for TOC in Nova Scotia collected before March 1995. See e-mail exchanges with Don, John and Heleen on 15/02/2019 for further details. End of explanation # Get a list of the Swedish sites stn_xlsx = (r'../../../Call_For_data_2018/replies' r'/sweden/sweden_core_stations_mvm_codes_feb_2019.xlsx') swe_df = pd.read_excel(stn_xlsx) print (len(swe_df)) swe_df.head() # Lookup table matching MVM pars to RESA methods par_map_xlsx = (r'../../../Sweden_MVM_API/map_mvm_pars_to_resa2.xlsx') par_map_df = pd.read_excel(par_map_xlsx) par_map_df def f(row): Function to deal with flags. 
if '<' in row['value_']: val = '<' elif '>' in row['value_']: val = '>' else: val = np.nan return val # # Path to local file containing MVM access tokens # token_xlsx = r'../../../Sweden_MVM_API/md_mvm_tokens.xlsx' # # Time period of interest # st_yr = 1970 # end_yr = 2016 # # Loop over stations # for idx, row in list(swe_df.iterrows()): # # Get IDs # resa_id = row['station_id'] # mvm_id = row['nfc_code'] # name = row['station_name'] # print('Processing: %s' % name) # print (' Getting data from MVM...') # # Get data from MVM # mvm_df = mvm_python.query_mvm_station_data(mvm_id, # st_yr, # end_yr, # token_xlsx) # print (' Restructuring...') # # Get pars of interest # mvm_df = pd.merge(mvm_df, par_map_df, # how='inner', # left_on='par_name', # right_on='mvm_par') # # Convert units # mvm_df['value'] = mvm_df['value'] * mvm_df['factor'] # # If multiple depths are available, get the shallowest # mvm_df.sort_values(by=['depth1', 'depth2'], inplace=True) # mvm_df.drop_duplicates(subset=['mvm_id', 'sample_date', 'par_name'], # inplace=True) # # Get just cols of interest # mvm_df = mvm_df[['sample_date', 'icpw_method_id', 'value']] # # Occasionally, there are still some duplicates (e.g. Tot-N_ps and Tot-N_TNb) # # at the same site-date. Average these for now # mvm_df = mvm_df.groupby(['sample_date', # 'icpw_method_id']).agg({'value':'mean'}).reset_index() # # Sometimes values of 0 are reported. Drop these as dubious # mvm_df['value'].replace(0, np.nan, inplace=True) # mvm_df.dropna(subset=['value'], inplace=True) # # Make sure the df can be pivoted (integrity check - don't actually need # # to pivot, so result is not saved as variable) # mvm_df.pivot(index='sample_date', columns='icpw_method_id') # # Build water samp df # ws_df = mvm_df[['sample_date']].copy() # ws_df['station_id'] = resa_id # ws_df['depth1'] = 0 # ws_df['depth2'] = 0 # ws_df.drop_duplicates(subset=['sample_date'], inplace=True) # ws_df.sort_values(by='sample_date', inplace=True) # ws_df = ws_df[['station_id', 'sample_date', 'depth1', 'depth2']] # # Add water samples to db # print (' Writing data to WATER_SAMPLES table...') # dtypes = {c:types.VARCHAR(ws_df[c].str.len().max()) # for c in ws_df.columns[ws_df.dtypes == 'object'].tolist()} # ws_df.to_sql(name='water_samples', # schema='resa2', # con=eng, # if_exists='append', # index=False, # dtype=dtypes) # # Get sample_ids back from db # print (' Identifying sample IDs...') # sql = ('SELECT water_sample_id, station_id, sample_date ' # 'FROM resa2.water_samples ' # 'WHERE station_id = %s' % resa_id) # ws_df = pd.read_sql_query(sql, eng) # print (' Checking data integrity...') # # Join sample id to chemistry # chem_df = pd.merge(mvm_df, ws_df, how='left', on='sample_date') # # Get cols of interest # chem_df = chem_df[['water_sample_id', 'icpw_method_id', 'value']] # chem_df.columns = ['sample_id', 'method_id', 'value_'] # # Drop NaNs # chem_df.dropna(how='any', inplace=True) # # Deal with flags # chem_df['value_'] = chem_df['value_'].astype(str) # chem_df['flag1'] = chem_df.apply(f, axis=1) # # Extract numeric chars # chem_df['value'] = chem_df['value_'].str.extract("([-+]?\d*\.\d+|\d+)", expand=True) # chem_df['value'] = chem_df['value'].astype(float) # del chem_df['value_'] # # Reorder cols # chem_df = chem_df[['sample_id', 'method_id', 'value', 'flag1']] # # Check flags are consistent # if not pd.isnull(chem_df['flag1']).all(): # if not set(chem_df['flag1'].unique()).issubset(['<', '>', np.nan]): # print ('Some flags are not valid:') # print (chem_df['flag1'].unique()) # # Add chem 
to db # print (' Writing data to WATER_CHEMISTRY_VALUES2 table...') # dtypes = {c:types.VARCHAR(chem_df[c].str.len().max()) # for c in chem_df.columns[chem_df.dtypes == 'object'].tolist()} # chem_df.to_sql(name='water_chemistry_values2', # schema='resa2', # con=eng, # if_exists='append', # index=False, # dtype=dtypes) # print (' Done.') Explanation: 1.2. Finland Jussi has supplied an entirely new dataset for all Finnish stations covering the period from 1990 to 2017. This supersedes the data already in the database. I have therefore removed all post-1990 data for both "core" and "TOC_TRENDS_2015" stations, and replaced the values with the most recent dataset. I have also updated the station properties (NFC codes etc.) based on the latest information provided. One of the "core" Finnish stations, FI13, is no longer monitored. Jussi has proposed that this station be removed from the "core" and replaced with Nat_FI_27. This is a nearby station with similar characteristics and ongoing monitoring. It is also one of the "trends" stations, with code 'Nat_FI_27' in the 2015 trends dataset and code 'Tr18_FI_39478' in the latest trends work. Based on this, I have removed FI13 from the "core" and substituted Nat_FI_27 instead. 1.3. Czech Republic The Czech data was reviewed and updated during 2016. The latest data provided during 2017 has now also been added. 1.4. Germany It looks as though there has been a long-running confusion regarding site names and codes for the German stations. As far as I can make out, the following changes are required: Station DE34, which is currently named "Odenwald, Schmerbach 1" in our database, should actually be named "Odenwald, Rombach 4" Station DE35, which is currently named "Taunus, Rombach 4" in our database, should actually be labelled "DE36" and named "Odenwald, Schmerbach 1" Station DE11, which is currently named "Schwarzwald, Kleine Kinzig" in our database, should actually be labelled "DE35" and named "Schwarzwald, Kleine Kinzig Huettenhardt" I have made these changes, as well as updating the station properties with the NFC codes provided by Jens. Hopefully this will help to avoid confusion in the future. Note: Changing site names/codes like this is quite dangerous, as there is a good chance of data becoming associated with the wrong station. It might be worth getting Jens to check the data we have after this clean-up, or perhaps submitting an entirely new dataset if that's not too much work. Other points to note: Monitoring has stopped at several of the German stations, and we also seem to have a gap in data submission between 2013 and 2016 (giving an actual data gap from the end of 2012 to the end of 2014) For the lakes, Jens has provided data from a wide variety of depths. We are primarily interested in surface samples, but restricting the data to just depth = 0 results in a very limited dataset. As with some other countries, I have therefore filtered the data to include samples (including "mixed samples") from less than or equal to 2 m depth. 1.5. Latvia Data tidied by Cathrine - see e-mail received 18.02.2019. 1.6. Moldova Data tidied by Cathrine - see e-mail received 18.02.2019. I have created two new stations (MD01 and MD02) and associated them with a new project ('ICPWaters MD'). 1.7. Estonia Data tidied by Espen - see e-mail received 18.02.2019. 1.8. Italy Data tidied by Espen - see e-mail received 18.02.2019. I have created 6 new stations (IT07 to IT12) and added them to the existing 'ICPWaters IT' project. 1.9. 
Switzerland Data tidied by Espen - see e-mail received 18.02.2019. 1.10. Sweden The latest Swedish data are available from the MVM database. For consistency, it seems sensible to update the entire data series using the API, thereby ensuring that NIVA has the most recent quality assured data. The file .\ICP_Waters\Call_For_data_2018\replies\sweden\sweden_core_stations_mvm_codes_feb_2019.xlsx lists the 22 Swedish stations in the "core" programme. I have deleted all the data associated with these stations. The code below then uses my MVM_Python package to download updated series from the Swedish database and add them to RESA. End of explanation # Get list of US "core" stations stn_xlsx = r'../../../Call_For_data_2018/usa_trends_and_core_sites.xlsx' stn_df = pd.read_excel(stn_xlsx, sheet_name='Stations') us_cds = list(stn_df['nfc_code'].unique()) # Exclude inactive stations ['US82', 'US94', 'US125', 'X15:1C1-106'] inactive = ['ME-9998E', '1C1-092', '1C1-096', '1C1-106E'] us_cds = [i for i in us_cds if i not in inactive] len(us_cds) # Should be 91 # Read stream data riv_xl_path = r'../../../Call_for_Data_2018/replies/usa/Stoddard LTM streams_2018.xls' riv_df = pd.read_excel(riv_xl_path, sheet_name='LTM stream data 1979-2016') # Tidy del riv_df['PROGRAM'], riv_df['YR'] del riv_df['ALDS'], riv_df['ALOR'] del riv_df['DIC'], riv_df['FLOW'] # Rename riv_df.rename({'SITE_ID':'Code', 'DATE_TIME':'Date', 'ALTD':'TAL', 'ANC':'ALKGMIKRO', 'CA':'Ca', 'CL':'Cl', 'COND':'K25X', 'DOC':'TOC', 'MG':'Mg', 'NA':'Na', 'NH4':'NH4N', 'NO3':'NO3N', 'PH':'pH', 'SIO2':'SiO2', 'SO4':'SULF', 'WTEMP':'TEMP'}, inplace=True, axis=1) # Convert units riv_df['Ca'] = riv_df['Ca']*40.1/(1000*2) # ueq/l to mg/l riv_df['Cl'] = riv_df['Cl']*35.45/(1000*1) # ueq/l to mg/l riv_df['K'] = riv_df['K']*39.1/(1000*1) # ueq/l to mg/l riv_df['Mg'] = riv_df['Mg']*24.3/(1000*2) # ueq/l to mg/l riv_df['Na'] = riv_df['Na']*23./(1000*1) # ueq/l to mg/l riv_df['NH4N'] = riv_df['NH4N']*14. # ueq/l to ugN/l riv_df['NO3N'] = riv_df['NO3N']*14. # ueq/l to ugN/l riv_df['SULF'] = riv_df['SULF']*96.065/(1000*2) # ueq/l to mgSO4/l riv_df.head() # Read lake data la_xl_path = r'../../../Call_for_Data_2018/replies/usa/Stoddard LTM lakes_2018.xls' la_df = pd.read_excel(la_xl_path, sheet_name='LTM lake data 1980-2016') # Fill NaN in DEPTH column with 0 la_df['DEPTH'] = la_df['DEPTH'].fillna(0) # Get surface only la_df = la_df.query("DEPTH <= 1") # Tidy del la_df['PROGRAM'], la_df['SAMTYPE'] del la_df['year'], la_df['month'], la_df['day'] del la_df['ALDS'], la_df['ALOR'], la_df['CHLA'] del la_df['DIC'], la_df['PHEQ'], la_df['DEPTH'] # Rename la_df.rename({'SITE_ID':'Code', 'DATE_TIME':'Date', 'AL':'TAL', 'ANC':'ALKGMIKRO', 'CA':'Ca', 'CL':'Cl', 'COLOR':'COLOUR', 'COND':'K25X', 'DOC':'TOC', 'MG':'Mg', 'NA':'Na', 'NH4':'NH4N', 'NO3':'NO3N', 'PH':'pH', 'PTL':'TOTP', 'SIO2':'SiO2', 'SO4':'SULF', 'WTEMP':'TEMP'}, inplace=True, axis=1) # Convert units la_df['Ca'] = la_df['Ca']*40.1/(1000*2) # ueq/l to mg/l la_df['Cl'] = la_df['Cl']*35.45/(1000*1) # ueq/l to mg/l la_df['K'] = la_df['K']*39.1/(1000*1) # ueq/l to mg/l la_df['Mg'] = la_df['Mg']*24.3/(1000*2) # ueq/l to mg/l la_df['Na'] = la_df['Na']*23./(1000*1) # ueq/l to mg/l la_df['NH4N'] = la_df['NH4N']*14. # ueq/l to ugN/l la_df['NO3N'] = la_df['NO3N']*14. 
# ueq/l to ugN/l la_df['SULF'] = la_df['SULF']*96.065/(1000*2) # ueq/l to mgSO4/l la_df.head() # Combine river and lake data us_df = pd.concat([riv_df, la_df], axis=0, sort=False) # Filter to just sites in core project us_df = us_df[us_df['Code'].isin(us_cds)] # Join ICPW codes us_df = pd.merge(us_df, stn_df[['station_code', 'nfc_code']], left_on='Code', right_on='nfc_code') # Average duplicates us_df = us_df.groupby(['station_code', 'Date']).mean() # Tidy us_df.reset_index(inplace=True) # Check we have data for all sites assert len(us_df['station_code'].unique()) == len(us_cds), 'Some stations have no data.' # Rename us_df.rename({'station_code':'Code'}, inplace=True, axis=1) # Save to Excel out_xl = r'../../../Call_for_Data_2018/replies/usa/usa_lake_stream_all_core_to_2016_tidied.xlsx' us_df.to_excel(out_xl, sheet_name='Data', index=False) us_df.head() # # Delete existing data for 91 stations # # Get station IDs # sql = ("SELECT station_id FROM resa2.stations " # "WHERE station_code IN %s" % str(tuple(list(us_df['Code'].unique())))) # id_df = pd.read_sql(sql, eng) # # Delete from SAMPLE_SELECTIONS # print ('Deleting from SAMPLE_SELECTIONS') # sql = ("DELETE FROM resa2.sample_selections " # "WHERE water_sample_id IN ( " # " SELECT water_sample_id FROM resa2.water_samples " # " WHERE station_id IN %s)" % str(tuple(list(id_df['station_id'].unique())))) # eng.execute(sql) # # Delete from WCV2 # print ('Deleting from WATER_CHEMISTRY_VALUES2') # sql = ("DELETE FROM resa2.water_chemistry_values2 " # "WHERE sample_id IN ( " # " SELECT water_sample_id FROM resa2.water_samples " # " WHERE station_id IN %s)" % str(tuple(list(id_df['station_id'].unique())))) # eng.execute(sql) # # Delete from WATER_SAMPLES # print ('Deleting from WATER_SAMPLES') # sql = ("DELETE FROM resa2.water_samples " # "WHERE station_id IN %s" % str(tuple(list(id_df['station_id'].unique())))) # eng.execute(sql) Explanation: 1.11. USA As described in the trends notebook John has supplied an entirely new set of data for the US, but some of the NFC codes have changed. I have therefore updated the codes in our database to reflect these changes. The latest station codes for the 95 "core" US sites are given in .\ICP_Waters\Call_For_data_2018\usa_trends_and_core_sites.xlsx Note that John's most recent dataset does not include any data for 4 of the "core" stations: US82, US94, US125 and X15:1C1-106 because these are no longer active. I have deleted all the old data associated with the US sites, except for the 4 stations listed above. The code below reads John's new files and creates an Excel worksheet in the same format as the ICPW template, which can then be added back to the database. 
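Before re-uploading the tidied workbook, a quick sanity check along these lines can catch obvious problems. This is only a sketch relying on the out_xl file and the us_cds list created above.

chk = pd.read_excel(out_xl, sheet_name='Data')
assert chk['Code'].nunique() == len(us_cds), 'unexpected number of stations'
print(chk.groupby('Code')['Date'].agg(['min', 'max', 'count']).head())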
End of explanation # # Delete existing data for 6 UK stations # # Delete from SAMPLE_SELECTIONS # print ('Deleting from SAMPLE_SELECTIONS') # sql = ("DELETE FROM resa2.sample_selections " # "WHERE water_sample_id IN ( " # " SELECT water_sample_id FROM resa2.water_samples " # " WHERE station_id IN (23608, 23609, 23610, 23611, 23612, 36165))") # eng.execute(sql) # # Delete from WCV2 # print ('Deleting from WATER_CHEMISTRY_VALUES2') # sql = ("DELETE FROM resa2.water_chemistry_values2 " # "WHERE sample_id IN ( " # " SELECT water_sample_id FROM resa2.water_samples " # " WHERE station_id IN (23608, 23609, 23610, 23611, 23612, 36165))") # eng.execute(sql) # # Delete from WATER_SAMPLES # print ('Deleting from WATER_SAMPLES') # sql = ("DELETE FROM resa2.water_samples " # "WHERE station_id IN (23608, 23609, 23610, 23611, 23612, 36165)") # eng.execute(sql) Explanation: 1.12. UK There are 6 UK stations within the "core" programme. As described in the "trends" notebook, some issues have been identified with the historic data already in the database. I will therefore delete all the old data and upload corrected values compatible with the most recent trends work. For the UK, the "core" stations are a subset of the "trends" stations, so it's just a question of uploading a cut-down version of the cleaned "trends" dataset. I have created a new Excel file for just the "core" stations here .\ICP_Waters\Call_For_data_2018\replies\uk\icpw_uk_core_all.xlsx This has been uploaded to the database. End of explanation # Select projects prj_grid = nivapy.da.select_resa_projects(eng) prj_grid # Select all relevant projects prj_df = prj_grid.get_selected_df() print (len(prj_df), 'projects selected.') prj_df Explanation: 1.13. Ireland The existing "core" project includes 10 stations in Ireland, but 7 of these (all the river sites) are no longer monitored. Julian has proposed a further 11 lake sites as replacements, which gives 21 Irish sites in total, but only 14 of them with active monitoring. All the station locations and metadata for the Irish sites are given in .\ICP_Waters\Call_for_Data_2017\Replies\ireland\ireland_20170929.xlsx Note that for many of the sites there are several possible monitoring points (e.g. on the lake shore or out in open water), but all of them are given the same geographic co-ordinates in Julian's spreadsheet (i.e. a "station" comprises several nearby monitoring "points"). For ICPW, we have historically focused on the open water sites, so I have filtered the shoreline samples out from Julian's dataset. In some cases, there are several open water sampling points per station as well. This leads to duplicates, because the same site has been monitoried more than once on the same date, but at different points. These duplicated values are usually very similar, and differences will therefore be ignored. I have created 11 new sites in the database based on the details in the Excel file above. This Excel file below contains the aggregated data supplied by Julian for the period from 2007 to 2017 .\ICP_Waters\Call_For_data_2018\replies\ireland\ireland_icpw_2007-2017_jes_tidied.xls Note: While adding the data to the database, I have noticed that historic values (i.e. those already in the database) for alkalinity look very high. From a brief check, I suspect many values are a factor of 1000 too large. This will need investigating further before undertaking the trends analysis - see section 2.6, below. 1.14. 
Poland There are currently four Polish stations within the "core" programme (PL01 to PL04), but these have not been monitored since 2012 (see e-mail from Rafal received 17.06.2017 at 17.43). Instead, he has suggested 5 new stations and supplied data for these from 1999 to 2017. Details for the new sites are given in .\ICP_Waters\Call_for_Data_2016\Replies\poland\poland.xlsx I have therefore created new stations with codes PL05 to PL09, together with a tidied data sheet here .\ICP_Waters\Call_For_data_2018\replies\poland\icpw_poland_post-2012_jes_tidied.xls which has been added to the database. 2. Quality checking There are clearly some issues with the raw data in the database. Before going further with the trends analysis, I'd like to try to catch as many of the data errors as possible. For now, I'll focus on the main parameters of interest for the trends report: SO4, base cations (Ca and Mg), NO3, Alkalinity, ANC (calculated from Ca, Mg, K, Na, NH4, Cl, SO4 and NO3), DOC/TOC and pH. 2.1. Select projects End of explanation # Get stations for these projects stn_df = nivapy.da.select_resa_project_stations(prj_df, eng) print (len(stn_df), 'stations associated with the selected projects.') stn_df.head() # Map nivapy.spatial.quickmap(stn_df, popup='station_code', aerial_imagery=True) Explanation: Note: There are a total of 25 countries that have submitted data to ICPW in the past, but only the 15 above are suitable for the trends analysis. For reference, the others are: | Country | Comment | |:-----------:|:----------------------------------------------------------------------------------------------------:| | Armenia | Data only from 2004 - 2008. Does not use template. No response to recent calls for data | | Austria | Only 1 station. No data since 2012 | | Belarus | Only 1 station. No response to recent calls for data | | France | No longer involved in project. Never supplied data? | | Hungary | No longer involved in project. Never supplied data? | | Montenegro | Data only from 2006 - 2009. Does not use template. No response to recent calls for data | | Netherlands | Currently being processed by Cathrine | | Russia | Data only from 2009 - 2014. Does not use template. No response to recent calls for data | | Slovakia | Supplied as part of "trends" work. Could also be used in "core", but some overlap with Polish sites? | | Spain | No data supplied. No response to recent calls for data | Of these, it should eventually be possible to include data from the Netherlands, and maybe Slovakia too. 2.2. Select stations End of explanation # Get available parameters st_dt = '1970-01-01' end_dt = '2019-01-01' par_grid = nivapy.da.select_resa_station_parameters(stn_df, st_dt, end_dt, eng) par_grid # Select relevant parameters par_df = par_grid.get_selected_df() par_df Explanation: 2.3. Select parameters End of explanation # Get chem wc_df, dup_df = nivapy.da.select_resa_water_chemistry(stn_df, par_df, st_dt, end_dt, eng, drop_dups=True, lod_flags=False) wc_df.head() # Save for speed wc_df.to_csv('working_chem.csv', index=False, encoding='utf-8') # Load saved data wc_df = pd.read_csv('working_chem.csv', encoding='utf-8') wc_df['sample_date'] = pd.to_datetime(wc_df['sample_date']) Explanation: 2.4. Select chemistry data End of explanation # Which stations have duplicates? dup_df['station_code'].unique() Explanation: 2.5. Basic quality checking 2.5.1. Duplicates The code above produces a warning about duplicated values, which should be investigated. 
End of explanation print(len(wc_df['station_id'].unique()), 'stations with data.') # Identify missing station set(stn_df['station_id']) - set(wc_df['station_id']) stn_df.query('station_id == 38562') Explanation: There are duplicated values for stations in Norway, Canada, Germany, Estonia and Latvia. The duplicates in Norway are due to the way the NIVA lab handles reanalyses, and my code is designed to query these correctly. For the other countries, the problem is due to submitting both DOC and TOC data for the same water sample. These are added to our database under separate methods, but for ICPW both TOC and DOC get mapped to TOC in the output, because ICPW does not distinguish between the two. My code will currently pick one of the duplicates at random, which is not ideal. However, as long as the ICPW assumption that TOC ~ DOC is correct, this shouldn't actually cause problems. (And if this assumption is not correct, we have broader issues to deal with in the ICPW database). 2.5.2. Number of stations with data From above, there are 261 stations associated with the selected projects. Do they all have at least some relevant data? End of explanation # Get countries from proj names sql = ("SELECT station_id, " " SUBSTR(b.project_name, -2, 2) AS country " "FROM resa2.projects_stations a " "LEFT JOIN resa2.projects b " "ON a.project_id = b.project_id " "WHERE station_id IN %s " "AND SUBSTR(b.project_name, 1, 9) = 'ICPWaters'"% str(tuple(list(stn_df['station_id'])))) cnt_df = pd.read_sql(sql, eng) # Join to chem wc_df = pd.merge(wc_df, cnt_df, on='station_id') # Extract year wc_df['year'] = wc_df['sample_date'].dt.year # Number of samples per year per site agg = wc_df.groupby(['station_id', 'country', 'year']).agg({'station_code':'count'}).reset_index() agg.columns = ['station_id', 'country', 'year', 'count'] # Plot g = sn.catplot(data=agg, x='year', y='count', row='country', kind='bar', ci=100, sharey=False, aspect=5) for ax in g.axes: ax[0].xaxis.set_tick_params(which='both', labelbottom=True) plt.tight_layout() Explanation: So, the only station with no data is IE20 - one of the new Irish lakes. I have checked Julian's original data submission and this looks OK: the only data he supplied for IE20 is from the shoreline, and these samples have been deliberately excluded - see section 1.13, above. 2.5.3. Distribution of data through time The code below calculates the number of water samples per year per station. A bar plot is produced for each country, where the height of each bar shows the average number of samples per year and the error bars show the minimum and maximum. For example, in Norway during 1973, the minimum number of samples was 146 (at Langtjern) and the maximum number was 368 (at Birkenes). End of explanation # Melt to long format wc_lng = wc_df.melt(id_vars=['station_id', 'station_code', 'station_name', 'country', 'sample_date', 'year', 'depth1', 'depth2'], var_name='param') # Plot g = sn.catplot(data=wc_lng, x='country', y='value', row='param', kind='box', aspect=4, sharey=False) for ax in g.axes: ax[0].xaxis.set_tick_params(which='both', labelbottom=True) plt.tight_layout() Explanation: The plots above illustrate the wide variability in monitoring across the ICPW programme. Some points to note: The Norwegian sites have been monitored very consistently since the the 1980s (about 1 sample per week), as have some of the Canadian sites. 
It may be possible to try more sophisticated/detailed trend analyses using these high-frequency data Germany, Poland, Sweden and the USA (plus some Canadian sites) typically have monthly monitoring i.e. still quite detailed Switzerland, Estonia, Finland, Ireland, Italy, Latvia, the UK and the Czech Republic typically have fewer than 12 samples per year. This data is less detailed, and probably only suitable for calculating annual averages (rather than attempting to capture e.g. seasonal effects) The dataset for Moldova currently only covers the period from 2014 to 2017. It is therefore not yet suitable for estimating trends 2.6. Parameter distributions The code below produces boxplots for each parameter, split by country. End of explanation ## Apply factor of 1/83.26 to DOC #with eng.begin() as conn: # sql = ("UPDATE resa2.water_chemistry_values2 " # "SET value = value/83.26 " # "WHERE sample_id IN ( " # " SELECT water_sample_id " # " FROM resa2.water_samples " # " WHERE station_id IN (23658, 23659, 23660, 23661) " # " AND sample_date > DATE '2014-10-01') " # "AND method_id = 10273") # conn.execute(sql) ## Apply factor of 26.98 to Al #with eng.begin() as conn: # sql = ("UPDATE resa2.water_chemistry_values2 " # "SET value = value*26.98 " # "WHERE sample_id IN ( " # " SELECT water_sample_id " # " FROM resa2.water_samples " # " WHERE station_id IN (23658, 23659, 23660, 23661) " # " AND sample_date > DATE '2014-10-01') " # "AND method_id = 10249") # conn.execute(sql) Explanation: Note the following: There are clearly issues with the alkalinity data for the Irish and Italian stations. As mentioned in Section 1, I suspect there may be unit errors in some of the old templates (the recent ones look OK), which have resulted in values being uploaded that are a factor of 1000 too large. Needs further investigation Alkalinity for the Norwegian sites (from NIVA's lab) is reported in mmol/l. These values need converting to ueq/l in order to be compatible with data from the Focal Centres The data from Moldova is very different to that from the other ICPW sites. I don't know much about Moldova, but I guess the values are plausible if the regional is strongly influenced by marine processes and/or limestone geology? These sites don't have enough data for inclusion in the trends work, but check the values look reasonable with Øyvind nevertheless There are a few very high Cl concentrations in a few of the German and Italian samples. Are these values plausible, or likely contaminated? There are very high ammonium concentrations at some of the German sites, and very high nitrate concentrations in both Germany and Latvia. The nitrate values are plausible for heavily impacted (e.g. agricultural) systems, but they look a bit out of place within ICPW. Are these values reasonable? Some of the samples have NO3-N concentrations well above the drinking water limit specified by the Nitrates Directive (11.3 mg-N/l), which seems pretty high for supposedly "pristine" catchments There are clearly issues with the TOC values from some of the US stations. This problem was previously identified during data preparation for the "TOC trends" paper and John is following it up (see e-mail from John received 15.02.2019 at 21.54 for details) Some pH values from Latvia and Germany are very low. Did acidification really cause pH values as low as 3? 2.6.1. Correct alkalinity Ireland Checking of the Irish data shows that, in the past, alkalinity has been supplied both as meq/l and as ueq/l. 
It seems pretty obvious that errors have been made regarding the units in some cases, such that values in ueq/l have been reported as meq/l, which results in an incorrect multiplier of 1000 being applied. Looking at the distribution of alkalinity values for the Irish sites, the following SQL can be used to identify and fix the problem (by switching the method from meq/l to ueq/l). UPDATE RESA2.WATER_CHEMISTRY_VALUES2 SET method_id = 10298 WHERE sample_id IN (SELECT water_sample_id FROM RESA2.WATER_SAMPLES WHERE station_id IN ( SELECT station_id FROM RESA2.PROJECTS_STATIONS WHERE project_id = 3445 ) ) AND method_id = 10297 AND value &gt; 2; Italy Similarly, during 2010 to 2012, it looks as though alkalinity values in ueq/l for sites IT01 to IT06 have been mistakenly entered into the database as meq/l. This has been fixed using the following SQL UPDATE RESA2.WATER_CHEMISTRY_VALUES2 SET method_id = 10298 WHERE sample_id IN (SELECT water_sample_id FROM RESA2.WATER_SAMPLES WHERE station_id IN ( SELECT station_id FROM RESA2.PROJECTS_STATIONS WHERE project_id = 2987 ) ) AND method_id = 10297 AND value &gt; 10; 2.6.2. Correct Catskills TOC and Al End of explanation
12,856
Given the following text description, write Python code to implement the functionality described below step by step Description: Get information for all stations in list and write out to JSON file Step1: Optional
Python Code: OUTFN = "AK_NCDC_FirstOrderStations.json" SAVEDATA = False stationdata = [] for station in all_stations: path = os.path.join(endpoint_stations, "GHCND:{}".format(station)) fullbase = requests.compat.urljoin(baseurl, path) r = requests.get( fullbase, headers=custom_headers, ) stationdata.append(json.loads(r.text)) if SAVEDATA: with open(OUTFN, "w") as fh: fh.write(json.dumps(stationdata, indent=2)) with open(OUTFN, "w") as fh: fh.write(json.dumps(stationdata, indent=2)) Explanation: Get information for all stations in list and write out to JSON file End of explanation [item["name"] for item in stationdata] shortnamedic = { 'FAIRBANKS INTERNATIONAL AIRPORT, AK US': "Fairbanks", 'ANNETTE WEATHER SERVICE OFFICE AIRPORT, AK US': "Annette", 'JUNEAU AIRPORT, AK US': "Juneau", 'YAKUTAT AIRPORT, AK US': "Yakutat", 'KODIAK AIRPORT, AK US': "Kodiak", 'KING SALMON AIRPORT, AK US': "King Salmon", 'HOMER AIRPORT, AK US': "Homer", 'COLD BAY AIRPORT, AK US': "Cold Bay", 'ANCHORAGE TED STEVENS INTERNATIONAL AIRPORT, AK US': "Anchorage", 'ST PAUL ISLAND AIRPORT, AK US': "St. Paul", 'BIG DELTA AIRPORT, AK US': "Big Delta", 'GULKANA AIRPORT, AK US': "Gulkana", 'VALDEZ WEATHER SERVICE OFFICE, AK US': "Valdez", 'MCGRATH AIRPORT, AK US': "McGrath", 'TALKEETNA AIRPORT, AK US': "Talkeetna", 'BETTLES AIRPORT, AK US': "Bettles", 'BETHEL AIRPORT, AK US': "Bethel", 'KOTZEBUE RALPH WEIN MEMORIAL AIRPORT, AK ': "Kotzebue", 'NOME MUNICIPAL AIRPORT, AK US': "Nome", 'BARROW W POST W ROGERS AIRPORT, AK US': "Barrow (Utqiaġvik)"} for item in stationdata: item['shortname'] = shortnamedic[item['name']] with open(OUTFN, "w") as fh: fh.write(json.dumps(stationdata, indent=2)) Explanation: Optional: Add short names and then save again End of explanation
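As a quick check that the file round-trips, the saved JSON can be reloaded and the short names listed (a small sketch reusing OUTFN and the 'shortname' keys added above):

import json

with open(OUTFN) as fh:
    stations = json.load(fh)
print(sorted(item["shortname"] for item in stations))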
12,857
Given the following text description, write Python code to implement the functionality described below step by step Description: $$\begin{align}\Omega &= [0, 1]^2\ \Gamma_D &= \partial\Omega\end{align}$$ Step1: $$\begin{align}\kappa(x; \mu) &
Python Code: g = grid.make_cube_grid__2d_simplex_aluconform(lower_left=[0, 0], upper_right=[1, 1], num_elements=[4, 4], num_refinements=2, overlap_size=[0, 0]) #g.visualize('grid') Explanation: $$\begin{align}\Omega &= [0, 1]^2\ \Gamma_D &= \partial\Omega\end{align}$$ End of explanation #bump = functions.make_expression_function_1x1(g, 'x', 'cos(0.5*pi*x[0])*cos(0.5*pi*x[1])', order=3, name='bump') #one = functions.make_constant_function_1x1(g, 1.0, name='one') #diffusion = [one - bump, bump] #diffusion[0].visualize(g, 'diffusion_affine_part') #diffusion[1].visualize(g, 'diffusion_component') #f = functions.make_expression_function_1x1(g, 'x', '0.5*pi*pi*cos(0.5*pi*x[0])*cos(0.5*pi*x[1])', order=3, name='rhs') #f.visualize(g, 'force') #g_D = functions.make_constant_function_1x1(g, 0.0, name='dirichlet') kappa = functions.make_constant_function_1x1(g, 1.0, name='diffusion') identity = functions.make_constant_function_2x2(g, [[0, 1], [1, 0]], name='id') f = functions.make_constant_function_1x1(g, 1.0, name='force') g_D = functions.make_constant_function_1x1(g, 0.0, name='dirichlet') g_N = functions.make_constant_function_1x1(g, 0.0, name='neumann') space = gdt.make_cg_space__1x1__p1__fem(g) #space.visualize("cg_space") elliptic_op = gdt.make_elliptic_matrix_operator__istl_sparse(kappa, space) system_assembler = gdt.make_system_assembler(space) system_assembler.append(elliptic_op) system_assembler.assemble() Explanation: $$\begin{align}\kappa(x; \mu) &:= 1 - (1 - \mu) \cos(\tfrac{1}{2} \pi x_0) \cos(\tfrac{1}{2} \pi x_1)\ f(x) &:= \tfrac{1}{2} \pi^2 \cos(\tfrac{1}{2} \pi x_0) \cos(\tfrac{1}{2} \pi x_1)\ g_D(x) &:= 0\end{align}$$ End of explanation
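For reference, the assembled elliptic operator corresponds to the standard weak form of the problem stated above (a sketch of the formulation, not taken verbatim from the original notebook):

$$\begin{align}
a(u, v) &= \int_\Omega \kappa \, \nabla u \cdot \nabla v \, \mathrm{d}x, \\
\ell(v) &= \int_\Omega f \, v \, \mathrm{d}x,
\end{align}$$

with the Dirichlet datum $g_D$ imposed on $\Gamma_D = \partial\Omega$: find $u$ with $u|_{\Gamma_D} = g_D$ such that $a(u, v) = \ell(v)$ for all test functions $v$ vanishing on $\Gamma_D$.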
12,858
Given the following text description, write Python code to implement the functionality described below step by step Description: Contextual Bandits with TF-agents Learning Objectives Learn to load a dataset in BigQuery and connect to it using TensorFlow IO Learn how to transform a classification dataset into a contextual bandit problem Learn how to stream a BigQuery table into a TensorFlow Dataset Learn how to use TensorFlow Agents to define contextual bandit environment Learn how to use the neural epsilon-greedy policy to solve a contextual bandit problem Learn how to train a contextual bandit agent Learn how to predict with a contextual bandit agent Contextual Bandit (CB) is a machine learning framework in which an agent selects actions (also called arms) in order to maximize rewards in the long term. At each round, the agent receives some information about the current state (also called the context) and uses this information to select an action. As a consequence of this choice, it receives a reward. On the one hand, contextual bandit is one of the simplest instance of a reinforcement learning problem where a single state (or context) is provided to the agent and the play or episode stops after the first action has been chosen and the reward gotten. This setting appears in a number of useful problems in the industry, one of the best known being that of ad placements on a website Step1: Note Step2: Loading the dataset into BigQuery In this lab, we are going to use a classification dataset and turn it into a contextual bandit problem. Our dataset will be the Forest Cover Type from the UCI Machine Learning Repository, which associates various cartographic features of a given area with different labels representing different types of forests covering the areas. Here are a few rows from the original dataset (the last column is the label) Step3: At each time step, our CB agent will be given a context $x$ representing an area cartographic features (Elevation, Aspect, Slope, etc.). Then it will have to choose among one of 7 possible forest cover types as defined by the last column (Cover_type), and represented by the integer from 0 to 6. For convenience, we have pre-precessed the categorical features Wilderness_Area and Soil_Type into their one-hot-encoded versions. So the dataset we will use will have more columns (55 exactly) than the original covertype dataset. We will name the columns corresponding to the 54 features from X0 to X54, while the last column Y represents the label. The next cell defines our dataset column names and column types, and displays a few examples Step4: Let us now load this dataset into BigQuery into the table named bash PROJECT_ID.DATASET_ID.TABLE_ID where DATASET_ID and TABLE_ID are defined in the next cell among other variables like the DATASET_SCHEMA Step5: Exercise In the cell below, use the bq command line to create the dataset and populate the table from DATASET_SOURCE using the variables defined in the cell above Step6: Connecting to BigQuery We will now create a tf.data.Dataset connected to the data table we created in BigQuery. 
For that purpose, we will use Tensorflow_io, which offers a connector BigQueryClient to stream data directly out of BigQuery Step7: From our bq_session we can create a tf.data.Dataset using the parallel_read_rows method, which will read our BigQuery rows in parallel Step8: At this point the examples are stored in our tf_dataset as OrderedDict with the keys being the column names and the values being the corresponding row values Step9: Exercise Configure the tf_dataset we instantiated so that 1. the examples are stored as couples $(x, y)$ where $x$ is the feature vector with 54 components and $y$ is the label (Hint Step10: Verify that now the dataset has the correct form Step11: Initializing and configuring the CB environment In Tensorflow Agents, there are special classes that provide the contexts and the rewards to the agent. These classes are generally called environments. The environment defines the type of actions, states/contexts/observations, and rewards allowed in the problem through its action_spec, observation_spec, and reward_spec methods, respectively. In TensorFlow Agents the environments are specific to a given problem, but the agents are generic. This means that if you want to solve your own RL problem with TensorFlow Agents, you'll likely have to write the environment that represents your problem and define the type of actions and states allowed, but you won't need to implement the agent. Namely, you'll be able to use any of the generic agents in the TensorFlow Agents library. The agent will adapt to your problem setting using observation_spec, action_spec, and reward_spec from your environment. In this section, we will instanciate the environment to solve our "covertype contextual bandit" problem. In the TensorFlow Agents library, there is a special environment class named ClassificationBanditEnvironment that turns any multiclass labeled dataset into a contextual bandit environment. The contexts (or observations) will be the features in the dataset, the actions are the label classes. In general, the rewards can be sampled from a probability distribution depending on the actual and the guessed labels. In our case, the rewards will be deterministic Step12: If we sample from the distribution above, we obtain a $7\times 7$ identity matrix storing the rewards obtained by the agent if the agent selects class $i$ (corresponding to row $i$) when the actual class is $j$ (corresponding to column $j$). In our case we always obtain the identity matrix Step13: Exercise Instanciate the ClassificationBanditEnvironment and invoke its reward_spec, observation_spec, and action_spec methods to make sure they correspond to the covertype bandit problem. Note that the ClassificationBanditEnvironment can process many actions in parallel so it takes a batch_size argument to define the number of actions it will process simultaneously. Let us set that batch size to 1 for now. Step14: Exercise Use the reset method on the environment you have just instantiated to obtain the first step. Inspect this step observation attribute to see which cartographic features the environment has given you to guess the covertype. Then in a next cell guess a possible covertype from 0 to 6, and pass it to the environment using the step method. It will return the next step. Inspect the next step reward attribute to see if you guessed correctly. 
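A minimal sketch of the reset/step interaction described in this exercise; it assumes 'environment' is the ClassificationBanditEnvironment instantiated with batch_size=1 and that tf is TensorFlow.

first_step = environment.reset()
print(first_step.observation)    # the 54 cartographic features (the context)

guess = tf.constant([3], dtype=tf.int32)   # any cover type id in 0..6
next_step = environment.step(guess)
print(next_step.reward)          # 1.0 if the guess matched the true cover type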
Step15: The next step returned by the environment also contains the new context Step16: Initializing the Agent In Tensorflow Agents, the classes that implement algorithms to solve contextual bandit problems are called agents. In this lab, we will will use the NeuralEpsilonGreedyAgent, which implements the neural epsilon greedy algorithm. This algorithm predicts the reward for each possible action given a context as input using a neural network. This network is called a Q-Network, since it outputs the $Q$-values $Q(x, a)$ for each of the actions $a$, which are the predicted rewards for each of the actions $a$ given the input context $x$. The action is then chosen to be the one with maximal $Q$-value $1-\epsilon$ of the time, or a random action with probability $\epsilon$. This randomness allows the algorithm to explore states that could be promising even though the neural network may estimate their rewards poorly. Exercise Instanciate a Q-Network so that it takes * as input tensor an observation tensor as defined by the environment * as output tensor an action tensor as defined by the environment Configure the structure of the layers as you wish. Step17: Exercise Generate a step from the environment using its stepmethod, and feed its observation attribute to the $Q$-network. Verify that the tensor of $Q$-values you are getting is of the right shape (you should get a vector with 7 components containing the predicted reward for each of the covertype classes) Step18: Exercise Now that we have our QNetwork to estimate our action rewards, in the next cell, you will instanciate the agent from the NeuralEpsilonGreedyAgent class. You will need to retrieve the time_step_spec and the action_spec from the environment. The reward_network will be the QNetwork you instantiated previously. You can take Adam as optimizer with a LEARNING_RATE of your choice. You will also set the propensity of your agent to explore rather than exploit the action with highest predicted reward through the value of EPSILON. Both the LEARNING_RATE and the EPSILON greediness are hyper-paramaters that can affect the training of the agent very much. Step19: The agent has a policy attribute containing the strategy that the agent will use when confronted to a given context. The agent.policy has anaction method that takes in a TimeStep generated by the environment and containing the context. It then issues a PolicyStep containing the action chosen by the policy given this context Step20: Under the hood, the agent policy uses the QNetwork to predict the values of the different actions (or possible covertype classes in our case). However at this stage the QNetwork has not been trained and its weights are random. This means that the predicted classes are meaningless. To remediate that, the agent has a train method that takes an experience, that is, a triples of a state (or context) $x$, the action taken (or predicted class) $y$, and the obtained reward $r$. It then updates the parameters $\theta$ of the QNetwork $Q_\theta$ with a gradient update so that the predicted reward $Q_\theta(x, y)$ used by the agent to guess $y$ in context $x$ increases if $y$ is the correct class (i.e. reward $1$) and decreases if $y$ has been guessed wrongly. Now training one experience at a time may lead to an unstable training, and the algorithm may have difficulty to converge. 
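Before moving on to the replay buffer, here is a concrete sketch of the network and agent set-up described above; the layer sizes, learning rate and epsilon are arbitrary illustration values, not the lab's prescribed ones.

q_net = QNetwork(
    input_tensor_spec=environment.time_step_spec().observation,
    action_spec=environment.action_spec(),
    fc_layer_params=(100, 50),
)

agent = NeuralEpsilonGreedyAgent(
    time_step_spec=environment.time_step_spec(),
    action_spec=environment.action_spec(),
    reward_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.002),
    epsilon=0.01,
)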
To stabilize the training one collects a number of these experiences in a experience replay buffer in a first stage, and then sample batches of these experiences to apply the gradient update to the QNetwork parameters in a second stage, as we do in supervized learning. Collecting experiences in an experience replay buffer Reinforcement learning algorithms use experience replay buffers to store trajectories of experiences when executing a policy in an environment. During training, replay buffers are queried for a subset of the trajectories to "replay" the agent's experience. Sampling from the replay buffer facilitates data re-use and breaks harmful co-relations between sequential data in RL, although in contextual bandits this isn't absolutely required but still helpful to stabilize the training. Tensorflow Agents defines a number of replay buffer classes all sharing a common interface to store and access the experience data. In this lab, we will use TFUniformReplayBuffer which samples uniformly experience trajectories. Exercise Initialize a TFUniformReplayBuffer using for data_spec the trajectory_spec stored in the agent's policy. Then use a batch_size of 1, which will be the size of the batches sampled from the replay buffer for each gradient descent update. Themax_length argument indicates the maximum number of steps we allow to be stored in the replay buffer from a single episode. Since for CB the eposide size is always 1, you'll set this argument to that. Step21: Now we have a Replay buffer, but we also need something to fill it with. Often a common practice is to have the agent interact with and collect experiences from the environment, without actually learning from it (that is without updating the parameters of the QNetwork) in a first step. This data-collection loop can be carried out using a DynamicStepDriver, which will 1. feed the TimeStep generated by the environment and containing the context data for the new step and the reward for the previous step to the agent, 1. collect the action generated by the agent policy from the environment context, and then 1. feed that action back to the environment. All that repeated in a loop. The data encountered by the driver at each step is saved in a NamedTuple called Trajectory and broadcast to a set of observers such as replay buffers. This trajectory includes the observation from the environment, the action recommended by the policy, the reward obtained, the type of the current and the next step, etc. In order for the driver to fill the replay buffer with data it needs acess to the add_batch method of the replay buffer. Exercise Instanciate a DynamicStepDriver using the environment we used so far. Note that the agent has two different policies Step22: The driver.run method will then start making the agent interact with the environment, while its sends experience trajectories to the replay buffer. Once done, one can retrieve the collected experience data from the replay buffer by invoking its as_dataset method Step23: Training the agent now amounts to retrieve (batches of) experience data from the replay_dataset and feed that data to the agent train method Step24: Training the agent We are almost ready now to write our contextual bandit agent training loop. Before we do that, let us recap what we have done so far, and configure all the objects anew for the real training. 
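Before the recap, here is a minimal collect-and-train sketch putting together the replay buffer, driver and train call described above; the step counts, buffer capacity and sample batch size are illustration values only.

replay_buffer = TFUniformReplayBuffer(
    data_spec=agent.policy.trajectory_spec,
    batch_size=1,      # one parallel environment, matching the env batch size
    max_length=128,    # illustration capacity; the exercise above uses 1
)

driver = DynamicStepDriver(
    env=environment,
    policy=agent.collect_policy,
    observers=[replay_buffer.add_batch],
    num_steps=64,
)
driver.run()

experience, _ = next(iter(
    replay_buffer.as_dataset(sample_batch_size=16, num_steps=1)))
loss_info = agent.train(experience)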
First of all, let us gather all the constants we have defined until this point in the same cell, and let us add a few which will be useful for saving our trained policy Step25: The next cell instanciates anew the contextual bandit main players (i.e., the environment, the neural network, the agent, and the experience replay buffer) so that we can have a global view of the process Step26: During training it is very useful to collect various metrics on the top of the training loss so that we can assess whether the training was effective or not from the plot of their learning curves. Just like you have metrics such as accuracy or recall in supervised learning, in bandit problems we use the regret metric per episode. To calculate the regret, we need to know what the highest possible expected reward is at every time step. The regret is essentially the difference between the highest reward we could have gotten and the reward we obtained summed up over our experience. As training progress, we expect the regret to decrease. To compute the regret, we need to define the optimal_reward_fn using python env_util.compute_optimal_reward_with_classification_environment which can know the optimal reward at each time step from the environment. Another similar metric is the number of times a suboptimal action has been chosen. That requires the definition if the optimal_action_fn in the same way. In the cell below, we collect all these metrics in the metrics list Step27: At each step of the training loop, we will compute the overall value for each of the metrics from the experience generated by the driver. To broadcast that experience data generated by the driver to the metrics, we can include these metrics to the list of observers that the driver broadcasts experience to. So now our observers list will contain the replay buffer add_batch method as well as all the metrics we defined above Step29: We are almost ready for the training loop! We need a couple of things though to help us save our model and metrics first though before that. Below we provide you with a helper function in order to save your agent and its the metrics, while training the model. For more information on checkpoints and policy savers (which will be used in the training loop below) go here. Step30: Exercise Now we have all the components ready to start training the model. A step in the training loop follows the following sequence Step31: Visualizing the learning curves with TensorBoard Now that the model has trained, we want to visualize the learning curve. For that we will use tensorboard, since we have saved each of the metrics as Tensorboard events with the checkpoint manager. Let us upload the Tensorboard logs to tensorboard.dev; to do that Step32: You should see something like this if all has gone well Step33: Because our trained agent consumes TimeStep, we need to wrap the raw feature using ts.TimeStep, which expects step_type, reward, discount, and observation as input. Since we are in prediction mode now, reward, and discount are irrelant and can be assigned any arbitary values (see here for more details) Step34: At last,let us get the recommeded action from our trained agent
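As a sketch of that prediction step ('features' stands for a list of 54 context values; the reward and discount fields are placeholders since they are ignored at inference time, and the observation dtype/shape must match the environment's observation_spec):

observation = tf.constant([features])   # shape [1, 54]
time_step = ts.TimeStep(
    step_type=tf.constant([ts.StepType.FIRST]),
    reward=tf.zeros([1]),
    discount=tf.ones([1]),
    observation=observation,
)
policy_step = agent.policy.action(time_step)
print(policy_step.action)   # the recommended cover type, an integer in 0..6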
Python Code: pip freeze | grep tf_agents || pip install -q tf_agents==0.11.0 Explanation: Contextual Bandits with TF-agents Learning Objectives Learn to load a dataset in BigQuery and connect to it using TensorFlow IO Learn how to transform a classification dataset into a contextual bandit problem Learn how to stream a BigQuery table into a TensorFlow Dataset Learn how to use TensorFlow Agents to define contextual bandit environment Learn how to use the neural epsilon-greedy policy to solve a contextual bandit problem Learn how to train a contextual bandit agent Learn how to predict with a contextual bandit agent Contextual Bandit (CB) is a machine learning framework in which an agent selects actions (also called arms) in order to maximize rewards in the long term. At each round, the agent receives some information about the current state (also called the context) and uses this information to select an action. As a consequence of this choice, it receives a reward. On the one hand, contextual bandit is one of the simplest instance of a reinforcement learning problem where a single state (or context) is provided to the agent and the play or episode stops after the first action has been chosen and the reward gotten. This setting appears in a number of useful problems in the industry, one of the best known being that of ad placements on a website: The different ads to publish on a webpage are the different actions, the context is given by a user features, and the reward is 1 if the user clicks on the published ad and 0 otherwise. On the other hand, contextual bandit is a natural generalization of a classification problem in supervized learning. Namely, consider a data set of points $(x, y)$ where the $x$'s are the features and the $y$'s are the labels in $k$ possible classes. We can setup an associated contextual bandit problem as follows: The CB agent at each time step is given the context $x$. From that information, it needs to select from $k$ possible actions (which are the $k$ possible classes). If the agent chooses the correct class for feature $x$, then the reward is $1$, and it is zero otherwise. The general goal of maximising the long-term cumulative rewards for the CB agent is equivalent to that of minimizing the training loss in supervized learning. Contextual bandit is more general than supervized classification though, since in many useful CB settings we actually know the reward only for the actions we have taken. In this lab, we will learn how to solve a contextual bandit problem derived from a classification dataset with Q-learning and the associated neural epsilon-greedy strategy using a powerful reinforcement learning library: TensorFlow Agents. Acknowledgement: This lab is based on a tutorial originally written by Anant Nawalgaria and Alex Erfurt. We thank them for making their original material available to us. 
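Before diving into the lab, the classification-to-bandit mapping described above can be written down in a few lines of plain Python. The two helper functions below are illustrative only; they are not part of the lab code, which relies on the TF-Agents classes introduced later.

```python
import numpy as np

def bandit_reward(chosen_class: int, true_class: int) -> float:
    # Reward 1 for guessing the label of the context, 0 otherwise.
    return 1.0 if chosen_class == true_class else 0.0

def epsilon_greedy(q_values: np.ndarray, epsilon: float, rng) -> int:
    # Explore with probability epsilon, otherwise exploit the best estimated reward.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q = np.array([0.1, 0.7, 0.2])  # predicted reward for each of 3 hypothetical classes
action = epsilon_greedy(q, epsilon=0.01, rng=rng)
print(action, bandit_reward(action, true_class=1))
```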
Setup Let us intall TensorFlow Agents if it is not already installed and import the necessary libraries: End of explanation import functools import os import time import warnings import pandas as pd import tensorflow as tf # pylint: disable=g-explicit-tensorflow-version-import # from tensorflow.python.framework.dtypes import int64 from tensorflow_io.bigquery import BigQueryClient from tensorflow_probability import distributions as tfd from tf_agents.bandits.agents.neural_epsilon_greedy_agent import ( NeuralEpsilonGreedyAgent, ) from tf_agents.bandits.environments import environment_utilities as env_util from tf_agents.bandits.environments.classification_environment import ( ClassificationBanditEnvironment, ) from tf_agents.bandits.metrics import tf_metrics as tf_bandit_metrics from tf_agents.drivers.dynamic_step_driver import DynamicStepDriver from tf_agents.eval import metric_utils from tf_agents.metrics import tf_metrics from tf_agents.networks.q_network import QNetwork from tf_agents.replay_buffers.tf_uniform_replay_buffer import ( TFUniformReplayBuffer, ) from tf_agents.trajectories import time_step as ts warnings.filterwarnings("ignore") REGION = "us-central1" PROJECT_ID = !(gcloud config get-value project) PROJECT_ID = PROJECT_ID[0] os.environ["PROJECT_ID"] = PROJECT_ID Explanation: Note: You may need to restart the kernel after installation of the library. End of explanation pd.read_csv("../../tfx_pipelines/data/dataset.csv").head(2) Explanation: Loading the dataset into BigQuery In this lab, we are going to use a classification dataset and turn it into a contextual bandit problem. Our dataset will be the Forest Cover Type from the UCI Machine Learning Repository, which associates various cartographic features of a given area with different labels representing different types of forests covering the areas. Here are a few rows from the original dataset (the last column is the label): End of explanation DATASET_SOURCE = "../data/covertype.csv" df = pd.read_csv(DATASET_SOURCE, header=None) LABEL_NAME = "Y" FEATURE_PREFIX = "X" N_SAMPLES, N_COLUMNS = df.shape COLUMN_NAMES = [f"{FEATURE_PREFIX}{i}" for i in range(N_COLUMNS - 1)] + [ LABEL_NAME ] COLUMN_TYPES = [tf.int64] * N_COLUMNS df.columns = COLUMN_NAMES df.head(2) Explanation: At each time step, our CB agent will be given a context $x$ representing an area cartographic features (Elevation, Aspect, Slope, etc.). Then it will have to choose among one of 7 possible forest cover types as defined by the last column (Cover_type), and represented by the integer from 0 to 6. For convenience, we have pre-precessed the categorical features Wilderness_Area and Soil_Type into their one-hot-encoded versions. So the dataset we will use will have more columns (55 exactly) than the original covertype dataset. We will name the columns corresponding to the 54 features from X0 to X54, while the last column Y represents the label. 
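The one-hot pre-processing mentioned above (expanding Wilderness_Area and Soil_Type) was done before this notebook, but for readers who want to reproduce it, a hedged sketch with pandas is shown below; the tiny frame and the category values are made up for illustration.

```python
import pandas as pd

raw = pd.DataFrame({
    "Elevation": [2596, 2590],
    "Wilderness_Area": ["Rawah", "Neota"],  # hypothetical categorical values
    "Cover_Type": [5, 5],
})

encoded = pd.get_dummies(raw, columns=["Wilderness_Area"])  # one column per category
print(encoded.columns.tolist())
```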
The next cell defines our dataset column names and column types, and displays a few examples: End of explanation DATASET_LOCATION = "US" DATASET_SCHEMA = ",".join([f"{name}:INTEGER" for name in COLUMN_NAMES]) DATASET_ID = "contextual_bandit" TABLE_ID = "covertype" os.environ["DATASET_LOCATION"] = DATASET_LOCATION os.environ["DATASET_SOURCE"] = DATASET_SOURCE os.environ["DATASET_SCHEMA"] = DATASET_SCHEMA os.environ["DATASET_ID"] = DATASET_ID os.environ["TABLE_ID"] = TABLE_ID Explanation: Let us now load this dataset into BigQuery into the table named bash PROJECT_ID.DATASET_ID.TABLE_ID where DATASET_ID and TABLE_ID are defined in the next cell among other variables like the DATASET_SCHEMA: End of explanation %%bash # TODO Explanation: Exercise In the cell below, use the bq command line to create the dataset and populate the table from DATASET_SOURCE using the variables defined in the cell above: End of explanation bq_client = BigQueryClient() bq_session = bq_client.read_session( f"projects/{PROJECT_ID}", PROJECT_ID, TABLE_ID, DATASET_ID, COLUMN_NAMES, COLUMN_TYPES, ) Explanation: Connecting to BigQuery We will now create a tf.data.Dataset connected to the data table we created in BigQuery. For that purpose, we will use Tensorflow_io, which offers a connector BigQueryClient to stream data directly out of BigQuery: python from tensorflow_io.bigquery import BigQueryClient The first step is to create aBigQuery client and then a read session from it: End of explanation tf_dataset = bq_session.parallel_read_rows( block_length=N_SAMPLES, num_parallel_calls=tf.data.experimental.AUTOTUNE, ) Explanation: From our bq_session we can create a tf.data.Dataset using the parallel_read_rows method, which will read our BigQuery rows in parallel: End of explanation for example in tf_dataset.take(1): print(example) Explanation: At this point the examples are stored in our tf_dataset as OrderedDict with the keys being the column names and the values being the corresponding row values: End of explanation # TODO Explanation: Exercise Configure the tf_dataset we instantiated so that 1. the examples are stored as couples $(x, y)$ where $x$ is the feature vector with 54 components and $y$ is the label (Hint: Use .map) 1. it loops over the dataset infefinitively (Hint: Use .repeat) 1. it shuffles the dataset (Use buffer_size=400000) End of explanation for example in tf_dataset.take(1): print(example) Explanation: Verify that now the dataset has the correct form: End of explanation covertype_reward_distribution = tfd.Independent( tfd.Deterministic(tf.eye(7)), reinterpreted_batch_ndims=2 ) Explanation: Initializing and configuring the CB environment In Tensorflow Agents, there are special classes that provide the contexts and the rewards to the agent. These classes are generally called environments. The environment defines the type of actions, states/contexts/observations, and rewards allowed in the problem through its action_spec, observation_spec, and reward_spec methods, respectively. In TensorFlow Agents the environments are specific to a given problem, but the agents are generic. This means that if you want to solve your own RL problem with TensorFlow Agents, you'll likely have to write the environment that represents your problem and define the type of actions and states allowed, but you won't need to implement the agent. Namely, you'll be able to use any of the generic agents in the TensorFlow Agents library. 
The agent will adapt to your problem setting using observation_spec, action_spec, and reward_spec from your environment. In this section, we will instanciate the environment to solve our "covertype contextual bandit" problem. In the TensorFlow Agents library, there is a special environment class named ClassificationBanditEnvironment that turns any multiclass labeled dataset into a contextual bandit environment. The contexts (or observations) will be the features in the dataset, the actions are the label classes. In general, the rewards can be sampled from a probability distribution depending on the actual and the guessed labels. In our case, the rewards will be deterministic: The agent will receive 1 if it chooses the right class, and 0 otherwise. The next cell creates this reward structure using Tensorflow Probability: End of explanation covertype_reward_distribution.sample() Explanation: If we sample from the distribution above, we obtain a $7\times 7$ identity matrix storing the rewards obtained by the agent if the agent selects class $i$ (corresponding to row $i$) when the actual class is $j$ (corresponding to column $j$). In our case we always obtain the identity matrix: End of explanation # TODO: Define the environment BATCH_SIZE = 1 # TODO: Inspect the reward_spec to ensure the reward is scalar float tensors # TODO: Inspect the observation_spec to ensure that the shape=(NUMBER_OF_FEATURE_COLUMNS, ) # TODO: Inspect the action_spec to ensure that it is an integer tensor with values in the range [0, 6] Explanation: Exercise Instanciate the ClassificationBanditEnvironment and invoke its reward_spec, observation_spec, and action_spec methods to make sure they correspond to the covertype bandit problem. Note that the ClassificationBanditEnvironment can process many actions in parallel so it takes a batch_size argument to define the number of actions it will process simultaneously. Let us set that batch size to 1 for now. End of explanation # TODO - Use the reset method to get the first step and inspect the observation attribute # TODO - Choose an action (i.e. integer between 0 to 6), make a step, and inspect the reward Explanation: Exercise Use the reset method on the environment you have just instantiated to obtain the first step. Inspect this step observation attribute to see which cartographic features the environment has given you to guess the covertype. Then in a next cell guess a possible covertype from 0 to 6, and pass it to the environment using the step method. It will return the next step. Inspect the next step reward attribute to see if you guessed correctly. End of explanation next_step.observation Explanation: The next step returned by the environment also contains the new context: End of explanation # TODO - Instantiate a Q-Network with the following layer structure: LAYERS = (300, 200, 100, 100, 50, 50) Explanation: Initializing the Agent In Tensorflow Agents, the classes that implement algorithms to solve contextual bandit problems are called agents. In this lab, we will will use the NeuralEpsilonGreedyAgent, which implements the neural epsilon greedy algorithm. This algorithm predicts the reward for each possible action given a context as input using a neural network. This network is called a Q-Network, since it outputs the $Q$-values $Q(x, a)$ for each of the actions $a$, which are the predicted rewards for each of the actions $a$ given the input context $x$. 
The action is then chosen to be the one with maximal $Q$-value $1-\epsilon$ of the time, or a random action with probability $\epsilon$. This randomness allows the algorithm to explore states that could be promising even though the neural network may estimate their rewards poorly. Exercise Instanciate a Q-Network so that it takes * as input tensor an observation tensor as defined by the environment * as output tensor an action tensor as defined by the environment Configure the structure of the layers as you wish. End of explanation # TODO - Get a step from the environment, pass it to the QNetwork, and inspect the Q-values Explanation: Exercise Generate a step from the environment using its stepmethod, and feed its observation attribute to the $Q$-network. Verify that the tensor of $Q$-values you are getting is of the right shape (you should get a vector with 7 components containing the predicted reward for each of the covertype classes): End of explanation EPSILON = 0.01 LEARNING_RATE = 0.002 # TODO - Instantiate a NeuralEpsilonGreedyAgent agent Explanation: Exercise Now that we have our QNetwork to estimate our action rewards, in the next cell, you will instanciate the agent from the NeuralEpsilonGreedyAgent class. You will need to retrieve the time_step_spec and the action_spec from the environment. The reward_network will be the QNetwork you instantiated previously. You can take Adam as optimizer with a LEARNING_RATE of your choice. You will also set the propensity of your agent to explore rather than exploit the action with highest predicted reward through the value of EPSILON. Both the LEARNING_RATE and the EPSILON greediness are hyper-paramaters that can affect the training of the agent very much. End of explanation policy_step = agent.policy.action(step) policy_step.action Explanation: The agent has a policy attribute containing the strategy that the agent will use when confronted to a given context. The agent.policy has anaction method that takes in a TimeStep generated by the environment and containing the context. It then issues a PolicyStep containing the action chosen by the policy given this context: End of explanation # TODO - Instantiate a TFUniformReplayBuffer Explanation: Under the hood, the agent policy uses the QNetwork to predict the values of the different actions (or possible covertype classes in our case). However at this stage the QNetwork has not been trained and its weights are random. This means that the predicted classes are meaningless. To remediate that, the agent has a train method that takes an experience, that is, a triples of a state (or context) $x$, the action taken (or predicted class) $y$, and the obtained reward $r$. It then updates the parameters $\theta$ of the QNetwork $Q_\theta$ with a gradient update so that the predicted reward $Q_\theta(x, y)$ used by the agent to guess $y$ in context $x$ increases if $y$ is the correct class (i.e. reward $1$) and decreases if $y$ has been guessed wrongly. Now training one experience at a time may lead to an unstable training, and the algorithm may have difficulty to converge. To stabilize the training one collects a number of these experiences in a experience replay buffer in a first stage, and then sample batches of these experiences to apply the gradient update to the QNetwork parameters in a second stage, as we do in supervized learning. 
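As a hedged illustration of what such a gradient update does (this is a toy sketch, not the internals of NeuralEpsilonGreedyAgent), the Keras snippet below nudges the predicted reward of the chosen action towards the observed 0/1 reward; the layer sizes and hyper-parameters are made up.

```python
import tensorflow as tf

n_features, n_actions = 54, 7
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(n_actions),  # one predicted reward per covertype class
])
optimizer = tf.keras.optimizers.Adam(1e-3)

x = tf.random.normal((1, n_features))  # one context
action, reward = 3, 1.0                # chosen class and the 0/1 reward observed for it

with tf.GradientTape() as tape:
    q_values = q_net(x)                  # shape (1, n_actions)
    q_chosen = q_values[0, action]
    loss = tf.square(reward - q_chosen)  # pushes Q(x, action) towards the observed reward
grads = tape.gradient(loss, q_net.trainable_variables)
optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
print(float(loss))
```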
Collecting experiences in an experience replay buffer Reinforcement learning algorithms use experience replay buffers to store trajectories of experiences when executing a policy in an environment. During training, replay buffers are queried for a subset of the trajectories to "replay" the agent's experience. Sampling from the replay buffer facilitates data re-use and breaks harmful co-relations between sequential data in RL, although in contextual bandits this isn't absolutely required but still helpful to stabilize the training. Tensorflow Agents defines a number of replay buffer classes all sharing a common interface to store and access the experience data. In this lab, we will use TFUniformReplayBuffer which samples uniformly experience trajectories. Exercise Initialize a TFUniformReplayBuffer using for data_spec the trajectory_spec stored in the agent's policy. Then use a batch_size of 1, which will be the size of the batches sampled from the replay buffer for each gradient descent update. Themax_length argument indicates the maximum number of steps we allow to be stored in the replay buffer from a single episode. Since for CB the eposide size is always 1, you'll set this argument to that. End of explanation # TODO - Instantiate a DynamicStepDriver with the replay buffer as observer observer = [replay_buffer.add_batch] Explanation: Now we have a Replay buffer, but we also need something to fill it with. Often a common practice is to have the agent interact with and collect experiences from the environment, without actually learning from it (that is without updating the parameters of the QNetwork) in a first step. This data-collection loop can be carried out using a DynamicStepDriver, which will 1. feed the TimeStep generated by the environment and containing the context data for the new step and the reward for the previous step to the agent, 1. collect the action generated by the agent policy from the environment context, and then 1. feed that action back to the environment. All that repeated in a loop. The data encountered by the driver at each step is saved in a NamedTuple called Trajectory and broadcast to a set of observers such as replay buffers. This trajectory includes the observation from the environment, the action recommended by the policy, the reward obtained, the type of the current and the next step, etc. In order for the driver to fill the replay buffer with data it needs acess to the add_batch method of the replay buffer. Exercise Instanciate a DynamicStepDriver using the environment we used so far. Note that the agent has two different policies: * the agent.policy which is the exploitation policy that outputs the action with maximal predicted reward. This is the policy that should be deployed inthe production environment. * the agent.collect_policy which is the exploration policy that outputs the maximal reward action only $1 - \epsilon$ of the time, allowing for exploring a wider range of actions. This is the policy that is the most beneficial to use to collect training data, and the one that the DynmicStepDriver needs to use. (Note that both of these policies use the same underlying QNetwork; only the final action choice from the computed $Q$-values is different.) 
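For reference, the two constructors discussed above are used later in this lab roughly as follows; this is a condensed restatement of those later configuration cells (with batch_size and max_length of 1 for the single-step CB episodes), assuming agent and environment are already defined as in this lab, not an additional variant.

```python
from tf_agents.replay_buffers.tf_uniform_replay_buffer import TFUniformReplayBuffer
from tf_agents.drivers.dynamic_step_driver import DynamicStepDriver

replay_buffer = TFUniformReplayBuffer(
    data_spec=agent.policy.trajectory_spec,  # what a single Trajectory looks like
    batch_size=1,                            # environment batch size
    max_length=1,                            # one step per CB episode
)

driver = DynamicStepDriver(
    env=environment,
    policy=agent.collect_policy,             # exploratory policy for data collection
    num_steps=1 * environment.batch_size,
    observers=[replay_buffer.add_batch],     # the driver pushes each Trajectory here
)
```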
End of explanation driver.run() replay_dataset = replay_buffer.as_dataset( sample_batch_size=BATCH_SIZE, num_steps=1, single_deterministic_pass=True, ) Explanation: The driver.run method will then start making the agent interact with the environment, while its sends experience trajectories to the replay buffer. Once done, one can retrieve the collected experience data from the replay buffer by invoking its as_dataset method: End of explanation experience, _ = next(iter(replay_dataset)) loss_info = agent.train(experience) loss_info Explanation: Training the agent now amounts to retrieve (batches of) experience data from the replay_dataset and feed that data to the agent train method: End of explanation BATCH_SIZE = 128 LAYERS = (300, 200, 100, 100, 50, 50) EPSILON = 0.01 LEARNING_RATE = 0.002 TRAINING_LOOPS = 150 STEPS_PER_LOOP = 1 AGENT_ALPHA = 10.0 AGENT_CHECKPOINT_NAME = "agent" STEP_CHECKPOINT_NAME = "step" CHECKPOINT_FILE_PREFIX = "ckpt" TIMESTAMP = time.strftime("%Y%m%d_%H%M%S") ROOT_DIR = f"./contextual_bandit_checkpoints/{TIMESTAMP}" Explanation: Training the agent We are almost ready now to write our contextual bandit agent training loop. Before we do that, let us recap what we have done so far, and configure all the objects anew for the real training. First of all, let us gather all the constants we have defined until this point in the same cell, and let us add a few which will be useful for saving our trained policy: End of explanation covertype_reward_distribution = tfd.Independent( tfd.Deterministic(tf.eye(7)), reinterpreted_batch_ndims=2 ) environment = ClassificationBanditEnvironment( tf_dataset, covertype_reward_distribution, BATCH_SIZE ) network = QNetwork( input_tensor_spec=environment.observation_spec(), action_spec=environment.action_spec(), fc_layer_params=LAYERS, ) agent = NeuralEpsilonGreedyAgent( time_step_spec=environment.time_step_spec(), action_spec=environment.action_spec(), reward_network=network, optimizer=tf.compat.v1.train.AdamOptimizer(LEARNING_RATE), epsilon=EPSILON, ) replay_buffer = TFUniformReplayBuffer( data_spec=agent.policy.trajectory_spec, batch_size=BATCH_SIZE, max_length=STEPS_PER_LOOP, ) Explanation: The next cell instanciates anew the contextual bandit main players (i.e., the environment, the neural network, the agent, and the experience replay buffer) so that we can have a global view of the process: End of explanation optimal_reward_fn = functools.partial( env_util.compute_optimal_reward_with_classification_environment, environment=environment, ) optimal_action_fn = functools.partial( env_util.compute_optimal_action_with_classification_environment, environment=environment, ) step_metric = tf_metrics.EnvironmentSteps() metrics = [ tf_metrics.NumberOfEpisodes(), tf_bandit_metrics.RegretMetric(optimal_reward_fn), tf_bandit_metrics.SuboptimalArmsMetric(optimal_action_fn), tf_metrics.AverageReturnMetric(batch_size=environment.batch_size), ] Explanation: During training it is very useful to collect various metrics on the top of the training loss so that we can assess whether the training was effective or not from the plot of their learning curves. Just like you have metrics such as accuracy or recall in supervised learning, in bandit problems we use the regret metric per episode. To calculate the regret, we need to know what the highest possible expected reward is at every time step. The regret is essentially the difference between the highest reward we could have gotten and the reward we obtained summed up over our experience. 
As training progress, we expect the regret to decrease. To compute the regret, we need to define the optimal_reward_fn using python env_util.compute_optimal_reward_with_classification_environment which can know the optimal reward at each time step from the environment. Another similar metric is the number of times a suboptimal action has been chosen. That requires the definition if the optimal_action_fn in the same way. In the cell below, we collect all these metrics in the metrics list: * RegretMetric - Computes the regret with respect to a baseline (ex: optimal_reward_fn) * SubOptimalArmsMetric - Computes the number of suboptimal arms with respect to a baseline (ex: optimal_action_fn) * NumberOfEpisode - Keeps track of the number of episode so far * AveratgeReturnMetric - Computes the average return so far End of explanation observers = [replay_buffer.add_batch, step_metric] + metrics driver = DynamicStepDriver( env=environment, policy=agent.collect_policy, num_steps=STEPS_PER_LOOP * environment.batch_size, observers=observers, ) Explanation: At each step of the training loop, we will compute the overall value for each of the metrics from the experience generated by the driver. To broadcast that experience data generated by the driver to the metrics, we can include these metrics to the list of observers that the driver broadcasts experience to. So now our observers list will contain the replay buffer add_batch method as well as all the metrics we defined above: End of explanation def restore_and_get_checkpoint_manager(root_dir, agent, metrics, step_metric): Restores from `root_dir` and returns a function that writes checkpoints. trackable_objects = {metric.name: metric for metric in metrics} trackable_objects[AGENT_CHECKPOINT_NAME] = agent trackable_objects[STEP_CHECKPOINT_NAME] = step_metric checkpoint = tf.train.Checkpoint(**trackable_objects) checkpoint_manager = tf.train.CheckpointManager( checkpoint=checkpoint, directory=root_dir, max_to_keep=5 ) latest = checkpoint_manager.latest_checkpoint if latest is not None: print("Restoring checkpoint from %s.", latest) checkpoint.restore(latest) print("Successfully restored to step %s.", step_metric.result()) else: print( "Did not find a pre-existing checkpoint. " "Starting from scratch." ) return checkpoint_manager checkpoint_manager = restore_and_get_checkpoint_manager( ROOT_DIR, agent, metrics, step_metric ) summary_writer = tf.summary.create_file_writer(ROOT_DIR) summary_writer.set_as_default() Explanation: We are almost ready for the training loop! We need a couple of things though to help us save our model and metrics first though before that. Below we provide you with a helper function in order to save your agent and its the metrics, while training the model. For more information on checkpoints and policy savers (which will be used in the training loop below) go here. End of explanation # TODO - Complete the training loop for _ in range(TRAINING_LOOPS): # COMPLETE HERE metric_utils.log_metrics(metrics) for metric in metrics: metric.tf_summaries(train_step=step_metric.result()) checkpoint_manager.save() Explanation: Exercise Now we have all the components ready to start training the model. A step in the training loop follows the following sequence: 1. Run the driver to store experience in the replay buffer 1. Generate a tf.data.Dataset from the replay buffer using its as_dataset method 1. Obtain a batch of experience by iterating once over the replay dataset 1. 
Use agent.train to train the agent with the experience batch and log the loss 1. Clear the replay buffer and log the metrics With that in mind, complete the training loop below: End of explanation !echo tensorboard dev upload --logdir $(cd $ROOT_DIR && pwd) Explanation: Visualizing the learning curves with TensorBoard Now that the model has trained, we want to visualize the learning curve. For that we will use tensorboard, since we have saved each of the metrics as Tensorboard events with the checkpoint manager. Let us upload the Tensorboard logs to tensorboard.dev; to do that: * execute the cell below that will generate a bash command * copy this command, open a terminal in JupyterLab, and paste this command * follow the authentication instructions End of explanation feature, label = iter(tf_dataset).next() Explanation: You should see something like this if all has gone well: Remark: The $x$-axis in the loss graph represents the number of steps in the global training loop, that is, TRAINING_LOOPS, while for the metrics graph the same $x$-axis represent the total number of episode generated, that is, TRAINING_LOOPS * STEPS_PER_LOOP * BATCH_SIZE. Predicting with our trained contextual bandit Now that our model is trained, what if we want to determine which action to take given a new context. For that we start by iterating over our dataset to get a context: End of explanation step = ts.TimeStep( tf.constant(ts.StepType.FIRST, dtype=tf.int32, shape=[1], name="step_type"), tf.constant(0.0, dtype=tf.float32, shape=[1], name="reward"), tf.constant(1.0, dtype=tf.float32, shape=[1], name="discount"), tf.constant(feature, dtype=tf.float32, shape=[1, 54], name="observation"), ) Explanation: Because our trained agent consumes TimeStep, we need to wrap the raw feature using ts.TimeStep, which expects step_type, reward, discount, and observation as input. Since we are in prediction mode now, reward, and discount are irrelant and can be assigned any arbitary values (see here for more details): End of explanation policy_step = agent.policy.action(step) policy_step.action.numpy()[0] Explanation: At last,let us get the recommeded action from our trained agent: End of explanation
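One hedged way the training-loop exercise above could be filled in is sketched below. It only strings together calls that appear elsewhere in this lab (driver.run, as_dataset, agent.train, the metric logging); the replay_buffer.clear() call is assumed to implement the "clear the replay buffer" step.

```python
# Sketch only; assumes the objects defined earlier in this lab
# (driver, replay_buffer, agent, metrics, step_metric, checkpoint_manager, BATCH_SIZE).
for _ in range(TRAINING_LOOPS):
    driver.run()                               # 1. collect experience into the buffer
    batches = replay_buffer.as_dataset(        # 2. expose the buffer as a dataset
        sample_batch_size=BATCH_SIZE,
        num_steps=1,
        single_deterministic_pass=True,
    )
    experience, _ = next(iter(batches))        # 3. one batch of experience
    loss_info = agent.train(experience)        # 4. gradient update on the QNetwork
    replay_buffer.clear()                      # 5. start the next loop from scratch
    metric_utils.log_metrics(metrics)
    for metric in metrics:
        metric.tf_summaries(train_step=step_metric.result())
    checkpoint_manager.save()
```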
12,859
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Sklearn Gradient Boosting Regressor - Training a Regression Model
Python Code:: from sklearn.ensemble import GradientBoostingRegressor from sklearn.metrics import mean_squared_error, mean_absolute_error, max_error, explained_variance_score, mean_absolute_percentage_error # initialise & fit Gradient Boosting Regressor model = GradientBoostingRegressor(loss='squared_error', n_estimators=100, max_depth=None, subsample=0.8, random_state=101) model.fit(X_train, y_train) # create dictionary that contains feature importance feature_importance= dict(zip(X_train.columns, model.feature_importances_)) print('Feature Importance',feature_importance) # make prediction for test data & evaluate performance y_pred = model.predict(X_test) print('RMSE:',mean_squared_error(y_test, y_pred, squared = False)) print('MAE:',mean_absolute_error(y_test, y_pred)) print('MAPE:',mean_absolute_percentage_error(y_test, y_pred)) print('Max Error:',max_error(y_test, y_pred)) print('Explained Variance Score:',explained_variance_score(y_test, y_pred))
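The snippet above assumes that X_train, X_test, y_train and y_test already exist. A hedged way to produce such splits for a quick end-to-end test, using scikit-learn's synthetic data helpers (not part of the original problem statement), is:

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Synthetic regression data purely so the snippet above can be exercised end to end.
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=101)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])  # named columns for feature_importances_

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=101)
```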
12,860
Given the following text description, write Python code to implement the functionality described below step by step Description: d3viz Step1: Like Theano’s printing module, d3viz requires graphviz binary to be available. Overview d3viz extends Theano’s printing module to interactively visualize compute graphs. Instead of creating a static picture, it creates an HTML file, which can be opened with current web-browsers. d3viz allows to zoom to different regions and to move graphs via drag and drop, to position nodes both manually and automatically, to retrieve additional information about nodes and edges such as their data type or definition in the source code, to edit node labels, to visualizing profiling information, and to explore nested graphs such as OpFromGraph nodes. Step2: As an example, consider the following multilayer perceptron with one hidden layer and a softmax output layer. Step3: The function predict outputs the probability of 10 classes. You can visualize it with pydotprint as follows Step4: To visualize it interactively, import the d3viz function from the d3viz module, which can be called as before Step5: Open visualization! When you open the output file mlp.html in your web-browser, you will see an interactive visualization of the compute graph. You can move the whole graph or single nodes via drag and drop, and zoom via the mouse wheel. When you move the mouse cursor over a node, a window will pop up that displays detailed information about the node, such as its data type or definition in the source code. When you left-click on a node and select Edit, you can change the predefined node label. If you are dealing with a complex graph with many nodes, the default node layout may not be perfect. In this case, you can press the Release node button in the top-left corner to automatically arrange nodes. To reset nodes to their default position, press the Reset nodes button. You can also display the interactive graph inline in IPython using IPython.display.IFrame Step6: Currently if you use display.IFrame you still have to create a file, and this file can't be outside notebooks root (e.g. usually it can't be in /tmp/). Profiling Theano allows function profiling via the profile=True flag. After at least one function call, the compute time of each node can be printed in text form with debugprint. However, analyzing complex graphs in this way can be cumbersome. d3viz can visualize the same timing information graphically, and hence help to spot bottlenecks in the compute graph more easily! To begin with, we will redefine the predict function, this time by using profile=True flag. Afterwards, we capture the runtime on random data Step7: Open visualization! When you open the HTML file in your browser, you will find an additional Toggle profile colors button in the menu bar. By clicking on it, nodes will be colored by their compute time, where red corresponds to a high compute time. You can read out the exact timing information of a node by moving the cursor over it. Different output formats Internally, d3viz represents a compute graph in the Graphviz DOT language, using the pydot package, and defines a front-end based on the d3.js library to visualize it. However, any other Graphviz front-end can be used, which allows to export graphs to different formats. Step8: Here, we used the PyDotFormatter class to convert the compute graph into a pydot graph, and created a PNG and PDF file. You can find all output formats supported by Graphviz here. 
OpFromGraph nodes An OpFromGraph node defines a new operation, which can be called with different inputs at different places in the compute graph. Each OpFromGraph node defines a nested graph, which will be visualized accordingly by d3viz. Step9: Open visualization! In this example, an operation with three inputs is defined, which is used to build a function that calls this operations twice, each time with different input arguments. In the d3viz visualization, you will find two OpFromGraph nodes, which correspond to the two OpFromGraph calls. When you double click on one of them, the nested graph appears with the correct mapping of its input arguments. You can move it around by drag and drop in the shaded area, and close it again by double-click. An OpFromGraph operation can be composed of further OpFromGraph operations, which will be visualized as nested graphs as you can see in the following example.
Python Code: !pip install pydot-ng Explanation: d3viz: Interactive visualization of Theano compute graphs Requirements d3viz requires the pydot package. pydot-ng fork is better maintained, and it works both in Python 2.x and 3.x. Install it with pip:: End of explanation import theano as th import theano.tensor as T import numpy as np Explanation: Like Theano’s printing module, d3viz requires graphviz binary to be available. Overview d3viz extends Theano’s printing module to interactively visualize compute graphs. Instead of creating a static picture, it creates an HTML file, which can be opened with current web-browsers. d3viz allows to zoom to different regions and to move graphs via drag and drop, to position nodes both manually and automatically, to retrieve additional information about nodes and edges such as their data type or definition in the source code, to edit node labels, to visualizing profiling information, and to explore nested graphs such as OpFromGraph nodes. End of explanation ninputs = 1000 nfeatures = 100 noutputs = 10 nhiddens = 50 rng = np.random.RandomState(0) x = T.dmatrix('x') wh = th.shared(rng.normal(0, 1, (nfeatures, nhiddens)), borrow=True) bh = th.shared(np.zeros(nhiddens), borrow=True) h = T.nnet.sigmoid(T.dot(x, wh) + bh) wy = th.shared(rng.normal(0, 1, (nhiddens, noutputs))) by = th.shared(np.zeros(noutputs), borrow=True) y = T.nnet.softmax(T.dot(h, wy) + by) predict = th.function([x], y) Explanation: As an example, consider the following multilayer perceptron with one hidden layer and a softmax output layer. End of explanation from theano.printing import pydotprint import os if not os.path.exists('examples'): os.makedirs('examples') pydotprint(predict, 'examples/mlp.png') from IPython.display import Image Image('examples/mlp.png', width='80%') Explanation: The function predict outputs the probability of 10 classes. You can visualize it with pydotprint as follows: End of explanation import theano.d3viz as d3v d3v.d3viz(predict, 'examples/mlp.html') Explanation: To visualize it interactively, import the d3viz function from the d3viz module, which can be called as before: End of explanation from IPython.display import IFrame d3v.d3viz(predict, 'examples/mlp.html') IFrame('examples/mlp.html', width=700, height=500) Explanation: Open visualization! When you open the output file mlp.html in your web-browser, you will see an interactive visualization of the compute graph. You can move the whole graph or single nodes via drag and drop, and zoom via the mouse wheel. When you move the mouse cursor over a node, a window will pop up that displays detailed information about the node, such as its data type or definition in the source code. When you left-click on a node and select Edit, you can change the predefined node label. If you are dealing with a complex graph with many nodes, the default node layout may not be perfect. In this case, you can press the Release node button in the top-left corner to automatically arrange nodes. To reset nodes to their default position, press the Reset nodes button. You can also display the interactive graph inline in IPython using IPython.display.IFrame: End of explanation predict_profiled = th.function([x], y, profile=True) x_val = rng.normal(0, 1, (ninputs, nfeatures)) y_val = predict_profiled(x_val) d3v.d3viz(predict_profiled, 'examples/mlp2.html') Explanation: Currently if you use display.IFrame you still have to create a file, and this file can't be outside notebooks root (e.g. usually it can't be in /tmp/). 
Profiling Theano allows function profiling via the profile=True flag. After at least one function call, the compute time of each node can be printed in text form with debugprint. However, analyzing complex graphs in this way can be cumbersome. d3viz can visualize the same timing information graphically, and hence help to spot bottlenecks in the compute graph more easily! To begin with, we will redefine the predict function, this time by using profile=True flag. Afterwards, we capture the runtime on random data: End of explanation formatter = d3v.formatting.PyDotFormatter() pydot_graph = formatter(predict_profiled) pydot_graph.write_png('examples/mlp2.png'); pydot_graph.write_pdf('examples/mlp2.pdf'); Image('./examples/mlp2.png') Explanation: Open visualization! When you open the HTML file in your browser, you will find an additional Toggle profile colors button in the menu bar. By clicking on it, nodes will be colored by their compute time, where red corresponds to a high compute time. You can read out the exact timing information of a node by moving the cursor over it. Different output formats Internally, d3viz represents a compute graph in the Graphviz DOT language, using the pydot package, and defines a front-end based on the d3.js library to visualize it. However, any other Graphviz front-end can be used, which allows to export graphs to different formats. End of explanation x, y, z = T.scalars('xyz') e = T.nnet.sigmoid((x + y + z)**2) op = th.OpFromGraph([x, y, z], [e]) e2 = op(x, y, z) + op(z, y, x) f = th.function([x, y, z], e2) d3v.d3viz(f, 'examples/ofg.html') Explanation: Here, we used the PyDotFormatter class to convert the compute graph into a pydot graph, and created a PNG and PDF file. You can find all output formats supported by Graphviz here. OpFromGraph nodes An OpFromGraph node defines a new operation, which can be called with different inputs at different places in the compute graph. Each OpFromGraph node defines a nested graph, which will be visualized accordingly by d3viz. End of explanation x, y, z = T.scalars('xyz') e = x * y op = th.OpFromGraph([x, y], [e]) e2 = op(x, y) + z op2 = th.OpFromGraph([x, y, z], [e2]) e3 = op2(x, y, z) + z f = th.function([x, y, z], [e3]) d3v.d3viz(f, 'examples/ofg2.html') Explanation: Open visualization! In this example, an operation with three inputs is defined, which is used to build a function that calls this operations twice, each time with different input arguments. In the d3viz visualization, you will find two OpFromGraph nodes, which correspond to the two OpFromGraph calls. When you double click on one of them, the nested graph appears with the correct mapping of its input arguments. You can move it around by drag and drop in the shaded area, and close it again by double-click. An OpFromGraph operation can be composed of further OpFromGraph operations, which will be visualized as nested graphs as you can see in the following example. End of explanation
12,861
Given the following text description, write Python code to implement the functionality described below step by step Description: <center> <h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1> <h2> Linear Systems of Equations </h2> <h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2> <h2> Version Step1: <div id='intro' /> Introduction In our last Jupyter Notebook we learned how to solve 1D equations. Now, we'll go to the next level and will learn how to solve not just <i>one</i> equation, but a <i>system</i> of linear equations. This is a set of $n$ equations involving $n$ variables wherein all the equations must be satisfied at the same time. You probably know how to solve small 2D systems with methods such as substitution and reduction, but in practical real-life situations it's very likely that you'll find problems of bigger dimensions. As usual, we'll present some useful methods for solving systems of linear equations below. <div id='DM' /> Direct Methods Firstly, we will study direct methods. They compute the analytic solution of the system (from here comes the name direct) limited only by the loss of numerical precision, because of the arithmetic operations performed by the computer. Their counterpart is the iterative methods, which calculate an approximate solution that evolves iteratively converging to the real solution. <div id='lu' /> LU decomposition Given the matrix $A \in \mathbb{R}^{n \times n}$ square and non singular, the main goal of this method involves finding a decomposition like $A = L U$ where $L,U \in \mathbb{R}^{n \times n}$ are lower and upper triangular matrices respectively. The algorithm to perform this decomposition is basically a modified version of Gaussian Elimination. It basically iterates through the first $n-1$ columns, making $0$ all the entries below the main diagonal. This is accomplished by performing row operations. Step3: Once the decomposition is done, solving a linear system like $A x = b$ is straightforward Step4: Let's now try our implementations. We begin by creating a random 100$\times$100 linear system Step5: and then we compute the solution with our LU solver, and aditionally with the NumPy solver which computes the solution using LAPACK routines. Step6: in order to compare these huge vectors, we use the Euclidean metric as follows Step7: which is a very good result! This method has two important facts to be noted Step8: The procedure to solve the system $Ax=b$ remains almost the same. We have to add the efect of the permutation matrix $P$ Step9: Let's test this new method against the LU and NumPy solvers Step11: Here are some questions about PALU Step12: Given a symmetric positive-definite matrix $A \in \mathbb{R}^{n \times n}$, the Cholesky decomposition is of the form $A =R^T R$, with $R$ being an upper triangular matrix. This method takes advantage of the properties of symmetric matrices, reaching approximately twice the efficiency of LU. 
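For reference, the back- and forward-substitution steps used by all the solvers in this notebook can be written out explicitly (a restatement of the standard formulas, matching the solve_triangular routine defined in the code below): for an upper triangular $U$,
$$x_n = \frac{b_n}{u_{nn}}, \qquad x_i = \frac{1}{u_{ii}}\left(b_i - \sum_{j=i+1}^{n} u_{ij}\, x_j\right), \quad i = n-1,\dots,1,$$
and symmetrically for a lower triangular $L$,
$$x_1 = \frac{b_1}{l_{11}}, \qquad x_i = \frac{1}{l_{ii}}\left(b_i - \sum_{j=1}^{i-1} l_{ij}\, x_j\right), \quad i = 2,\dots,n.$$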
Step13: The solve stage remains the same as LU Step14: Now we test our implementation, comparing time execution with LU and PALU on two different linear systems Step17: <div id='im' /> Iterative Methods Step18: As before we will create a linear system $A x = b$, with $A$ as a diagonally dominant matrix, which is a sufficient condition for the methods we will study in this section converge Step19: and find the solution $x$ through np.linalg.solve to use it as the reference solution- Step21: Jacobi Step22: Now let's resolve the same linear system with Jacobi method! Step23: Gauss Seidel Step24: Now let's resolve the same linear system with Gauss-Seidel method! Step25: Here are some questions about Gauss-Seidel Step26: SOR Step27: Now let's resolve the same linear system with Jacobi method! Step28: How can we choose a good value of $\omega$? Well there are some methods you could search, but for now we will try a naive way, i.e, computing the solution for a range $\omega \in [1,1.3]$ as follows Step29: as you can see, we compute the SOR solution with 5 iterations for each $\omega$ on the given range. Step30: Here are some questions about SOR Step31: <div id='ex' /> Exercises Now that you know how to solve systems of linear equations problem with these methods, let's try to answer a few questions! $a)$ Find the values of $\alpha$ that make possible to do a LU descomposition of the following matrix Step32: (1/Kappa(A))||b-A x_a||/||b|| <= ||x-x_a||/||x||<= Kappa(A)||b-A x_a||/||b|| Step33: <div id='acknowledgements' /> Acknowledgements Material created by professor Claudio Torres ([email protected]) and assistants
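Two formulas referenced in this prompt, restated in clean notation (standard results). The fixed-point updates behind the three iterative methods, with $A = L + D + U$ split into its strictly lower, diagonal and strictly upper parts, are
$$x^{(k+1)} = D^{-1}\bigl(b - (L+U)\,x^{(k)}\bigr) \ \text{(Jacobi)}, \qquad x^{(k+1)} = (L+D)^{-1}\bigl(b - U\,x^{(k)}\bigr) \ \text{(Gauss-Seidel)},$$
$$x^{(k+1)} = \Bigl(L + \tfrac{1}{\omega}D\Bigr)^{-1}\Bigl(b - \bigl(U + \tfrac{\omega-1}{\omega}D\bigr)x^{(k)}\Bigr) \ \text{(SOR)},$$
and the residual-based error bound quoted in the Hilbert-matrix experiment is
$$\frac{1}{\kappa(A)}\,\frac{\lVert b - A x_a\rVert}{\lVert b\rVert} \;\le\; \frac{\lVert x - x_a\rVert}{\lVert x\rVert} \;\le\; \kappa(A)\,\frac{\lVert b - A x_a\rVert}{\lVert b\rVert},$$
where $x$ is the exact solution and $x_a$ the computed approximation.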
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline Explanation: <center> <h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1> <h2> Linear Systems of Equations </h2> <h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2> <h2> Version: 1.12</h2> </center> Table of Contents Introduction Direct Methods LU Palu Cholesky Iterative Methods Convergence Analysis Exercises Acknowledgements End of explanation def lu_decomp(A, show=False): N,_ = A.shape U = np.copy(A) L = np.identity(N) if show: print('Initial matrices') print('L = '); print(np.array_str(L, precision=2, suppress_small=True)) print('U = '); print(np.array_str(U, precision=2, suppress_small=True)) print('----------------------------------------') #iterating through columns for j in range(N-1): #iterating through rows for i in range(j+1,N): L[i,j] = U[i,j]/U[j,j] U[i] -= L[i,j]*U[j] if show: print('L = '); print(np.array_str(L, precision=2, suppress_small=True)) print('U = '); print(np.array_str(U, precision=2, suppress_small=True)) print('----------------------------------------') return L,U Explanation: <div id='intro' /> Introduction In our last Jupyter Notebook we learned how to solve 1D equations. Now, we'll go to the next level and will learn how to solve not just <i>one</i> equation, but a <i>system</i> of linear equations. This is a set of $n$ equations involving $n$ variables wherein all the equations must be satisfied at the same time. You probably know how to solve small 2D systems with methods such as substitution and reduction, but in practical real-life situations it's very likely that you'll find problems of bigger dimensions. As usual, we'll present some useful methods for solving systems of linear equations below. <div id='DM' /> Direct Methods Firstly, we will study direct methods. They compute the analytic solution of the system (from here comes the name direct) limited only by the loss of numerical precision, because of the arithmetic operations performed by the computer. Their counterpart is the iterative methods, which calculate an approximate solution that evolves iteratively converging to the real solution. <div id='lu' /> LU decomposition Given the matrix $A \in \mathbb{R}^{n \times n}$ square and non singular, the main goal of this method involves finding a decomposition like $A = L U$ where $L,U \in \mathbb{R}^{n \times n}$ are lower and upper triangular matrices respectively. The algorithm to perform this decomposition is basically a modified version of Gaussian Elimination. It basically iterates through the first $n-1$ columns, making $0$ all the entries below the main diagonal. This is accomplished by performing row operations. 
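As a quick sanity check of the row-operation description above, here is a $2\times 2$ example worked by hand (plain arithmetic, independent of the code):
$$A = \begin{pmatrix} 2 & 1 \\ 4 & 5 \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}}_{L} \underbrace{\begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}}_{U},$$
since the only multiplier is $\ell_{21} = 4/2 = 2$ and the row operation $R_2 \leftarrow R_2 - 2R_1$ leaves the second row $(0,\ 3)$; this is the factorization lu_decomp would produce for this matrix.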
End of explanation Solves a linear system A x = b, where A is a triangular (upper or lower) matrix def solve_triangular(A, b, upper=True): n = b.shape[0] x = np.zeros_like(b) if upper==True: #perform back-substitution x[-1] = (1./A[-1,-1]) * b[-1] for i in range(n-2, -1, -1): x[i] = (1./A[i,i]) * (b[i] - np.sum(A[i,i+1:] * x[i+1:])) else: #perform forward-substitution x[0] = (1./A[0,0]) * b[0] for i in range(1,n): x[i] = (1./A[i,i]) * (b[i] - np.sum(A[i,:i] * x[:i])) return x def solve_lu(A, b, show=False): L,U = lu_decomp(A, show) # L.c = b with c = U.x c = solve_triangular(L, b, upper=False) x = solve_triangular(U, c) return x Explanation: Once the decomposition is done, solving a linear system like $A x = b$ is straightforward: $$A x = b \rightarrow L U x = b \ \ \text{ if we set } \ \ U x = c \rightarrow L c = b \ \ \text{ (solve for c) } \ \rightarrow U x = c$$ and as you might know, solving lower and upper triangular systems can be easily performed by back-substitution and forward-subsitution respectively. End of explanation A = np.random.random((3,3)) b = np.ones(3) Explanation: Let's now try our implementations. We begin by creating a random 100$\times$100 linear system: End of explanation lu_sol = solve_lu(A,b, show=True) np_sol = np.linalg.solve(A,b) Explanation: and then we compute the solution with our LU solver, and aditionally with the NumPy solver which computes the solution using LAPACK routines. End of explanation np.linalg.norm(lu_sol - np_sol) Explanation: in order to compare these huge vectors, we use the Euclidean metric as follows: End of explanation #permutation between rows i and j on matrix A def row_perm(A, i, j): tmp = np.copy(A[i]) A[i] = A[j] A[j] = tmp def palu_decomp(A, show=False): N,_ = A.shape P = np.identity(N) L = np.zeros((N,N)) U = np.copy(A) if show: print('Initial matrices') print('P = '); print(np.array_str(P, precision=2, suppress_small=True)) print('L = '); print(np.array_str(L, precision=2, suppress_small=True)) print('U = '); print(np.array_str(U, precision=2, suppress_small=True)) print('----------------------------------------') #iterating through columns for j in range(N-1): #determine the new pivot p_index = np.argmax(np.abs(U[j:,j])) if p_index != 0: row_perm(P, j, j+p_index) row_perm(U, j, j+p_index) row_perm(L, j, j+p_index) if show: print('A permutation has been made') print('P = '); print(np.array_str(P, precision=2, suppress_small=True)) print('L = '); print(np.array_str(L, precision=2, suppress_small=True)) print('U = '); print(np.array_str(U, precision=2, suppress_small=True)) print('----------------------------------------') #iterating through rows for i in range(j+1,N): L[i,j] = U[i,j]/U[j,j] U[i] -= L[i,j]*U[j] if show: print('P = '); print(np.array_str(P, precision=2, suppress_small=True)) print('L = '); print(np.array_str(L, precision=2, suppress_small=True)) print('U = '); print(np.array_str(U, precision=2, suppress_small=True)) print('----------------------------------------') np.fill_diagonal(L,1) return P,L,U Explanation: which is a very good result! This method has two important facts to be noted: Computing the LU decomposition requires $2n^3/3$ floating point operations. Can you check that? When computing the LU decomposition you can see the instruction L[i,j] = U[i,j]/U[j,j]. Here we divide an entry below the main diagonal by the pivot value. What happens if the pivot equals 0? How can we prevent that? Answer: PALU. 
<div id='palu' /> PALU decomposition As you might've noted previously, LU has a problem when a pivot has the value of $0$. To handle this problem, we add row permutations to the original LU algorithm. The procedure is as follows: When visiting the row $j$, search for $\max(|a_{j,j}|,\ |a_{j+1,j}|,\ \ldots,\ |a_{N-1,j}|,\ |a_{N,j}|)$ (the maximum between the pivot and the entries below it). If such maximum is $|a_{j,k}| \neq |a_{j,j}|$, permutate rows $i$ and $k$ making $a_{j,k}$ the new pivot. To keep track of all the permutations performed, we use the permutation matrix $P$. It's inicially an identity matrix which permutes its rows in the same way the algorithm does on the resulting matrix. End of explanation def solve_palu(A, b, show=False): P,L,U = palu_decomp(A, show) #A.x = b -> P.A.x = P.b = b' b = np.dot(P,b) # L.c = b' with c = U.x c = solve_triangular(L, b, upper=False) x = solve_triangular(U, c) return x Explanation: The procedure to solve the system $Ax=b$ remains almost the same. We have to add the efect of the permutation matrix $P$: $$A x = b \rightarrow P A x = P b \rightarrow L U x = b' \ \ \text{ if we set } \ \ U x = c \rightarrow L c = b' \ \ \text{ (solve for c) } \ \rightarrow U x = c$$ End of explanation palu_sol = solve_palu(A, b, show=True) np.linalg.norm(palu_sol - lu_sol) np.linalg.norm(palu_sol - np_sol) Explanation: Let's test this new method against the LU and NumPy solvers End of explanation Randomly generates an nxn symmetric positive- definite matrix A. def generate_spd_matrix(n, flag=True): if flag: A = np.random.random((n,n)) # Constructing symmetry A += A.T # A = np.dot(A.T,A) # Another way #symmetric+diagonally dominant -> symmetric positive-definite deltas = 0.1*np.random.random(n) row_sum = A.sum(axis=1)-np.diag(A) np.fill_diagonal(A, row_sum+deltas) else: B = np.random.random((n,n)) # A way to make sure the quadratic form is greater or equal to zero: # this means x^T*B^T\B*x >= ||B*x||, but if B is singular, it could be zero. A = np.dot(B.T,B) # To avoid a being singular, we just add a positive diagonal matrix A = A + np.eye(n) return A Explanation: Here are some questions about PALU: 1. How much computational complexity has been added to the original $2n^3/3$ of LU? 2. Clearly PALU is more robust than LU, but given a non sigular matrix $A$ will it always be possible to perform the PALU decomposition? <div id='cholesky' /> Cholesky This is another direct method only applicable to symmetric positive-definite matrices. In order to try this algorithm we have to create this kind of matrices. The next function generates random symmetric positive-definite matrices. End of explanation def cholesky_decomp(A, show=False): N,_ = A.shape A = np.copy(A) R = np.zeros((N,N)) if show: print('Initial matrix') print('A = '); print(np.array_str(A, precision=2, suppress_small=True)) print('R = '); print(np.array_str(R, precision=2, suppress_small=True)) print('----------------------------------------') for i in range(N): R[i,i] = np.sqrt(A[i,i]) u = (1./R[i,i])*A[i,i+1:] R[i,i+1:] = u A[i+1:,i+1:] -= np.outer(u,u) if show: print('A = '); print(np.array_str(A, precision=2, suppress_small=True)) print('R = '); print(np.array_str(R, precision=2, suppress_small=True)) print('----------------------------------------') return R Explanation: Given a symmetric positive-definite matrix $A \in \mathbb{R}^{n \times n}$, the Cholesky decomposition is of the form $A =R^T R$, with $R$ being an upper triangular matrix. 
This method takes advantage of the properties of symmetric matrices, reaching approximately twice the efficiency of LU. End of explanation def solve_cholesky(A, b, show=False): R = cholesky_decomp(A, show) #R^T.R.x = b -> R^T.c = b with R.x = c c = solve_triangular(R.T, b, upper=False) x = solve_triangular(R, c) return x Explanation: The solve stage remains the same as LU: End of explanation A = generate_spd_matrix(3) b = np.ones(3) b=np.array([4,2,0]) solve_cholesky(A, b, show=True) A = generate_spd_matrix(100) b = np.ones(100) %timeit solve_cholesky(A, b) %timeit solve_lu(A, b) %timeit solve_palu(A, b) A = generate_spd_matrix(1000) b = np.ones(1000) %timeit solve_cholesky(A, b) %timeit solve_lu(A, b) %timeit solve_palu(A, b) Explanation: Now we test our implementation, comparing time execution with LU and PALU on two different linear systems End of explanation Randomly generates an nxn strictly diagonally dominant matrix A. def generate_dd_matrix(n): A = np.random.random((n,n)) deltas = 0.1*np.random.random(n) row_sum = A.sum(axis=1)-np.diag(A) np.fill_diagonal(A, row_sum+deltas) return A Computes relative error between each row on X matrix and y vector. def error(X, y): D = X-y err = np.linalg.norm(D, axis=1, ord=np.inf) return err Explanation: <div id='im' /> Iterative Methods End of explanation A = np.array([[3, -1, 0, 0, 0, 0.5],[-1, 3, -1, 0, 0.5, 0],[0, -1, 3, -1, 0, 0],[0, 0, -1, 3, -1, 0], [0, 0.5, 0, -1, 3, -1],[0.5, 0, 0, 0, -1, 3]]) b = np.array([2.5, 1.5, 1., 1., 1.5, 2.5]) print ('A='); print (A) print ('b='); print (b) Explanation: As before we will create a linear system $A x = b$, with $A$ as a diagonally dominant matrix, which is a sufficient condition for the methods we will study in this section converge End of explanation np_sol = np.linalg.solve(A,b) Explanation: and find the solution $x$ through np.linalg.solve to use it as the reference solution- End of explanation Iterative methods implementations returns an array X with the the solutions at each iteration def jacobi(A, b, n_iter=50): n = A.shape[0] #array with solutions X = np.empty((n_iter, n)) #initial guess X[0] = np.zeros(n) #submatrices D = np.diag(A) Dinv = D**-1 R = A - np.diag(D) # R = (L+U) for i in range(1, n_iter): # X[i] = Dinv*(b - np.dot(R, X[i-1])) # v1.12 ri = b - np.dot(A, X[i-1]) X[i] = X[i-1]+Dinv*ri return X def jacobi_M(A): L = np.tril(A,-1) U = np.triu(A,1) D = np.diag(np.diag(A)) M = -np.dot(np.linalg.inv(D),L+U) return M # \mathbf{x}_{n+1}=M\,\mathbf{x}_{n}+\mathbf{c} Explanation: Jacobi End of explanation jac_sol = jacobi(A,b, n_iter=50) jac_err = error(jac_sol, np_sol) it = np.linspace(1, 50, 50) plt.figure(figsize=(12,6)) plt.semilogy(it, jac_err, marker='o', linestyle='--', color='b') plt.grid(True) plt.xlabel('Iterations') plt.ylabel('Error') plt.title('Infinity norm error for Jacobi method') plt.show() Mj = jacobi_M(A) print(np.linalg.norm(Mj)) np.linalg.eigvals(Mj) np.abs(np.linalg.eigvals(Mj)) np.max(np.abs(np.linalg.eigvals(Mj))) Explanation: Now let's resolve the same linear system with Jacobi method! 
End of explanation def gauss_seidel(A, b, n_iter=50): n = A.shape[0] #array with solutions X = np.empty((n_iter, n)) #initial guess X[0] = np.zeros(n) #submatrices R = np.tril(A) #R=(L+D) U = A-R for i in range(1, n_iter): #X[i] = solve_triangular(R, b-np.dot(U, X[i-1]), upper=False) # v1.11 X[i] = X[i-1]+solve_triangular(R, b-np.dot(A, X[i-1]), upper=False) return X def gauss_seidel_M(A): L = np.tril(A,-1) U = np.triu(A,1) D = np.diag(np.diag(A)) M = -np.dot(np.linalg.inv(L+D),U) return M Explanation: Gauss Seidel End of explanation gauss_sol = gauss_seidel(A,b) gauss_err = error(gauss_sol, np_sol) plt.figure(figsize=(12,6)) plt.semilogy(it, gauss_err, marker='o', linestyle='--', color='r') plt.grid(True) plt.xlabel('Iterations') plt.ylabel('Error') plt.title('Infinity norm error for Gauss method') plt.show() Explanation: Now let's resolve the same linear system with Gauss-Seidel method! End of explanation Mgs = gauss_seidel_M(A) print(np.linalg.norm(Mgs)) np.max(np.abs(np.linalg.eigvals(Mgs))) Explanation: Here are some questions about Gauss-Seidel: - Can you explain what the differences between this and Jacobi method are? - Why do we use solve_triangular instead of np.linalg.solve or something similar? End of explanation def sor(A, b, w=1.05, n_iter=50): n = A.shape[0] #array with solutions X = np.empty((n_iter, n)) #initial guess X[0] = np.zeros(n) #submatrices R = np.tril(A) #R=(L+D) U = A-R # v1.11 L = np.tril(A,-1) D = np.diag(np.diag(A)) M = L+D/w for i in range(1, n_iter): #X_i = solve_triangular(R, b-np.dot(U, X[i-1]), upper=False) #X[i] = w*X_i + (1-w)*X[i-1] # v1.11 X[i] = X[i-1]+solve_triangular(M, b-np.dot(A, X[i-1]), upper=False) return X def sor_M(A,w=1.05): L = np.tril(A,-1) U = np.triu(A,1) D = np.diag(np.diag(A)) M = np.dot(np.linalg.inv(w*L + D),((1-w)*D -w*U)) return M Explanation: SOR End of explanation sor_sol = sor(A, b, w=1.15) sor_err = error(sor_sol, np_sol) plt.figure(figsize=(12,6)) plt.semilogy(it, sor_err, marker='o', linestyle='--', color='g') plt.grid(True) plt.xlabel('Iterations') plt.ylabel('Error') plt.title('Infinity norm error for SOR method') plt.show() Msor = sor_M(A) print(np.linalg.norm(Msor)) np.max(np.abs(np.linalg.eigvals(Msor))) Explanation: Now let's resolve the same linear system with Jacobi method! End of explanation n = 30 #width of subdivisions sor_solutions = list() for w in np.linspace(1., 1.3, n): sor_solutions.append(sor(A, b, w, n_iter=5)[-1]) np.asarray(sor_solutions) #now compute error solutions with each w sor_errors = error(sor_solutions, np_sol) w = np.linspace(1., 1.3, n) Explanation: How can we choose a good value of $\omega$? Well there are some methods you could search, but for now we will try a naive way, i.e, computing the solution for a range $\omega \in [1,1.3]$ as follows: End of explanation plt.figure(figsize=(12,6)) plt.semilogy(w, sor_errors, marker='o', linestyle='--', color='g') plt.grid(True) plt.xlabel('w') plt.ylabel('Error') plt.title('Infinity norm error after 5 steps of SOR as a function of w') plt.show() Explanation: as you can see, we compute the SOR solution with 5 iterations for each $\omega$ on the given range. 
End of explanation plt.figure(figsize=(12,6)) plt.semilogy(it, jac_err, marker='o', linestyle='--', color='b', label='Jacobi') plt.semilogy(it, gauss_err, marker='o', linestyle='--', color='r', label='Gauss-Seidel') plt.semilogy(it, sor_err, marker='o', linestyle='--', color='g', label='SOR') plt.grid(True) plt.xlabel('Iterations') plt.ylabel('Error') plt.title('Infinity norm error for all methods') plt.legend(loc=0) plt.show() Explanation: Here are some questions about SOR: - Why can averaging the current solution with the Gauss-Seidel solution improve convergence? - Why do we use $\omega > 1$ and not $\omega < 1$? - Could you describe a method to find the best value of $\omega$ (the one which optimizes convergence)? - Would it be a better option to re-compute $\omega$ at each iteration? <div id='ca' /> Convergence Analysis Let's see convergence plots all together End of explanation from scipy.linalg import hilbert N=20 errors=np.zeros(N+1) kappas=np.zeros(N+1) my_range=np.arange(5,N+1) for n in my_range: A=hilbert(n) x_exact=np.ones(n) b=np.dot(A,x_exact) x=np.linalg.solve(A,b) errors[n]=np.linalg.norm(x-x_exact,ord=np.inf) kappas[n]=np.linalg.cond(A,2) plt.figure(figsize=(12,6)) plt.semilogy(my_range, 10.**(-16+np.log10(kappas[my_range])), marker='o', linestyle='--', color='b',label='Estimated forward error') plt.semilogy(my_range, errors[my_range], marker='o', linestyle='--', color='r',label='Forward error') plt.grid(True) plt.xlabel('n') #plt.ylabel('$\kappa$ and errors') plt.title('') plt.legend(loc=0) plt.show() Explanation: <div id='ex' /> Exercises Now that you know how to solve systems of linear equations problem with these methods, let's try to answer a few questions! $a)$ Find the values of $\alpha$ that make possible to do a LU descomposition of the following matrix: $$ \begin{bmatrix} \alpha & 2 \[0.3em] 1 & \alpha \end{bmatrix} $$ $b)$- Let $A$ be the following matrix: $$ A = \begin{bmatrix} 2 & 4 & 2 \[0.3em] -1 & 1 & 2 \[0.3em] -1 & -3 & -1 \end{bmatrix} $$ Find the PALU descomposition of the matrix $A$. Solve the system of equations $Ax = [1 , \frac{1}{2}, \frac{1}{3}]^T$. $c)$ Considering this matrix: $$ \begin{bmatrix} 1 & 1 & 0 \[0.3em] 1 & 5 & 2 \[0.3em] 0 & 2 & 3 \end{bmatrix} $$ Find the LU descomposition. Find the Cholesky descomposition. Compare the efficiency of both methods. $d)$ Use Jacobi, Gauss Seidel, and SOR to solve the following system of equations (number of iterations = 2): $$2x + y = 3$$ $$x + 2y = 2$$ Which is the best method to solve this problem (with better results)? $e)$ Explain the pros and cons of using iterative methods instead of the direct ones. Extra: Hilbert Matrix End of explanation n=200 A=hilbert(n) x_exact=np.ones(n) b=np.dot(A,x_exact) x=np.linalg.solve(A,b) kappa=np.linalg.cond(A,2) print(np.log10(kappa)) np.linalg.norm(b-np.dot(A,x)) kappa np.linalg.norm(x-x_exact) x_exact[0] x plt.plot(x,'.') plt.show() Explanation: (1/Kappa(A))||b-A x_a||/||b|| <= ||x-x_a||/||x||<= Kappa(A)||b-A x_a||/||b|| End of explanation lu_decomp(np.array([[1e-20, 1],[1,2]]), show=True) Explanation: <div id='acknowledgements' /> Acknowledgements Material created by professor Claudio Torres ([email protected]) and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. April 2016. Update May 2020 - v1.11 - C.Torres : Fixing formatting issues. End of explanation
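A quick closing check on the convergence analysis above: an iteration $\mathbf{x}_{n+1}=M\,\mathbf{x}_{n}+\mathbf{c}$ converges for every starting guess exactly when the spectral radius of $M$ is smaller than 1, which is why the notebook inspects np.max(np.abs(np.linalg.eigvals(M))) for each method. The snippet below is a self-contained sketch of that check; the matrix A used here is an arbitrary strictly diagonally dominant example chosen only for illustration, not one of the systems solved earlier.

import numpy as np

# A small strictly diagonally dominant matrix (Jacobi and Gauss-Seidel are guaranteed to converge).
A = np.array([[4., 1., 1.],
              [1., 5., 2.],
              [0., 2., 6.]])

L = np.tril(A, -1)            # strictly lower triangular part
U = np.triu(A, 1)             # strictly upper triangular part
D = np.diag(np.diag(A))       # diagonal part

# Iteration matrices of Jacobi and Gauss-Seidel.
M_jacobi = -np.linalg.solve(D, L + U)
M_gauss_seidel = -np.linalg.solve(L + D, U)

spectral_radius = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
print('rho(Jacobi)       =', spectral_radius(M_jacobi))
print('rho(Gauss-Seidel) =', spectral_radius(M_gauss_seidel))
# Both spectral radii are below 1, so both iterations converge for this matrix;
# the smaller one (Gauss-Seidel) implies faster convergence.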
12,862
Given the following text description, write Python code to implement the functionality described below step by step Description: Python API to EasyForm Step1: You can access the values from the form by treating it as an array indexed on the field names Step2: The array works both ways, so you set default values on the fields by writing the array Step3: Event Handlers for Smarter Forms You can use onInit and onChange to handle component events. For button events use actionPerformed or addAction. Step4: All Kinds of Fields Step5: Dates Step6: SetData Step7: Default Values and placeholder Step8: JupyterJSWidgets work with EasyForm The widgets from JupyterJSWidgets are compatible and can appear in forms.
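Before the full solution code that follows, here is a minimal sketch of the Step 3 event-handler pattern, built only from the beakerx calls used later in this notebook (EasyForm, addTextField, addButton, onChange, actionPerformed); the field and button names are placeholders and this is an illustrative outline rather than the reference solution.

import operator
from beakerx import EasyForm

form = EasyForm("Event handler sketch")
# Whenever "source" changes, copy its text into "mirror" in upper case.
form.addTextField("source").onChange(
    lambda text: operator.setitem(form, "mirror", text.upper()))
form.addTextField("mirror")
# A button whose click handler clears the mirrored field.
button = form.addButton("reset", tag="reset_button")
button.actionPerformed = lambda: operator.setitem(form, "mirror", "")
form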
Python Code: from beakerx import * f = EasyForm("Form and Run") f.addTextField("first") f['first'] = "First" f.addTextField("last") f['last'] = "Last" f.addButton("Go!", tag="run") f Explanation: Python API to EasyForm End of explanation "Good morning " + f["first"] + " " + f["last"] f['last'][::-1] + '...' + f['first'] Explanation: You can access the values from the form by treating it as an array indexed on the field names: End of explanation f['first'] = 'Beaker' f['last'] = 'Berzelius' Explanation: The array works both ways, so you set default values on the fields by writing the array: End of explanation import operator f1 = EasyForm("OnInit and OnChange") f1.addTextField("first", width=15) f1.addTextField("last", width=15)\ .onInit(lambda: operator.setitem(f1, 'last', "setinit1"))\ .onChange(lambda text: operator.setitem(f1, 'first', text + ' extra')) button = f1.addButton("action", tag="action_button") button.actionPerformed = lambda: operator.setitem(f1, 'last', 'action done') f1 f1['last'] + ", " + f1['first'] f1['last'] = 'new Value' f1['first'] = 'new Value2' Explanation: Event Handlers for Smarter Forms You can use onInit and onChange to handle component events. For button events use actionPerformed or addAction. End of explanation g = EasyForm("Field Types") g.addTextField("Short Text Field", width=10) g.addTextField("Text Field") g.addPasswordField("Password Field", width=10) g.addTextArea("Text Area") g.addTextArea("Tall Text Area", 10, 5) g.addCheckBox("Check Box") options = ["a", "b", "c", "d"] g.addComboBox("Combo Box", options) g.addComboBox("Combo Box editable", options, editable=True) g.addList("List", options) g.addList("List Single", options, multi=False) g.addList("List Two Row", options, rows=2) g.addCheckBoxes("Check Boxes", options) g.addCheckBoxes("Check Boxes H", options, orientation=EasyForm.HORIZONTAL) g.addRadioButtons("Radio Buttons", options) g.addRadioButtons("Radio Buttons H", options, orientation=EasyForm.HORIZONTAL) g.addDatePicker("Date") g.addButton("Go!", tag="run2") g result = dict() for child in g: result[child] = g[child] TableDisplay(result) Explanation: All Kinds of Fields End of explanation gdp = EasyForm("Field Types") gdp.addDatePicker("Date") gdp gdp['Date'] Explanation: Dates End of explanation easyForm = EasyForm("Field Types") easyForm.addDatePicker("Date", value=datetime.today().strftime('%Y%m%d')) easyForm Explanation: SetData End of explanation h = EasyForm("Default Values") h.addTextArea("Default Value", value = "Initial value") h.addTextArea("Place Holder", placeholder = "Put here some text") h.addCheckBox("Default Checked", value = True) h.addButton("Press", tag="check") h result = dict() for child in h: result[child] = h[child] TableDisplay(result) Explanation: Default Values and placeholder End of explanation from ipywidgets import * w = IntSlider() widgetForm = EasyForm("python widgets") widgetForm.addWidget("IntSlider", w) widgetForm.addButton("Press", tag="widget_test") widgetForm widgetForm['IntSlider'] Explanation: JupyterJSWidgets work with EasyForm The widgets from JupyterJSWidgets are compatible and can appear in forms. End of explanation
12,863
Given the following text description, write Python code to implement the functionality described below step by step Description: Ordinary Differential Equations Exercise 3 Imports Step1: Damped, driven nonlinear pendulum The equations of motion for a simple pendulum of mass $m$, length $l$ are Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$. Step5: Simple pendulum Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy. Integrate the equations of motion. Plot $E/m$ versus time. Plot $\theta(t)$ and $\omega(t)$ versus time. Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant. Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable. Step7: Damped pendulum Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$. Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$. Decrease your atol and rtol even futher and make sure your solutions have converged. Make a parametric plot of $[\theta(t),\omega(t)]$ versus time. Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\theta \in [-10,10]$ Label your axes and customize your plot to make it beautiful and effective. Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral. Step9: Use interact to explore the plot_pendulum function with
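One step the description leaves implicit is turning the second-order equation into a first-order system, which is the form scipy.integrate.odeint expects. Writing $\omega = d\theta/dt$, the damped, driven pendulum becomes

$$
\frac{d\theta}{dt} = \omega, \qquad
\frac{d\omega}{dt} = -\frac{g}{\ell}\sin\theta - a\,\omega - b\,\sin(\omega_0 t),
$$

so for the state vector $\vec{y}(t) = (\theta(t),\omega(t))$ the derivative is simply $(\omega,\ -\frac{g}{\ell}\sin\theta - a\omega - b\sin(\omega_0 t))$, which is what the derivs function in the code below has to return.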
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import seaborn as sns from scipy.integrate import odeint from IPython.html.widgets import interact, fixed Explanation: Ordinary Differential Equations Exercise 3 Imports End of explanation g = 9.81 # m/s^2 l = 0.5 # length of pendulum, in meters tmax = 50. # seconds t = np.linspace(0, tmax, int(100*tmax)) Explanation: Damped, driven nonlinear pendulum The equations of motion for a simple pendulum of mass $m$, length $l$ are: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta $$ When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t) $$ In this equation: $a$ governs the strength of the damping. $b$ governs the strength of the driving force. $\omega_0$ is the angular frequency of the driving force. When $a=0$ and $b=0$, the energy/mass is conserved: $$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$ Basic setup Here are the basic parameters we are going to use for this exercise: End of explanation def derivs(y, t, a, b, omega0): Compute the derivatives of the damped, driven pendulum. Parameters ---------- y : ndarray The solution vector at the current time t[i]: [theta[i],omega[i]]. t : float The current time t[i]. a, b, omega0: float The parameters in the differential equation. Returns ------- dy : ndarray The vector of derviatives at t[i]: [dtheta[i],domega[i]]. # YOUR CODE HERE theta = y[0] omega = y[1] dtheta=y[1] domega=(-g/l)*np.sin(theta)-a*omega-b*np.sin(omega0*t) dy=np.array([dtheta,domega]) return dy assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.]) def energy(y): Compute the energy for the state array y. The state array y can have two forms: 1. It could be an ndim=1 array of np.array([theta,omega]) at a single time. 2. It could be an ndim=2 array where each row is the [theta,omega] at single time. Parameters ---------- y : ndarray, list, tuple A solution vector Returns ------- E/m : float (ndim=1) or ndarray (ndim=2) The energy per mass. # YOUR CODE HERE EM=g*l*(1-np.cos(y[0]))+0.5*(l**2)*(y[1]**2) return EM x=np.array([np.pi,0]) y=g z=np.ones((10,2)) w=np.ones(10)*energy(np.array([1,1])) w.shape assert np.allclose(energy(np.array([np.pi,0])),g) assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1]))) Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$. End of explanation # YOUR CODE HERE raise NotImplementedError() # YOUR CODE HERE raise NotImplementedError() # YOUR CODE HERE raise NotImplementedError() assert True # leave this to grade the two plots and their tuning of atol, rtol. Explanation: Simple pendulum Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy. Integrate the equations of motion. Plot $E/m$ versus time. Plot $\theta(t)$ and $\omega(t)$ versus time. Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant. Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. 
The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable. End of explanation def plot_pendulum(a=0.0, b=0.0, omega0=0.0): Integrate the damped, driven pendulum and make a phase plot of the solution. # YOUR CODE HERE raise NotImplementedError() Explanation: Damped pendulum Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$. Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$. Decrease your atol and rtol even futher and make sure your solutions have converged. Make a parametric plot of $[\theta(t),\omega(t)]$ versus time. Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\theta \in [-10,10]$ Label your axes and customize your plot to make it beautiful and effective. End of explanation plot_pendulum(0.5, 0.0, 0.0) Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral. End of explanation # YOUR CODE HERE raise NotImplementedError() Explanation: Use interact to explore the plot_pendulum function with: a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$. b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$. omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$. End of explanation
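The plot_pendulum body is left as an exercise above; purely as an illustration (not the course's reference solution), a sketch that follows the stated requirements, initial conditions $\theta(0)=-\pi + 0.1$ and $\omega(0)=0$, tightened tolerances, and a parametric plot of $[\theta(t),\omega(t)]$ with the requested axis limits, could look like this, reusing the derivs function, the time array t and the imports defined earlier:

def plot_pendulum_sketch(a=0.0, b=0.0, omega0=0.0):
    # Integrate the damped, driven pendulum and draw a phase plot.
    y0 = np.array([-np.pi + 0.1, 0.0])              # theta(0), omega(0)
    soln = odeint(derivs, y0, t, args=(a, b, omega0),
                  atol=1e-10, rtol=1e-9)            # tight error tolerances
    theta, omega = soln[:, 0], soln[:, 1]
    plt.figure(figsize=(8, 6))
    plt.plot(theta, omega, lw=0.8)
    plt.xlim(-2 * np.pi, 2 * np.pi)
    plt.ylim(-10, 10)
    plt.xlabel(r'$\theta$ (rad)')
    plt.ylabel(r'$\omega$ (rad/s)')
    plt.title('Damped driven pendulum: a={}, b={}, omega0={}'.format(a, b, omega0))
    plt.show()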
12,864
Given the following text description, write Python code to implement the functionality described below step by step Description: SAP Credv2 The following subsections show a representation of the file format portions and how to generate them. First we need to perform some setup to import the packet classes Step1: Credv2 without LPS We'll read the files used in the test case suite and use them as example Step2: The Cred files are comprised of the following main structures Step3: Credv2 without LPS and version 1 cipher format 3DES encryption Step4: Credv2 without LPS and version 1 cipher format AES256 encryption Step5: Credv2 without LPS Cipher Header version 1 cipher format Step6: Credv2 Plain Credential After decrypting the credential using the username provided, the plaintext contains the following structure Step7: Credv2 Plain Credential with DP API When using SSO Credentials in Windows, the CommonCryptoLib encrypts the PIN using DP API. Step8: Credv2 with LPS We'll read the files used in the test case suite and use them as example Step9: The Cred files are comprised of the following main structures Step10: Credv2 with LPS in INT/Fallback mode (Linux without TPM) Step11: SAP LPS Cipher header
Python Code: from pysap.SAPCredv2 import * from IPython.display import display Explanation: SAP Credv2 The following subsections show a representation of the file format portions and how to generate them. First we need to perform some setup to import the packet classes: End of explanation with open("../../tests/data/credv2_lps_off_v0_3des", "rb") as fd: credv2_lps_off_v0_3des_string = fd.read() credv2_lps_off_v0_3des = SAPCredv2(credv2_lps_off_v0_3des_string) with open("../../tests/data/credv2_lps_off_v1_3des", "rb") as fd: credv2_lps_off_v1_3des_string = fd.read() credv2_lps_off_v1_3des = SAPCredv2(credv2_lps_off_v1_3des_string) with open("../../tests/data/credv2_lps_off_v1_aes256", "rb") as fd: credv2_lps_off_v1_aes256_string = fd.read() credv2_lps_off_v1_aes256 = SAPCredv2(credv2_lps_off_v1_aes256_string) with open("../../tests/data/credv2_lps_off_v0_dp_3des", "rb") as fd: credv2_lps_off_v0_dp_3des_string = fd.read() credv2_lps_off_v0_dp_3des = SAPCredv2(credv2_lps_off_v0_dp_3des_string) Explanation: Credv2 without LPS We'll read the files used in the test case suite and use them as example: End of explanation credv2_lps_off_v0_3des.show() Explanation: The Cred files are comprised of the following main structures: Credv2 without LPS and version 0 cipher format 3DES encryption End of explanation credv2_lps_off_v1_3des.show() Explanation: Credv2 without LPS and version 1 cipher format 3DES encryption End of explanation credv2_lps_off_v1_aes256.show() Explanation: Credv2 without LPS and version 1 cipher format AES256 encryption End of explanation cipher_header = SAPCredv2_Cred_Cipher(str(credv2_lps_off_v1_aes256.creds[0].cred.cipher)) cipher_header.canvas_dump() Explanation: Credv2 without LPS Cipher Header version 1 cipher format End of explanation cred_v2_lps_off_aes256_plain = credv2_lps_off_v1_aes256.creds[0].cred.decrypt("username") cred_v2_lps_off_aes256_plain.show() Explanation: Credv2 Plain Credential After decrypting the credential using the username provided, the plaintext contains the following structure: End of explanation cred_v2_lps_off_dp_3des_plain = credv2_lps_off_v0_dp_3des.creds[0].cred.decrypt("username") cred_v2_lps_off_dp_3des_plain.show() Explanation: Credv2 Plain Credential with DP API When using SSO Credentials in Windows, the CommonCryptoLib encrypts the PIN using DP API. End of explanation with open("../../tests/data/credv2_lps_on_v2_dp_aes256", "rb") as fd: credv2_lps_on_v2_dp_aes256_string = fd.read() credv2_lps_on_v2_dp_aes256 = SAPCredv2(credv2_lps_on_v2_dp_aes256_string) with open("../../tests/data/credv2_lps_on_v2_int_aes256", "rb") as fd: credv2_lps_on_v2_int_aes256_string = fd.read() credv2_lps_on_v2_int_aes256 = SAPCredv2(credv2_lps_on_v2_int_aes256_string) Explanation: Credv2 with LPS We'll read the files used in the test case suite and use them as example: End of explanation credv2_lps_on_v2_dp_aes256.show() Explanation: The Cred files are comprised of the following main structures: Credv2 with LPS in DP API Mode (Windows) End of explanation credv2_lps_on_v2_int_aes256.show() cred_v2_lps_on_int_aes256_plain = credv2_lps_on_v2_int_aes256.creds[0].cred.decrypt("username") cred_v2_lps_on_int_aes256_plain.show() Explanation: Credv2 with LPS in INT/Fallback mode (Linux without TPM) End of explanation lps_cipher_header = SAPLPSCipher(str(credv2_lps_on_v2_int_aes256.creds[0].cred.cipher)) lps_cipher_header.canvas_dump() Explanation: SAP LPS Cipher header End of explanation
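As a small wrap-up, the parse-and-decrypt pattern shown above can be folded into a helper that walks several cred files at once. This is only a sketch assembled from the calls already used in this notebook (SAPCredv2, .creds, .cred.decrypt, .show); the file list and the username are placeholders.

def dump_credentials(paths, username="username"):
    # Parse each cred file, decrypt every credential it contains and display it.
    for path in paths:
        with open(path, "rb") as fd:
            cred_file = SAPCredv2(fd.read())
        for entry in cred_file.creds:
            plain = entry.cred.decrypt(username)
            plain.show()

dump_credentials(["../../tests/data/credv2_lps_off_v1_aes256"])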
12,865
Given the following text description, write Python code to implement the functionality described below step by step Description: Recommendation Methods Step1: Recommendation Comparison A more general framework for comparing different recommendation techniques Evaluation DataSet See notes in the creating_dataset_for_evaluation.ipynb From full dataset - removed rows with no nn features (for view or for buy) - remove the items that have been viewed 20minutes before buying. - sub-sampled a set of 1000 users Step2: Load precalculated things for recommendations Step3: Loop through users and score function Step4: Evaluate Different Algorithms Step5: Save
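To make the offline steps of Alg 0 concrete before the full evaluation code below, here is a small self-contained sketch with made-up numbers: the feature vectors of a user's previously viewed items are averaged, and every candidate item is then ranked by cosine similarity to that average. The item ids and feature values are invented purely for illustration.

import numpy as np

item_features = {                        # item_id -> neural-network feature vector (toy values)
    "A": np.array([0.9, 0.1, 0.0]),
    "B": np.array([0.8, 0.2, 0.1]),
    "C": np.array([0.0, 0.9, 0.4]),
    "D": np.array([0.1, 0.8, 0.5]),
}
viewed = ["A", "B"]                      # the user's previous views

# Step 2: the user's 'typical item' is the mean of the viewed feature vectors.
typical = np.mean([item_features[i] for i in viewed], axis=0)

# Step 4: rank every item by cosine similarity to the typical item.
def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

ranking = sorted(item_features, key=lambda i: cosine(item_features[i], typical), reverse=True)
print(ranking)                           # items most similar to the past views come first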
Python Code: import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_style('white') %matplotlib inline import sys import os sys.path.append('../') os.getcwd() import src import src.recommendation reload(src.recommendation) from src.recommendation import * Explanation: Recommendation Methods: Alg 0: Most similar items to user's previous views Offline: 1. For each item, calculate features on trained neural network $ f_j $ 2. For each user, look up previous views and average the features together of the previous visit $ f_i = \sum_j f_j*I(i,j) $ 3. Store the features of the 'typical' item viewed by this user. 4. Calculate similarity of all items to user's 'typical item', store as a recommend list Online: 1. User comes to website 2. Recommend the top 20 items from his recommend list. End of explanation # load smaller user behavior dataset user_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views_v2_sample1000.pkl') user_sample = user_profile.user_id.unique() print(len(user_profile)) print(len(user_sample)) user_profile.head() # requires nn features spu_fea = pd.read_pickle("../data_nn_features/spu_fea_sample1000.pkl") # make sure all items have features ?? One missing print(len(set(list(user_profile.buy_spu.unique())+list(user_profile.view_spu.unique())))) print(len(spu_fea.spu_id.unique())) Explanation: Recommendation Comparison A more general framework for comparing different recommendation techniques Evaluation DataSet See notes in the creating_dataset_for_evaluation.ipynb From full dataset - removed rows with no nn features (for view or for buy) - remove the items that have been viewed 20minutes before buying. - sub-sampled a set of 1000 users End of explanation # this might be faster # # ## Precalculate average feature per user # average_viewed_features_dict = {} # for user_id in user_profile.user_id.unique(): # average_viewed_features_dict[user_id] = get_user_average_features(user_id,user_profile,spu_fea) Explanation: Load precalculated things for recommendations End of explanation def get_user_buy_ranks(users_sample,user_profile,spu_fea,method,randomize_scores=False): user_buy_ranks = np.empty(len(users_sample)) no_ranks = np.empty(len(users_sample)) for ui,user_id in enumerate(users_sample): print(ui) # rank items item_score_in_category = rank_candidates(user_id,user_profile,spu_fea,method=method,extra_inputs={},randomize_scores=randomize_scores) # get bought item rank and store into array user_buy_ranks[ui]=item_score_in_category.loc[item_score_in_category.buy==1,'rank'].as_matrix()[0] # get number of ranks per category no_ranks[ui]=item_score_in_category['rank'].max() return(user_buy_ranks,no_ranks,item_score_in_category) Explanation: Loop through users and score function End of explanation users_sample = np.random.choice(user_sample,size=50) # nathan's user_buy_ranks1,no_ranks1,item_score_in_category=get_user_buy_ranks(users_sample,user_profile,spu_fea,method='AverageFeatureSim') # just taking the last item user_buy_ranks2,no_ranks2,_=get_user_buy_ranks(users_sample,user_profile,spu_fea,method='LastItemSim') # randomize user_buy_ranks3,no_ranks3,_=get_user_buy_ranks(users_sample,user_profile,spu_fea,method='Randomize',randomize_scores=True) # stack rank_percent = np.vstack((user_buy_ranks1/no_ranks1,user_buy_ranks2/no_ranks2,user_buy_ranks3/no_ranks3)) print(rank_percent.shape) # Plot mean = rank_percent.mean(axis=1) n = np.shape(rank_percent)[1] m = np.shape(rank_percent)[0] print(n) 
print(m) sem = rank_percent.std(axis=1)/np.sqrt(n) plt.errorbar(np.arange(m),y=mean,yerr=sem,linestyle='None',marker='o') plt.xticks(np.arange(m),['AvgFeatures','LastFeat','Random \n Guess']) plt.xlim([-1,m+1]) plt.ylim(0,1) sns.despine() plt.title('Recommendor Comparison') plt.ylabel('Average (Buy Rank / # in Buy Category)') plt.axhline(y=0.5,linestyle='--') savefile = '../figures/recommender_comparison_sample_1000_subsample50_v1.png' plt.savefig(savefile,dpi=300) from src import s3_data_management s3_data_management.push_results_to_s3(os.path.basename(savefile),savefile) Explanation: Evaluate Different Algorithms End of explanation %%bash jupyter nbconvert --to slides Recommendation_Compare_Methods.ipynb && mv Recommendation_Compare_Methods.slides.html ../notebook_slides/Recommendation_Compare_Methods_v1.slides.html jupyter nbconvert --to html Recommendation_Compare_Methods.ipynb && mv Recommendation_Compare_Methods.html ../notebook_htmls/Recommendation_Compare_Methods_v1.html cp Recommendation_Compare_Methods.ipynb ../notebook_versions/Recommendation_Compare_Methods_v1.ipynb Explanation: Save End of explanation
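An optional addition to the comparison above: since every user contributes one rank fraction per method, a paired non-parametric test gives a quick check that the feature-based recommender really beats the random baseline. This is only a sketch; it assumes the rank_percent array built earlier (rows: AverageFeatureSim, LastItemSim, Random) is still in memory and that scipy is available.

from scipy import stats

# Paired comparison of per-user rank fractions: average-feature method vs. random guessing.
stat, p_value = stats.wilcoxon(rank_percent[0], rank_percent[2])
print("Wilcoxon statistic: {:.1f}, p-value: {:.4f}".format(stat, p_value))
# A small p-value means the average-feature recommender ranks the bought item
# significantly better than random guessing for this user sample.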
12,866
Given the following text description, write Python code to implement the functionality described below step by step Description: Implicit Georeferencing This workbook sets explicit georeferences from implicit georeferencing through names of extents given in dataset titles or keywords. A file sources.py needs to contain the CKAN and SOURCE config as follows Step1: Spatial extent name-geometry lookup The fully qualified names and GeoJSON geometries of relevant spatial areas are contained in our custom dataschema. Step4: Name lookups Relevant areas are listed under different synonyms. We'll create a dictionary of synonymous search terms ("s") and extent names (index "i").
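A quick note before the code: the configuration dictionary described above is imported below as from secret import CKAN, so the module holding it must match that import (the description calls the file sources.py, while the code reads it from secret.py; keep the two consistent). A minimal config module is sketched here with placeholder values only:

# secret.py (keep this file out of version control)
CKAN = {
    "dpaw-internal": {
        "url": "https://your-ckan-instance/",   # base URL of the CKAN site (placeholder)
        "key": "API-KEY",                       # your personal CKAN API key (placeholder)
    }
}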
Python Code: import ckanapi from harvest_helpers import * from secret import CKAN ckan = ckanapi.RemoteCKAN(CKAN["dpaw-internal"]["url"], apikey=CKAN["dpaw-internal"]["key"]) print("Using CKAN {0}".format(ckan.address)) Explanation: Implicit Georeferencing This workbook sets explicit georeferences from implicit georeferencing through names of extents given in dataset titles or keywords. A file sources.py needs to contain the CKAN and SOURCE config as follows: CKAN = { "dpaw-internal":{ "url": "http://internal-data.dpaw.wa.gov.au/", "key": "API-KEY" } } Configure CKAN and source End of explanation # Getting the extent dictionary e url = "https://raw.githubusercontent.com/datawagovau/ckanext-datawagovautheme/dpaw-internal/ckanext/datawagovautheme/datawagovau_dataset.json" ds = json.loads(requests.get(url).content) choice_dict = [x for x in ds["dataset_fields"] if x["field_name"] == "spatial"][0]["choices"] e = dict([(x["label"], json.dumps(x["value"])) for x in choice_dict]) print("Extents: {0}".format(e.keys())) Explanation: Spatial extent name-geometry lookup The fully qualified names and GeoJSON geometries of relevant spatial areas are contained in our custom dataschema. End of explanation # Creating a search term - extent index lookup # m is a list of keys "s" (search term) and "i" (extent index) m = [ {"s":"Eighty", "i":"MPA Eighty Mile Beach"}, {"s":"EMBMP", "i":"MPA Eighty Mile Beach"}, {"s":"Camden", "i":"MPA Lalang-garram / Camden Sound"}, {"s":"LCSMP", "i":"MPA Lalang-garram / Camden Sound"}, {"s":"Rowley", "i":"MPA Rowley Shoals"}, {"s":"RSMP", "i":"MPA Rowley Shoals"}, {"s":"Montebello", "i":"MPA Montebello Barrow"}, {"s":"MBIMPA", "i":"MPA Montebello Barrow"}, {"s":"Ningaloo", "i":"MPA Ningaloo"}, {"s":"NMP", "i":"MPA Ningaloo"}, {"s":"Shark bay", "i":"MPA Shark Bay Hamelin Pool"}, {"s":"SBMP", "i":"MPA Shark Bay Hamelin Pool"}, {"s":"Jurien", "i":"MPA Jurien Bay"}, {"s":"JBMP", "i":"MPA Jurien Bay"}, {"s":"Marmion", "i":"MPA Marmion"}, {"s":"Swan Estuary", "i":"MPA Swan Estuary"}, {"s":"SEMP", "i":"MPA Swan Estuary"}, {"s":"Shoalwater", "i":"MPA Shoalwater Islands"}, {"s":"SIMP", "i":"MPA Shoalwater Islands"}, {"s":"Ngari", "i":"MPA Ngari Capes"}, {"s":"NCMP", "i":"MPA Ngari Capes"}, {"s":"Walpole", "i":"MPA Walpole Nornalup"}, {"s":"WNIMP", "i":"MPA Walpole Nornalup"} ] def add_spatial(dsdict, extent_string, force=False, debug=False): Adds a given spatial extent to a CKAN dataset dict if "spatial" is None, "" or force==True. Arguments: dsdict (ckanapi.action.package_show()) CKAN dataset dict extent_string (String) GeoJSON geometry as json.dumps String force (Boolean) Whether to force overwriting "spatial" debug (Boolean) Debug noise Returns: (dict) The dataset with spatial extent replaced per above rules. if not dsdict.has_key("spatial"): overwrite = True if debug: msg = "Spatial extent not given" elif dsdict["spatial"] == "": overwrite = True if debug: msg = "Spatial extent is empty" elif force: overwrite = True msg = "Spatial extent was overwritten" else: overwrite = False msg = "Spatial extent unchanged" if overwrite: dsdict["spatial"] = extent_string print(msg) return dsdict def restore_extents(search_mapping, extents, ckan, debug=False): Restore spatial extents for datasets Arguments: search_mapping (list) A list of dicts with keys "s" for ckanapi package_search query parameter "q", and key "i" for the name of the extent e.g.: m = [ {"s":"tags:marinepark_80_mile_beach", "i":"MPA Eighty Mile Beach"}, ... 
] extents (dict) A dict with key "i" (extent name) and GeoJSON Multipolygon geometry strings as value, e.g.: {u'MPA Eighty Mile Beach': '{"type": "MultiPolygon", "coordinates": [ .... ]', ...} ckan (ckanapi) A ckanapi instance debug (boolean) Debug noise Returns: A list of dictionaries returned by ckanapi's package_update for x in search_mapping: if debug: print("\nSearching CKAN with '{0}'".format(x["s"])) found = ckan.action.package_search(q=x["s"])["results"] if debug: print("Found datasets: {0}\n".format([d["title"] for d in found])) fixed = [add_spatial(d, extents[x["i"]], force=True, debug=True) for d in found] if debug: print(fixed, "\n") datasets_updated = upsert_datasets(fixed, ckan, debug=False) restore_extents(m, e, ckan) d = [ckan.action.package_show(id = x) for x in ckan.action.package_list()] fix = [x["title"] for x in d if not x.has_key("spatial")] len(fix) d[0] fix Explanation: Name lookups Relevant areas are listed under different synonyms. We'll create a dictionary of synonymous search terms ("s") and extent names (index "i"). End of explanation
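One small portability note on the last check above: dict.has_key() exists only in Python 2, so under Python 3 the same "which datasets still lack an extent" query can be written with .get(), which also treats an empty spatial field as missing:

# Titles of datasets that still have no spatial extent (works on Python 2 and 3).
datasets = [ckan.action.package_show(id=x) for x in ckan.action.package_list()]
missing = [d["title"] for d in datasets if not d.get("spatial")]
print(len(missing))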
12,867
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction To OANDA-System Environment Step1: Data Containers Tick Bar Event Objects market event bar event signal event order event fill event Event Queue Object Should be structured in a dictionary with specific names. Step2: Api class The object that make every requests. Basis of the entire system. Initialized with Config( ) and a dictionary of event queues. ( ) Step3: Simple Requests get instruments get account infomation get positions get orders get price get historical data the return value of these functions are json-like dictionaries. (see examples) get_instruments( ) Step4: Beautify JSON output Step5: get_prices(instrument) instrument Step6: get_account_info(account_id=-1) account_id Step7: get_positions( ) TODO Step8: get_orders( ) TODO Step9: get_trades( ) TODO Step10: place_order( ) params Step14: Streaming PyApi.make_stream(instrument) Monitor market impulses. Support strategies.
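Before the full notebook code below, the three steps described above can be condensed into a short plain-NumPy sketch (shown only as an illustration; the notebook itself uses helpers from the ia898 toolbox). Here f stands for the input image and hout for the desired 256-bin histogram:

import numpy as np

def specify_histogram(f, hout):
    n = f.size
    # Step 1: rescale the desired cumulative histogram so it accounts for exactly n pixels.
    hcc = np.cumsum(hout).astype(float)
    hcc = np.rint(hcc * n / hcc[-1]).astype(int)
    h1 = np.diff(np.concatenate(([0], hcc)))
    # Step 2: realize the desired, already sorted, set of pixel values.
    gs = np.repeat(np.arange(len(h1)), h1).astype(f.dtype)
    # Step 3: place them at the positions of the input pixels with the same rank.
    g = np.empty(n, dtype=f.dtype)
    g[np.argsort(f.ravel())] = gs
    return g.reshape(f.shape)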
Python Code: from api import* myConfig = Config() myConfig.view() Explanation: Introduction To OANDA-System Environment: Python-Anaconda 2.7 pandas, json, requests Config Class Contains infomations that we need to connect to OANDA server and make requests. End of explanation q1 = EventQueue() q2 = EventQueue() q = {'mkt': q1, 'bar': q2} Explanation: Data Containers Tick Bar Event Objects market event bar event signal event order event fill event Event Queue Object Should be structured in a dictionary with specific names. End of explanation myapi = PyApi(Config(), q) Explanation: Api class The object that make every requests. Basis of the entire system. Initialized with Config( ) and a dictionary of event queues. ( ) End of explanation instrumentList = myapi.get_instruments() print instrumentList['instruments'][0:3] Explanation: Simple Requests get instruments get account infomation get positions get orders get price get historical data the return value of these functions are json-like dictionaries. (see examples) get_instruments( ) End of explanation import json print json.dumps(instrumentList['instruments'][0:3], sort_keys=True, indent=4) Explanation: Beautify JSON output End of explanation print json.dumps(myapi.get_prices("EUR_USD"), sort_keys=True, indent=4) Explanation: get_prices(instrument) instrument: string End of explanation print json.dumps(myapi.get_account_info(), sort_keys=True, indent=4) Explanation: get_account_info(account_id=-1) account_id: integer/string, the account id. Default = -1, use that in config. End of explanation print json.dumps(myapi.get_positions(), sort_keys=True, indent=4) Explanation: get_positions( ) TODO: End of explanation print json.dumps(myapi.get_orders(), sort_keys=True, indent=4) Explanation: get_orders( ) TODO: End of explanation print json.dumps(myapi.get_trades(), sort_keys=True, indent=4) Explanation: get_trades( ) TODO: End of explanation print json.dumps(myapi.get_positions(), sort_keys=True, indent=4) myapi.place_order('USD_CAD', 'sell', 150, None, 'market') #fill USD_CAD long position print '\n--------------AFTER--------------\n' print json.dumps(myapi.get_positions(), sort_keys=True, indent=4) Explanation: place_order( ) params: see docs in .py file End of explanation from strat import * class BuyAndHold(BaseStrategy): some parameters here. instrument = 'USD_CAD' BHFlag = True def __init__(self, api): override constructor. self.api = api def on_bar(self, event): Strategy logics. if self.BHFlag: # If BHFlag == True, buy 100 CA dollar self.market_buy(100) self.BHFlag = 0 # then hold. # Print account summary and bar data. print '---------------------' print json.dumps(self.api.get_account_info(), sort_keys=True, indent=4) print '\n---------------------\n' event.body.view() print '---------------------' q1 = EventQueue() q2 = EventQueue() q = {'mkt': q1, 'bar': q2} # construct event queues api = PyApi(Config(), q) # initialize api mystrat = BuyAndHold(api) # initialize strategy q1.bind('ETYPE_MKT', api.on_market_impulse) # let queues listen to some events q2.bind('ETYPE_BAR', mystrat.on_bar) q1.open() # start pushing event to strategies and other functions q2.open() api.make_stream('USD_CAD') # make connection to server. Explanation: Streaming PyApi.make_stream(instrument) Monitor market impulses. Support strategies. End of explanation
12,868
Given the following text description, write Python code to implement the functionality described below step by step Description: Tutorial de especificação de histograma, caso discreto exato O problema consiste em mapear os pixels de uma imagem dada para que o histograma da imagem transformada seja um histograma especificado. Este é um problema que acontece, por exemplo, quando você vai fazer um mosaico e possui um conjunto definido de pastilhas de tons de cinza disponível no seu estoque. Como mapear os pixels de sua imagem para utilizar as pastilhas disponíveis. A solução adotada aqui consiste na seguinte idéia Step1: Passo 1 Step2: Passo 2 Step3: Passo 3
Python Code: %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import sys,os ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/') if ia898path not in sys.path: sys.path.append(ia898path) import ia898.src as ia hout = np.concatenate((np.arange(128),np.arange(128,0,-1))) plt.plot(hout),plt.title('Distribuição desejada do histograma'); Explanation: Tutorial de especificação de histograma, caso discreto exato O problema consiste em mapear os pixels de uma imagem dada para que o histograma da imagem transformada seja um histograma especificado. Este é um problema que acontece, por exemplo, quando você vai fazer um mosaico e possui um conjunto definido de pastilhas de tons de cinza disponível no seu estoque. Como mapear os pixels de sua imagem para utilizar as pastilhas disponíveis. A solução adotada aqui consiste na seguinte idéia: primeiro é importante que a quantidade de pixels com todos os tons possíveis sejam exatamente a mesma dos pixels da imagem de entrada. O próximo passo consiste em ordenar os dois conjuntos e fazer a correspondência entre eles, isto é, o n-ésimo pixel ordenado do seu "estoque" vai substituir o n-ésimo pixel ordenado de sua imagem. Assim, existem 3 etapas: 1) dada a especificação do histograma, modificar este histograma para que represente o mesmo número de pixels da imagem original; 2) dado o histograma desejado, criar o conjunto de pixels ordenados a partir do histograma desejado; 3) calcular o índice de ordenação dos pixels da imagem de entrada e substituir neste local o pixel do histograma desejado. Especificando o histograma desejado Suponha que queremos que o histograma tenha uma distribuição triangular. Estamos considerando que os pixels variam de 0 a 255, assim o histograma é um vetor de 256 elementos: End of explanation f = mpimg.imread('../data/cameraman.tif') ia.adshow(f, 'imagem de entrada') plt.plot(ia.histogram(f)),plt.title('histograma original'); n = f.size hcc = np.cumsum(hout) hcc1 = ia.normalize(hcc,[0,n]) h1 = np.diff(np.concatenate(([0],hcc1))) plt.plot(hcc1), plt.title('histograma acumulado desejado'); plt.show() plt.plot(h1), plt.title('histograma desejado'); plt.show Explanation: Passo 1: Modificar o histograma para que represente o mesmo número de pixels da imagem desejada. A imagem a ser modificada é a "cameraman.tif". A idéia é calcular o histograma acumulado, normalizá-lo para que o valor final acumulado seja o número de pixels (n) da imagem de entrada e fazer a diferença discreta para calcular o histograma que represente o mesmo número de pixels da imagem do "cameraman". End of explanation gs = np.repeat(np.arange(256),h1).astype('uint8') plt.plot(gs), plt.title('pixels desejados, ordenados'); plt.show() plt.plot(np.sort(f.ravel())), plt.title('pixels ordenados da imagem original'); Explanation: Passo 2: Realizar o conjunto de pixels desejados a partir do histograma desejado. É utilizado a função "repeat" do NumPy. End of explanation g = np.empty( (n,), np.uint8) si = np.argsort(f.ravel()) g[si] = gs g.shape = f.shape ia.adshow(g, 'imagem modificada') h = ia.histogram(g) plt.plot(h), plt.title('histograma da imagem modificada'); Explanation: Passo 3: Fazer o mapeando dos pixels ordenados. 
Aqui existem três técnicas importantes: a primeira é trabalhar com a imagem rasterizada em uma dimensão, com o uso de ravel(); a segunda é o uso da função argsort que retorna os índices dos pixels ordenados pelo nível de cinza; e a terceira é a atribuição indexada g[si] = gs, onde g é a imagem de saída rasterizada, si é o array de índices dos pixels ordenados e gs são os pixels desejados ordenados. O último passo é colocar o shape da imagem desejada. End of explanation
12,869
Given the following text description, write Python code to implement the functionality described below step by step Description: First, load up the data First you're going to want to create a data frame from the dailybots.csv file which can be found in the data directory. You should be able to do this with the pd.read_csv() function. Take a minute to look at the dataframe because we are going to be using it for this entire worksheet. Step1: Exercise 1 Step2: Exercise 2 Step3: Exercise 3 Step4: Exercise 4 Step5: Exercise 5
Python Code: data = pd.read_csv( '../../data/dailybots.csv' ) #Look at a summary of the data data.describe() data['botfam'].value_counts() Explanation: First, load up the data First you're going to want to create a data frame from the dailybots.csv file which can be found in the data directory. You should be able to do this with the pd.read_csv() function. Take a minute to look at the dataframe because we are going to be using it for this entire worksheet. End of explanation grouped_df = data[data.botfam == "Ramnit"].groupby(['industry']) grouped_df.sum() Explanation: Exercise 1: Which industry sees the most Ramnit infections? Least? Count the number of infected days for "Ramnit" in each industry industry. How: 1. First filter the data to remove all the infections we don't care about 2. Aggregate the data on the column of interest. HINT: You might want to use the groupby() function 3. Add up the results End of explanation group2 = data[['botfam','orgs']].groupby( ['botfam']) summary = group2.agg([np.min, np.max, np.mean, np.median, np.std]) summary.sort_values( [('orgs', 'median')], ascending=False) Explanation: Exercise 2: Calculate the min, max, median and mean infected orgs for each bot family, sort by median In this exercise, you are asked to calculate the min, max, median and mean of infected orgs for each bot family sorted by median. HINT: 1. Using the groupby() function, create a grouped data frame 2. You can do this one metric at a time OR you can use the .agg() function. You might want to refer to the documentation here: http://pandas.pydata.org/pandas-docs/stable/groupby.html#applying-multiple-functions-at-once 3. Sort the values (HINT HINT) by the median column End of explanation df3 = data[['date','hosts']].groupby('date').sum() df3.sort_values(by='hosts', ascending=False).head(10) Explanation: Exercise 3: Which date had the total most bot infections and how many infections on that day? In this exercise you are asked to aggregate and sum the number of infections (hosts) by date. Once you've done that, the next step is to sort in descending order. End of explanation filteredData = data[ data['botfam'].isin(['Necurs', 'Ramnit', 'PushDo']) ][['date', 'botfam', 'hosts']] groupedFilteredData = filteredData.groupby( ['date', 'botfam']).sum() groupedFilteredData.unstack(level=1).plot(kind='line', subplots=False) Explanation: Exercise 4: Plot the daily infected hosts for Necurs, Ramnit and PushDo In this exercise you're going to plot the daily infected hosts for three infection types. In order to do this, you'll need to do the following steps: 1. Filter the data to remove the botfamilies we don't care about. 2. Use groupby() to aggregate the data by date and family, then sum up the hosts in each group 3. Plot the data. Hint: You might want to use the unstack() function to prepare the data for plotting. End of explanation data.date = pd.to_datetime( data.date ) data['day'] = data.date.dt.weekday data[['hosts', 'day']].boxplot( by='day') grouped = data[['hosts', 'day']].groupby('day') print( grouped.sum() ) grouped.box Explanation: Exercise 5: What are the distribution of infected hosts for each day-of-week across all bot families? Hint: try a box plot and/or violin plot. In order to do this, there are two steps: 1. First create a day column where the day of the week is represented as an integer. You'll need to convert the date column to an actual date/time object. See here: http://pandas.pydata.org/pandas-docs/stable/timeseries.html 2. Next, use the .boxplot() method to plot the data. 
This has grouping built in, so you don't have to group by first. End of explanation
12,870
Given the following text description, write Python code to implement the functionality described below step by step Description: Bolidozor FITS files time restorer For use of this notebook you must have mounted space.astro.cz storage server to local filesystem. It is possible to do with sshfs bash sshfs &lt;user&gt;@space.astro.cz /&lt;mnt folder&gt; Then you must set path of &lt;mnt foler&gt; to path variable. Step1: SYSDATE1 This cell browse files with SYSDATE1 parametr in header. (works only on some stations) Step2: Plotter Next cell plots a graph of time differences. Positive number means SYSDATE is ahead of CRVAL2 (radio-observer time is late).<br/> Negativ value means SYSDATE is behind CRVAL2 (radio-observer time is faster). Step3: <br> Calc time difference of one file
Python Code: import os import datetime import numpy import scipy.signal from astropy.io import fits import matplotlib.pyplot as plt import matplotlib.dates as md %matplotlib inline paths = ['/home/roman/mnt/server-space/storage/bolidozor/ZVPP/ZVPP-R6/snapshots/2017/09/'] times = numpy.ndarray((0,2)) start_time = datetime.datetime.now() fits_browsed = 0 for path in paths: for root, dirs, files in os.walk(path): print("") print(root, " ") for name in files: if name.endswith(("snap.fits")): hdulist = fits.open(os.path.join(root, name)) DATE_ts = datetime.datetime.strptime(hdulist[1].header['DATE'], '%Y-%m-%dT%H:%M:%S').timestamp()*1000+2*60*60*1000 crval = hdulist[1].header['CRVAL2'] #print(DATE_ts, crval) time = [DATE_ts - hdulist[1].header['CDELT2']* hdulist[1].header['NAXIS2'], crval] times = numpy.vstack( [times, time] ) hdulist.close() print("+", end='') fits_browsed += 1 times.sort(axis=0) print("") print("===================================") print(fits_browsed, "was successfully processed") print("It takes", datetime.datetime.now()-start_time) Explanation: Bolidozor FITS files time restorer For use of this notebook you must have mounted space.astro.cz storage server to local filesystem. It is possible to do with sshfs bash sshfs &lt;user&gt;@space.astro.cz /&lt;mnt folder&gt; Then you must set path of &lt;mnt foler&gt; to path variable. End of explanation paths = ['/home/roman/mnt/server-space/storage/bolidozor/ZVPP/ZVPP-R6/snapshots/2017/09/03/', '/home/roman/mnt/server-space/storage/bolidozor/ZVPP/ZVPP-R6/snapshots/2017/09/04/', '/home/roman/mnt/server-space/storage/bolidozor/ZVPP/ZVPP-R6/snapshots/2017/09/05/'] times_ts = numpy.ndarray((0,2)) start_time = datetime.datetime.now() fits_browsed = 0 for path in paths: for root, dirs, files in os.walk(path): print("") print(root, " ") for name in files: if name.endswith(("snap.fits")): try: hdulist = fits.open(os.path.join(root, name)) sysdate = hdulist[1].header['SYSDATE1'] sysdate_beg = sysdate - hdulist[1].header['CDELT2']* hdulist[1].header['NAXIS2'] crval = hdulist[1].header['CRVAL2'] time = [sysdate_beg, crval] times_ts = numpy.vstack( [times_ts, time] ) hdulist.close() print("+", end='') fits_browsed += 1 except Exception: print("-", end='') times_ts.sort(axis=0) print("") print("===================================") print(fits_browsed, "was successfully processed") print("It takes", datetime.datetime.now()-start_time) Explanation: SYSDATE1 This cell browse files with SYSDATE1 parametr in header. (works only on some stations) End of explanation plt.figure(figsize=(30, 20)) data=md.date2num([datetime.datetime.fromtimestamp(ts, datetime.timezone.utc) for ts in times[:,0]/1000]) data_ts=md.date2num([datetime.datetime.fromtimestamp(ts, datetime.timezone.utc) for ts in times_ts[:,0]/1000]) plt.xticks( rotation=25 ) ax=plt.gca() xfmt = md.DateFormatter('%Y-%m-%d %H:%M:%S') ax.xaxis.set_major_formatter(xfmt) ax.set_title('Difference between DATE and CRVAL2 (radio-observer time of 1st .FITS row)') ax.set_xlabel('datetime [UTC]') ax.set_ylabel('time difference (DATE - CRVAL2) [s]') plt.plot(data_ts, (times_ts[:,0]-times_ts[:,1])/1000.0, 'or') plt.plot(data, (times[:,0]-times[:,1])/1000.0, 'xb') plt.plot(data, scipy.signal.savgol_filter(times[:,0]-times[:,1],501, 3)/1000.0, 'w') plt.show() Explanation: Plotter Next cell plots a graph of time differences. Positive number means SYSDATE is ahead of CRVAL2 (radio-observer time is late).<br/> Negativ value means SYSDATE is behind CRVAL2 (radio-observer time is faster). 
End of explanation fits_path = '/home/roman/mnt/server-space/storage/bolidozor/ZVPP/ZVPP-R6/snapshots/2017/09/04/19/20170904192530311_ZVPP-R6_snap.fits' print("") hdulist = fits.open(fits_path) sysdate = hdulist[1].header['SYSDATE1'] #sysdate_beg = sysdate - hdulist[1].header['CDELT2']* hdulist[1].header['NAXIS2'] DATE_ts = datetime.datetime.strptime(hdulist[1].header['DATE-OBS'], '%Y-%m-%dT%H:%M:%S').timestamp()*1000.0 crval = hdulist[1].header['CRVAL2'] hdulist.close() time = (DATE_ts - crval)/1000.0 if time>0: print("difference between times is", time, "s. (SYSDATE is ahead, radio-observer time is late)") else: print("difference between times is", time, "s. (CRVAL2 is ahead, radio-observer time is in the future :-) )") Explanation: <br> Calc time difference of one file End of explanation
12,871
Given the following text description, write Python code to implement the functionality described below step by step Description: Object Detection API Demo <table align="left"><td> <a target="_blank" href="https Step1: Make sure you have pycocotools installed Step2: Get tensorflow/models or cd to parent directory of the repository. Step3: Compile protobufs and install the object_detection package Step4: Imports Step5: Import the object detection module. Step6: Patches Step7: Model preparation Variables Any model exported using the export_inference_graph.py tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader Step8: Loading label map Label maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine Step9: For the sake of simplicity we will test on 2 images Step10: Detection Load an object detection model Step11: Check the model's input signature, it expects a batch of 3-color images of type uint8 Step12: And returns several outputs Step13: Add a wrapper function to call the model, and cleanup the outputs Step14: Run it on each test image and show the results Step15: Instance Segmentation Step16: The instance segmentation model includes a detection_masks output
Python Code: !pip install -U --pre tensorflow=="2.*" !pip install tf_slim Explanation: Object Detection API Demo <table align="left"><td> <a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/models/blob/master/research/object_detection/colab_tutorials/colab_tutorials/object_detection_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab </a> </td><td> <a target="_blank" href="https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/colab_tutorials/object_detection_tutorial.ipynb"> <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td></table> Welcome to the Object Detection API. This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Important: This tutorial is to help you through the first step towards using Object Detection API to build models. If you just just need an off the shelf model that does the job, see the TFHub object detection example. Setup Important: If you're running on a local machine, be sure to follow the installation instructions. This notebook includes only what's necessary to run in Colab. Install End of explanation !pip install pycocotools Explanation: Make sure you have pycocotools installed End of explanation import os import pathlib if "models" in pathlib.Path.cwd().parts: while "models" in pathlib.Path.cwd().parts: os.chdir('..') elif not pathlib.Path('models').exists(): !git clone --depth 1 https://github.com/tensorflow/models Explanation: Get tensorflow/models or cd to parent directory of the repository. End of explanation %%bash cd models/research/ protoc object_detection/protos/*.proto --python_out=. %%bash cd models/research pip install . Explanation: Compile protobufs and install the object_detection package End of explanation import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image from IPython.display import display Explanation: Imports End of explanation from object_detection.utils import ops as utils_ops from object_detection.utils import label_map_util from object_detection.utils import visualization_utils as vis_util Explanation: Import the object detection module. End of explanation # patch tf1 into `utils.ops` utils_ops.tf = tf.compat.v1 # Patch the location of gfile tf.gfile = tf.io.gfile Explanation: Patches: End of explanation def load_model(model_name): base_url = 'http://download.tensorflow.org/models/object_detection/' model_file = model_name + '.tar.gz' model_dir = tf.keras.utils.get_file( fname=model_name, origin=base_url + model_file, untar=True) model_dir = pathlib.Path(model_dir)/"saved_model" model = tf.saved_model.load(str(model_dir)) return model Explanation: Model preparation Variables Any model exported using the export_inference_graph.py tool can be loaded here simply by changing the path. By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Loader End of explanation # List of the strings that is used to add correct label for each box. 
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt' category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) Explanation: Loading label map Label maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine End of explanation # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images') TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg"))) TEST_IMAGE_PATHS Explanation: For the sake of simplicity we will test on 2 images: End of explanation model_name = 'ssd_mobilenet_v1_coco_2017_11_17' detection_model = load_model(model_name) Explanation: Detection Load an object detection model: End of explanation print(detection_model.signatures['serving_default'].inputs) Explanation: Check the model's input signature, it expects a batch of 3-color images of type uint8: End of explanation detection_model.signatures['serving_default'].output_dtypes detection_model.signatures['serving_default'].output_shapes Explanation: And returns several outputs: End of explanation def run_inference_for_single_image(model, image): image = np.asarray(image) # The input needs to be a tensor, convert it using `tf.convert_to_tensor`. input_tensor = tf.convert_to_tensor(image) # The model expects a batch of images, so add an axis with `tf.newaxis`. input_tensor = input_tensor[tf.newaxis,...] # Run inference model_fn = model.signatures['serving_default'] output_dict = model_fn(input_tensor) # All outputs are batches tensors. # Convert to numpy arrays, and take index [0] to remove the batch dimension. # We're only interested in the first num_detections. num_detections = int(output_dict.pop('num_detections')) output_dict = {key:value[0, :num_detections].numpy() for key,value in output_dict.items()} output_dict['num_detections'] = num_detections # detection_classes should be ints. output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64) # Handle models with masks: if 'detection_masks' in output_dict: # Reframe the the bbox mask to the image size. detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( output_dict['detection_masks'], output_dict['detection_boxes'], image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5, tf.uint8) output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy() return output_dict Explanation: Add a wrapper function to call the model, and cleanup the outputs: End of explanation def show_inference(model, image_path): # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = np.array(Image.open(image_path)) # Actual detection. output_dict = run_inference_for_single_image(model, image_np) # Visualization of the results of a detection. 
vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks_reframed', None), use_normalized_coordinates=True, line_thickness=8) display(Image.fromarray(image_np)) for image_path in TEST_IMAGE_PATHS: show_inference(detection_model, image_path) Explanation: Run it on each test image and show the results: End of explanation model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28" masking_model = load_model(model_name) Explanation: Instance Segmentation End of explanation masking_model.output_shapes for image_path in TEST_IMAGE_PATHS: show_inference(masking_model, image_path) Explanation: The instance segmentation model includes a detection_masks output: End of explanation
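Added note, not part of the original notebook: if you only want to draw the confident detections, one possible (illustrative) helper is sketched below. It assumes the per-image output_dict layout produced by run_inference_for_single_image above; the function name filter_detections and the 0.5 threshold are made up for this example.
import numpy as np

def filter_detections(output_dict, min_score=0.5):
    # Keep only detections whose score is at least min_score.
    # Operates on the per-image (already de-batched) dictionary produced above.
    keep = output_dict['detection_scores'] >= min_score
    filtered = dict(output_dict)  # shallow copy; untouched keys stay as-is
    for key in ('detection_boxes', 'detection_scores', 'detection_classes'):
        filtered[key] = np.asarray(output_dict[key])[keep]
    if 'detection_masks_reframed' in output_dict:
        filtered['detection_masks_reframed'] = np.asarray(output_dict['detection_masks_reframed'])[keep]
    filtered['num_detections'] = int(keep.sum())
    return filtered
Passing the filtered dictionary into vis_util.visualize_boxes_and_labels_on_image_array instead of output_dict would then draw only those boxes.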
12,872
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Python This tutorial was originally drawn from Scipy Lecture Notes by this list of contributors. I've continued to modify it as I use it. This work is CC-BY. Author Step1: After you've run this as-is, change the name of the CSV file from cars.csv to cities.csv. Run it again. First Steps Follow along with the instructor in typing instructions Step2: Remember how we saw integers in our CSV script example? The number of columns and number of rows are integers Step3: Floats Note Step4: Booleans Step5: A Python shell can therefore replace your pocket calculator, with the basic arithmetic operations +, -, *, /, % (modulo) natively implemented. Try some things here or follow along with the instructor's examples Step6: Comments Commenting code is good practice and is extremely helpful to help others understand your code. And, often, to help you understand code that you've written earlier. In python, everything following the hash/pound sign # is a comment. Comments can either be their own line(s), or in-line. Use in-line comments sparingly. Step7: Exercise Add three of the same single digit integers (e.g. 1 + 1 + 1). Is the result what you expected? Next, add three of the same tenths digit, which are floats (e.g. .1 + .1 + .1). Is the result what you expected? What happened with those floats? How could we avoid this? There is an explanation and some suggestions in this python documentaion on floating points. Containers Python provides many efficient types of containers, in which collections of data can be stored. Lists A list is an ordered collection of objects, that may have different types. For example Step8: And remember in our CSV script example, we used lots of lists! There was a list to store the column names, which we assumed were in the first row of data Step9: And then each row of the CSV was itself a list, and all the rows were another list. So we used a list of lists, or a matrix. Step10: Indexing Step11: Indexing starts at 0, not 1! Slicing Step12: Note that l[start Step13: The elements of a list may have different types Step14: Python offers a large panel of functions to modify lists, or query them. Here are a few examples; for more details, see https Step15: Add a list to the end of a list with extend() Step16: Two ways to reverse a list Step17: Concatenate lists Step18: Sort Step20: Exercise We used sorted() in our CSV example a few times. That, along with list indexes, helped us get the lowest and highest value. Here's what it looked like Step21: Double quotes are crucial when you have a quote in the string Step22: The newline character is \n, and the tab character is \t. Strings are collections like lists. Hence they can be indexed and sliced, using the same syntax and rules. Indexing strings Step23: Accents and special characters can also be handled in strings because since Python 3, the string type handles unicode (UTF-8) by default.(For a lot more on Unicode, character encoding, and how it relates to python, see https Step24: Strings have many useful methods, such as a.replace as seen above. Remember the a. object-oriented notation and use tab completion or help(str) to search for new methods. Exercise We used a few string methods in the CSV example at the start. See them in this chunk of code? if len(value) &lt; 1 or value.isspace() Step25: Dictionaries A dictionary is basically an efficient table that maps keys to values. 
It is an unordered container Step26: It can be used to conveniently store and retrieve values associated with a name (a string for a date, a name, etc.). See https Step27: The assignment operator Python library reference says Step28: Control Flow Controls the order in which the code is executed. Conditional Expressions if &lt;THING&gt; Evaluates to false Step29: a in b For any collection b check to see if b contains a Step30: Blocks are delimited by indentation Type the following lines in your Python interpreter, and be careful to respect the indentation depth. The Jupyter Notebook automatically increases the indentation depth after a coln Step31: for/range Iterating with an index Step32: But most often, it is more readable to iterate over values Step33: Advanced iteration Iterate over any sequence You can iterate over any sequence (string, list, keys in a dictionary, lines in a file, ...) Step34: Few languages (in particular, languages for scientific computing) allow to loop over anything but integers/indices. With Python it is possible to loop exactly over the objects of interest without bothering with indices you often don’t care about. This feature can often be used to make code more readable. It is not safe to modify the sequence you are iterating over. Keeping track of enumeration number Common task is to iterate over a sequence while keeping track of the item number. * Could use while loop with a counter as above. Or a for loop Step35: But, Python provides a built-in function - enumerate - for this Step36: When looping over a dictionary use .items() Step37: The ordering of a dictionary in random, thus we use sorted() which will sort on the keys. Exercise Countdown to blast off Write code that uses a loop to print a count down from 10 to 1, followed by printing the string "blast off!". There's more than one way to do this, so figure out what works for you, drawing on things we've already learned. List Comprehensions Step38: Same as Step39: Now that you've done the countdown exercise above, consider how you could have used a list comprehension in a solution Step40: Defining Functions Function definition Step41: Function blocks must be indented as other control flow blocks Return statement Functions can optionally return values Step42: By default, functions return None. Note the syntax to define a function Step43: Optional parameters (keyword or named arguments) Step44: Keyword arguments allow you to specify default values. Default values are evaluated when the function is defined, not when it is called. This can be problematic when using mutable types (e.g. dictionary or list) and modifying them in the function body, since the modifications will be persistent across invocations of the function. Global variables Variables declared outside the function can be referenced within the function Step46: Docstrings Documentation about what the function does and its parameters. General convention Step47: There's a great help feature build into Jupyter Step48: Docstring guidelines For the sake of standardization, the Docstring Conventions webpage documents the semantics and conventions associated with Python docstrings. Also, the Numpy and Scipy modules have defined a precise standard for documenting scientific functions, that you may want to follow for your own functions, with a Parameters section, an Examples section, etc. See http Step49: Methods Methods are functions attached to objects. You’ve seen these in our examples on lists, dictionaries, strings, etc... 
Step50: And also Step51: Using alias Step52: Modules are thus a good way to organize code in a hierarchical way. Actually, all the data science tools we are going to use are modules Step53: Good Practices Use meaningful object names Indentation Step54: Spaces Write well-spaced code Step55: A certain number of rules for writing “beautiful” code (and more importantly using the same conventions as anybody else!) are given in the PEP-8 Step56: To read from a file Step57: For more details Step58: File modes Step59: List a directory Step60: Make a directory Step61: Rename the directory Step62: Delete a file Step63: glob Step64: Exception handling in Python It is likely that you have raised Exceptions if you have typed all the previous commands of the tutorial. For example, you may have raised an exception if you entered a command with a typo. Exceptions are raised by different kinds of errors arising when executing Python code. In your own code, you may also catch errors, or define custom error types. You may want to look at the descriptions of the built-in Exceptions when looking for the right exception type. Exceptions Exceptions are raised by errors in Python Step65: Catching exceptions try/except Step66: try/finally Step67: Raising exceptions Capturing and reraising an exception Step68: Exceptions to pass messages between parts of the code
Python Code: # open the source CSV file csv = open("cars.csv") # create a list with the column names. we assume the first row contains them. # we strip the carriage return (if there is one) from the line, then split values on the commas. # Note: this uses a nifty python feature called 'list comprehension' to do it in one line column_names = [i for i in csv.readline().strip().split(',')] # read the rest of the file into a matrix (a list of lists). Use the same strip and split methods. data = [line.strip().split(',') for line in csv.readlines()] # now, try to infer the data types of each column from the values in the first row. # the testing here shows some string methods, like isspace(), isalpha(), isdigit(). # we'll save these data type assumptions because we'll use them later in a report. column_datatypes = [] for value in data[0]: if len(value) < 1 or value.isspace(): column_datatypes.append('string') elif value.isalpha(): column_datatypes.append('string') elif '.' in value or value.isdigit(): column_datatypes.append('numeric') else: column_datatypes.append('string') # now let's do some basic reporting on the csv # overall stats of the file: print("this csv file has " + str(len(column_names)) + " columns and " + str(len(data)) + " rows.") # loop over each column name, do some different things depending on whether we've inferred # it contains string or numeric values. we declare certain variables with 'False' so even if # we can't fill them we can test them without an error. for i, value in enumerate(column_names): average_value = False highest_value = False lowest_value = False # if it's a numeric column, we'll get all the values for this column out of our data matrix, # convert them to float (remember they are all strings by default), and then get the average, # high, and low values. If there's an error doing this, just get the values as strings if column_datatypes[i] == 'numeric': try: column_values = [float(data[j][i]) for j in range(len(data))] average_value = sum(column_values)/len(column_values) highest_value = sorted(column_values)[-1] lowest_value = sorted(column_values)[0] except ValueError: column_values = [data[j][i] for j in range(len(data))] else: column_values = [data[j][i] for j in range(len(data))] # the set function removes duplicates from a list, so taking its length is equivalent # to the number of unique values unique_value_count = len(set(column_values)) # now we start printing. First just the field name. The simple way of formatting a string # is with the + operator. Note: we add one to the index because we don't want our list # to start with zero. print(str(i+1) + ". \"" + value + "\"") # next the type we think it is, and the number of unique values # Note: using the + style of string formatting all non-string values have to be cast to strings print("\t{0} ({1} of {2} unique)".format(column_datatypes[i], unique_value_count, len(data))) # now different details if it's numeric and successfully converted to float, if it's # numeric, and didn't, and otherwise we assume it's a string. 
# Note: also showing a different, more powerful string formatting method here if column_datatypes[i] == 'numeric': if average_value: print("\taverage value: {0:g}".format(average_value)) print("\tlowest value: {0:g}".format(lowest_value)) print("\thighest value: {0:g}".format(highest_value)) else: print("\tNOTE: problems converting values to float!") else: print("\tfirst value: {0:s}".format(column_values[0])) print("\tlast value: {0:s}".format(column_values[-1])) Explanation: Introduction to Python This tutorial was originally drawn from Scipy Lecture Notes by this list of contributors. I've continued to modify it as I use it. This work is CC-BY. Author: Aaron L. Brenner Python is a programming language, as are C, Fortran, BASIC, PHP, etc. Some specific features of Python are as follows: an interpreted (as opposed to compiled) language. Contrary to e.g. C or Fortran, one does not compile Python code before executing it. In addition, Python can be used interactively: many Python interpreters are available, from which commands and scripts can be executed. a free software released under an open-source license: Python can be used and distributed free of charge, even for building commercial software. multi-platform: Python is available for all major operating systems, Windows, Linux/Unix, MacOS X, most likely your mobile phone OS, etc. a very readable language with clear non-verbose syntax* a language for which a large variety of high-quality packages are available for various applications, from web frameworks to scientific computing. a language very easy to interface with other languages, in particular C and C++. Some other features of the language are illustrated just below. For example, Python is an object-oriented language, with dynamic typing (the same variable can contain objects of different types during the course of a program). See https://www.python.org/about/ for more information about distinguishing features of Python. Some Key Learning and Reference Resources If you are interested in moving forward with learning Python, it is worth your time to get acquainted with all of these resources. The tutorial will step you through more Python, and you should be familiar with the basics of the Python language and its standard library. Python documentation home https://docs.python.org/3/ Tutorial https://docs.python.org/3/tutorial/index.html Python Language Reference https://docs.python.org/3/reference/index.html#reference-index The Python Standard Library https://docs.python.org/3/library/index.html#library-index Additional Learning Resources The Python Cookbook, 3rd Edition - This is one of many various 'cookbooks'. These can be very useful not only for seeing solutions to common problems, but also as a way to read brief examples of ideomatic code. Reading code snippets in this way can be a great compliment to language reference documentation and traditional tutorials. http://chimera.labs.oreilly.com/books/1230000000393/ Also, don't be embarrased to Google your questions! Try some variation of python [thing] example Let's dive in* with an example that does something (kind of) useful: * hat tip to Mark Pilgrim This is a script that inspects a CSV data file and reports on some summary characteristics. Take a minute to read over the code before running it. Don't worry if you don't understand all of what's happening. We'll step through some of this code in more detail as we learn the basics of python. For now, just try to get a feel for what a complete script looks like. 
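As an added aside (not in the original tutorial): the same file could also be read with the standard-library csv module inside a with block, which closes the file automatically. The module is imported under an alias here because the script above already uses the name csv for its open file handle; cars.csv is the same assumed input file.
import csv as csv_module

with open("cars.csv", newline="") as f:
    reader = csv_module.reader(f)
    header = next(reader)           # first row: column names
    rows = [row for row in reader]  # remaining rows as lists of strings

print("columns:", header)
print("row count:", len(rows))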
After you've read it over, go ahead and execute it. End of explanation 1 + 1 Explanation: After you've run this as-is, change the name of the CSV file from cars.csv to cities.csv. Run it again. First Steps Follow along with the instructor in typing instructions: Two variables a and b have been defined above. Note that one does not declare the type of an variable before assigning its value. In addition, the type of a variable may change, in the sense that at one point in time it can be equal to a value of a certain type, and a second point in time, it can be equal to a value of a different type. b was first equal to an integer, but it became equal to a string when it was assigned the value ’hello’. But you can see that type often matters, as when we try to print an integer in the midst of a string. Basic Types Numerical Types Python supports the following numerical, scalar types. Integer End of explanation a = len(column_names) type(a) Explanation: Remember how we saw integers in our CSV script example? The number of columns and number of rows are integers End of explanation c = 2.1 type(c) type(average_value) Explanation: Floats Note: most decimal fractions cannot be represented exactly as binary fractions, and certain operations using floats may lead to surprising results. For more details, start here. End of explanation 3 > 4 test = (3 > 4) test type(test) Explanation: Booleans End of explanation float(1) Explanation: A Python shell can therefore replace your pocket calculator, with the basic arithmetic operations +, -, *, /, % (modulo) natively implemented. Try some things here or follow along with the instructor's examples: Type conversion (casting): End of explanation # this is a comment. We might say, for example, that we're setting the value of Pi: pi = 3.14 pie = 'pumpkin' # and this is an in-line comment. Setting the value of pie. Explanation: Comments Commenting code is good practice and is extremely helpful to help others understand your code. And, often, to help you understand code that you've written earlier. In python, everything following the hash/pound sign # is a comment. Comments can either be their own line(s), or in-line. Use in-line comments sparingly. End of explanation l = ['red', 'blue', 'green', 'black', 'white'] type(l) Explanation: Exercise Add three of the same single digit integers (e.g. 1 + 1 + 1). Is the result what you expected? Next, add three of the same tenths digit, which are floats (e.g. .1 + .1 + .1). Is the result what you expected? What happened with those floats? How could we avoid this? There is an explanation and some suggestions in this python documentaion on floating points. Containers Python provides many efficient types of containers, in which collections of data can be stored. Lists A list is an ordered collection of objects, that may have different types. For example: End of explanation column_names Explanation: And remember in our CSV script example, we used lots of lists! There was a list to store the column names, which we assumed were in the first row of data: End of explanation data Explanation: And then each row of the CSV was itself a list, and all the rows were another list. So we used a list of lists, or a matrix. End of explanation column_names[0] column_names[-1] Explanation: Indexing: accessing individual objects contained in the list: End of explanation column_names[1:3] Explanation: Indexing starts at 0, not 1! Slicing: obtaining sublists of regularly-spaced elements: End of explanation column_names[0] = 'LOOK AT ME!' 
column_names Explanation: Note that l[start:stop] contains the elements with indices i such as start&lt;= i &lt; stop (i ranging from start to stop-1). Therefore, l[start:stop] has (stop - start) elements. Slicing syntax: l[start:stop:stride] Lists are mutable objects and can be modified: End of explanation l = [3, -200, 'hello'] l Explanation: The elements of a list may have different types: End of explanation L = ['red', 'blue', 'green', 'black', 'white'] L.append('pink') L L.pop() # removes and returns the last item L Explanation: Python offers a large panel of functions to modify lists, or query them. Here are a few examples; for more details, see https://docs.python.org/tutorial/datastructures.html#more-on-lists Add and remove elements: End of explanation L.extend(['pink', 'purple']) # extend L, in-place L L = L[:-2] L Explanation: Add a list to the end of a list with extend() End of explanation r = L[::-1] r r.reverse() # in-place r Explanation: Two ways to reverse a list: End of explanation r + L Explanation: Concatenate lists: End of explanation sorted(r) # new object r r.sort() #in-place r Explanation: Sort: End of explanation s = 'Hello, how are you?' s = "Hi, what's up" s = '''Hello, # tripling the quotes allows the how are you''' # the string to span more than one line s = Hi, what's up? Explanation: Exercise We used sorted() in our CSV example a few times. That, along with list indexes, helped us get the lowest and highest value. Here's what it looked like: highest_value = sorted(column_values)[-1] lowest_value = sorted(column_values)[0] Try creating your own list of unsorted items. Can you replicate the highest and lowest value expressions above? What happens if your list is not made up of numbers? Methods The notation r.method() (e.g. r.append(3) and L.pop()) is our first example of object-oriented programming (OOP). Being a list, the object r has a method function that is called using the notation .methodname(). We will talk about functions later in this tutorial. When you're using jupyter, to see all the different methods available to a variable, type a period after the variable name and hit the tab key. Strings We've already seen strings a few times. Python supports many different string syntaxes (single, double or triple quotes): End of explanation s = 'Hi, what's up?' Explanation: Double quotes are crucial when you have a quote in the string: End of explanation a = "hello" a[0] a[1] a[-1] Explanation: The newline character is \n, and the tab character is \t. Strings are collections like lists. Hence they can be indexed and sliced, using the same syntax and rules. Indexing strings: End of explanation a = "hello, world!" a[2] = 'z' a.replace('l', 'z', 1) a.replace('l', 'z') Explanation: Accents and special characters can also be handled in strings because since Python 3, the string type handles unicode (UTF-8) by default.(For a lot more on Unicode, character encoding, and how it relates to python, see https://docs.python.org/3/howto/unicode.html). A string is an immutable object and it is not possible to modify its contents. If you want to modify a string, you'll create a new string from the original one (or use a method that returns a new string). End of explanation 'An integer: {0} ; a float: {1} ; another string: {2} '.format(1, 0.1, 'string') i = 102 filename = 'processing_of_dataset_{0}.txt'.format(i) filename Explanation: Strings have many useful methods, such as a.replace as seen above. Remember the a. 
object-oriented notation and use tab completion or help(str) to search for new methods. Exercise We used a few string methods in the CSV example at the start. See them in this chunk of code? if len(value) &lt; 1 or value.isspace(): column_datatypes.append('string') elif value.isalpha(): column_datatypes.append('string') elif '.' in value or value.isdigit(): Now you try. Create a new variable and assign it with a string. Then, try a few of python's string methods to see how you can return different versions of your string, or test whether it has certain characteristics. String formatting: We also saw string formatting in the CSV example, when we printed some of the reporting: print("\t{0} ({1} of {2} unique)".format(column_datatypes[i], unique_value_count, len(data))) End of explanation tel = {'emmanuelle': 5752, 'sebastian': 5578} tel['francis'] = 5915 tel tel['sebastian'] tel.keys() tel.values() 'francis' in tel Explanation: Dictionaries A dictionary is basically an efficient table that maps keys to values. It is an unordered container: End of explanation d = {'a':1, 'b':2, 3:'hello'} d Explanation: It can be used to conveniently store and retrieve values associated with a name (a string for a date, a name, etc.). See https://docs.python.org/tutorial/datastructures.html#dictionaries for more information. A dictionary can have keys (resp. values) with different types: End of explanation a = [1, 2, 3] b = a a b a is b b[1] = "hi!" a Explanation: The assignment operator Python library reference says: Assignment statements are used to (re)bind names to values and to modify attributes or items of mutable objects. In short, it works as follows (simple assignment): 1. an expression on the right hand side is evaluated, the corresponding object is created/obtained 2. a name on the left hand side is assigned, or bound, to the r.h.s. object Things to note: * a single object can have several names bound to it: End of explanation if 2**2 == 4: print('Obviously') Explanation: Control Flow Controls the order in which the code is executed. Conditional Expressions if &lt;THING&gt; Evaluates to false: * any number equal to zero (0,0.0) * an empty container (list, dictionary) * False, None Evaluates to True: * everything else a == b Tests equality: if/elif/else End of explanation b = [1,2,3] 2 in b 5 in b Explanation: a in b For any collection b check to see if b contains a: End of explanation a = 10 if a == 1: print(1) elif a == 2: print(2) else: print("A lot") Explanation: Blocks are delimited by indentation Type the following lines in your Python interpreter, and be careful to respect the indentation depth. The Jupyter Notebook automatically increases the indentation depth after a coln : sign; to decrease the indentation depth, go four spaces to the left with the Backspace key. Press the Enter key twice to leave the logical block. End of explanation for i in range(4): print(i) Explanation: for/range Iterating with an index: End of explanation for word in ('cool', 'powerful', 'readable'): print('Python is %s ' % word) Explanation: But most often, it is more readable to iterate over values: End of explanation vowels = 'aeiouy' for i in 'powerful': if i in vowels: print(i) message = "Hello how are you?" 
message.split() # returns a list for word in message.split(): print(word) Explanation: Advanced iteration Iterate over any sequence You can iterate over any sequence (string, list, keys in a dictionary, lines in a file, ...): End of explanation words = ['cool', 'powerful', 'readable'] for i in range(0, len(words)): print(i, words[i]) Explanation: Few languages (in particular, languages for scientific computing) allow to loop over anything but integers/indices. With Python it is possible to loop exactly over the objects of interest without bothering with indices you often don’t care about. This feature can often be used to make code more readable. It is not safe to modify the sequence you are iterating over. Keeping track of enumeration number Common task is to iterate over a sequence while keeping track of the item number. * Could use while loop with a counter as above. Or a for loop: End of explanation for index, item in enumerate(words): print(index, item) Explanation: But, Python provides a built-in function - enumerate - for this: End of explanation d = {'a': 1, 'b':1.2, 'c':"hi"} for key, val in sorted(d.items()): print('Key: %s has value: %s ' % (key, val)) Explanation: When looping over a dictionary use .items(): End of explanation [i**2 for i in range(4)] Explanation: The ordering of a dictionary in random, thus we use sorted() which will sort on the keys. Exercise Countdown to blast off Write code that uses a loop to print a count down from 10 to 1, followed by printing the string "blast off!". There's more than one way to do this, so figure out what works for you, drawing on things we've already learned. List Comprehensions End of explanation l = [] for i in range(4): l.append(i) l Explanation: Same as: End of explanation [10 - i for i in range(10)] Explanation: Now that you've done the countdown exercise above, consider how you could have used a list comprehension in a solution: End of explanation def test(): print('in test function') test() Explanation: Defining Functions Function definition End of explanation def disk_area(radius): return 3.14 * radius * radius disk_area(1.5) Explanation: Function blocks must be indented as other control flow blocks Return statement Functions can optionally return values: End of explanation def double_it(x): return x * 2 double_it(3) double_it() Explanation: By default, functions return None. Note the syntax to define a function: * the def keyword; * is followed by the function’s name, then * the arguments of the function are given between parentheses followed by a colon. * the function body; * and return object for optionally returning values. Parameters Mandatory parameters (positional arguments): End of explanation def double_it(x=2): return x * 2 double_it() double_it(3) Explanation: Optional parameters (keyword or named arguments) End of explanation # We're defining a global variable for pi, and it's actually a special kind of global # because we intend it to be constant (i.e. it's value doesn't change). There's a convention # of using uppercase in naming constants. See https://www.python.org/dev/peps/pep-0008/#constants PI = 3.14159 def disk_area(radius): return PI * radius * radius disk_area(1.5) Explanation: Keyword arguments allow you to specify default values. Default values are evaluated when the function is defined, not when it is called. This can be problematic when using mutable types (e.g. dictionary or list) and modifying them in the function body, since the modifications will be persistent across invocations of the function. 
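Here is an added illustration (not from the original notebook) of the mutable-default pitfall just described, together with the usual None-sentinel fix; the function names are invented for the example.
def append_bad(item, bucket=[]):      # the SAME list object is reused on every call
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):   # use None as a sentinel and build a fresh list inside
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad(1), append_bad(2))   # [1, 2] [1, 2]  <- surprising shared state
print(append_good(1), append_good(2)) # [1] [2]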
Global variables Variables declared outside the function can be referenced within the function: End of explanation def funcname(params): Concise one-line sentence describing the function. Extended summary which can contain multiple paragraphs. # function body pass Explanation: Docstrings Documentation about what the function does and its parameters. General convention: End of explanation funcname? Explanation: There's a great help feature build into Jupyter: type a question mark after any object or function to get quick access to its docstring. Try it: End of explanation import os os Explanation: Docstring guidelines For the sake of standardization, the Docstring Conventions webpage documents the semantics and conventions associated with Python docstrings. Also, the Numpy and Scipy modules have defined a precise standard for documenting scientific functions, that you may want to follow for your own functions, with a Parameters section, an Examples section, etc. See http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard and http://projects.scipy.org/numpy/browser/trunk/doc/example.py#L37 Create your own function Use any features of Python that we've already worked on, or play with something new from the documentation. Write a function, and include a docstring explaining the function's purpose. Test your function by executing it. If your function uses parameters, try calling the function a few times with different parameters. Define the function here: Call the function here: Importing modules Importing objects from modules Modules let you use code that doesn't reside within the notebook and might be part of the standard library. (more on this later) End of explanation os.listdir('.') Explanation: Methods Methods are functions attached to objects. You’ve seen these in our examples on lists, dictionaries, strings, etc... End of explanation from os import listdir listdir('.') Explanation: And also: End of explanation import pandas as pd Explanation: Using alias: End of explanation import pandas as pd pd.Series([0,1,2,3,4,5,6,7,8,9]) Explanation: Modules are thus a good way to organize code in a hierarchical way. Actually, all the data science tools we are going to use are modules: End of explanation long_line = "Here is a very very long line \ ... that we break in two parts." Explanation: Good Practices Use meaningful object names Indentation: no choice! Indenting is compulsory in Python! Every command block following a colon bears an additional indentation level with respect to the previous line with a colon. One must therefore indent after def f(): or while:. At the end of such logical blocks, one decreases the indentation depth (and re-increases it if a new block is entered, etc.) Strict respect of indentation is the price to pay for getting rid of { or ; characters that delineate logical blocks in other languages. Improper indentation leads to errors such as: ``` IndentationError: unexpected indent (test.py, line 2) `` All this indentation business can be a bit confusing in the beginning. However, with the clear indentation, and in the absence of extra characters, the resulting code is very nice to read compared to other languages. * **Indentation depth**: Inside your text editor, you may choose to indent with any positive number of spaces (1, 2, 3, 4, ...). However, it is considered good practice to **indent with 4 spaces**. You may configure your editor to map theTab` key to a 4-space indentation. In Python(x,y), the editor is already configured this way. 
* Style guidelines Long lines: you should not write very long lines that span over more than (e.g.) 80 characters. Long lines can be broken with the \ character End of explanation a = 1 # yes a=1 # too cramped Explanation: Spaces Write well-spaced code: put whitespaces after commas, around arithmetic operators, etc.: End of explanation f = open('workfile.txt', 'w') # opens the workfile file in writing mode type(f) f.write('This is a test \nand another test') f.close() # always use close() after opening a file! Very important! Explanation: A certain number of rules for writing “beautiful” code (and more importantly using the same conventions as anybody else!) are given in the PEP-8: Style Guide for Python Code. Input and Output We write or read strings to/from files (other types must be converted to strings). To write in a file: End of explanation f = open('workfile.txt', 'r') s = f.read() print(s) f.close() Explanation: To read from a file: End of explanation f = open('workfile.txt', 'r') for line in f: print(line) f.close() Explanation: For more details: https://docs.python.org/tutorial/inputoutput.html Iterating over a file End of explanation import os os.getcwd() Explanation: File modes: * r: Read-only * w: Write-only - Note: This will create a new file or overwrite an existing file * a: Append to a file * r+: Read and Write For more information about file modes read the documentation for the open() function. https://docs.python.org/3.5/library/functions.html#open The Standard Library Reference documentation for this section: * The Python Standard Library documentation: https://docs.python.org/library/index.html * Python Essential Reference, David Beazley, Addison-Wesley Professional os module: operating system functionality "A portable way of using operating system dependent functionality.” **Directory and file manipulation Get the current directory: End of explanation os.listdir(os.curdir) Explanation: List a directory: End of explanation os.mkdir('junkdir') 'junkdir' in os.listdir(os.curdir) Explanation: Make a directory: End of explanation os.rename('junkdir', 'foodir') 'junkdir' in os.listdir(os.curdir) 'foodir' in os.listdir(os.curdir) os.rmdir('foodir') #remove directory 'foodir' in os.listdir(os.curdir) Explanation: Rename the directory: End of explanation fp = open('junk.txt', 'w') fp.close() 'junk.txt' in os.listdir(os.curdir) os.remove('junk.txt') 'junk.txt' in os.listdir(os.curdir) Explanation: Delete a file: End of explanation import glob glob.glob('*.txt') Explanation: glob: Pattern matching on files The glob module provides convenient file pattern matching. Find all files ending in .txt: End of explanation 1/0 d = {1:1, 2:2} d[3] l = [1, 2, 3] l[4] l.foobar Explanation: Exception handling in Python It is likely that you have raised Exceptions if you have typed all the previous commands of the tutorial. For example, you may have raised an exception if you entered a command with a typo. Exceptions are raised by different kinds of errors arising when executing Python code. In your own code, you may also catch errors, or define custom error types. You may want to look at the descriptions of the built-in Exceptions when looking for the right exception type. Exceptions Exceptions are raised by errors in Python: End of explanation while True: try: x = int(input('Please enter a number: ')) break except ValueError: print('That was no valid number. 
Try again...') x Explanation: Catching exceptions try/except End of explanation try: x = int(input('Please enter a number: ')) finally: print('Thank you for your input.') Explanation: try/finally End of explanation def filter_name(name): try: name = name.encode('ascii') except UnicodeError as e: if name == 'Gaël': print("OK, Gaël") else: raise e return name filter_name("Gaël") filter_name('Stéfan') Explanation: Raising exceptions Capturing and reraising an exception: End of explanation def achilles_arrow(x): if abs(x - 1) < 1e-3: raise StopIteration x = 1 - (1-x)/2. return x x = 0 while True: try: x = achilles_arrow(x) except StopIteration: break x Explanation: Exceptions to pass messages between parts of the code: End of explanation
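As an added sketch (not part of the original notebook), the "define custom error types" idea mentioned above can look like this; the class and function names are illustrative only.
class ConvergenceError(Exception):
    """Raised when an iteration fails to converge."""

def iterate(x, max_steps=10):
    # Same halving iteration as achilles_arrow, but signalling failure with a custom exception.
    for _ in range(max_steps):
        x = 1 - (1 - x) / 2.0
        if abs(x - 1) < 1e-3:
            return x
    raise ConvergenceError("no convergence after {} steps".format(max_steps))

try:
    print(iterate(0.0, max_steps=5))
except ConvergenceError as err:
    print("caught:", err)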
12,873
Given the following text description, write Python code to implement the functionality described below step by step Description: Scraping Reviews This notebook shows how to scrape reviews from Indeed and Glassdoor. To visualize the ratings go to the Ratings notebook and to do topic modeling go to the Topic Modeling notebook. Before, make sure you have MongoDB up and running. Parameters Step1: Scrape job listings from Indeed Step2: Scrape company reviews from Indeed This takes all the companies that appear in the jobs scraped. Step3: Fix Company Names Indeed's company names are inconsistent. The same company can be listed several times with various spellings/typos/words. It's necessary to look at the companies and fix the names. The utils module has a function which takes a dictionary mapping each old name to its new one (names not in the dictionary are left as is). See below for an example (the one I used had over 30 name fixes). Step4: Scrape Glassdoor Step5: Final Fixes Look at the failed companies. Often they couldn't be found on glassdoor because of an issue with their name. You might need to fix the names again (and search on glassdoor for the name some companies are listed under). Beware of encoding issues Step6: Here I would do one last check to see which companies were scraped in glassdoor and indeed. Occasionally the wrong company might have been scraped on glassdoor.
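The real name fixing lives in the project's utils module, which is not shown here. Purely as an illustration of the behaviour described above (names found in the dictionary are replaced, everything else passes through unchanged), a minimal stand-alone sketch might look like the following before the notebook code below; these names are hypothetical, not the project's actual code.
def normalize_name(name, fixes):
    # Return the corrected company name if we have a fix for it, otherwise the name unchanged.
    return fixes.get(name, name)

fixes = {"Barclays Investment Bank": "Barclays",
         "Dun & Brandstreet": "Dun & Bradstreet"}

print(normalize_name("Barclays Investment Bank", fixes))  # Barclays
print(normalize_name("Google", fixes))                    # Google (not in the dictionary)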
Python Code: # Search settings KEYWORD_FILTER = "Data Scientist" LOCATION_FILTER = "New York City, NY" # Other settings MAX_PAGES_COMPANIES = 500 MAX_PAGES_REVIEWS = 500 import os import re from datetime import datetime from pymongo import MongoClient import indeed import glassdoor import utils # DB settings client = MongoClient() indeed_db = client.indeed indeed_jobs = indeed_db.jobs indeed_reviews = indeed_db.reviews glassdoor_db = client.glassdoor glassdoor_reviews = glassdoor_db.reviews Explanation: Scraping Reviews This notebook shows how to use the scrape reviews from Indeed and Glassdoor. To visualize the ratings go to the Ratings notebook and to do topic modeling go to the Topic Modeling notebook. Before, make sure you have MongoDB up and running. Parameters End of explanation jobs = indeed.get_jobs(KEYWORD_FILTER, LOCATION_FILTER, indeed_jobs, MAX_PAGES_COMPANIES) Explanation: Scrape job listings from Indeed End of explanation indeed.get_all_company_reviews(jobs, indeed_reviews, MAX_PAGES_REVIEWS) indeed_reviews.find_one() Explanation: Scrape company reviews from Indeed This takes all the companies that appear in the jobs scraped. End of explanation companies = list(set(utils.get_company_names(indeed_reviews))) companies[:5] fix_companies = {'Argus, ISO, Verisk Analytics, Verisk Climate, Veri...': 'Verisk Analytics', 'Barclays Investment Bank': 'Barclays', 'Dun & Brandstreet': u'Dun & Bradstreet', 'Dun & Broadstreet':u'Dun & Bradstreet', 'World Business Lenders - New York, NY':'World Business Lenders' } utils.fix_all_company_names(indeed_reviews, fix_companies) companies = list(set(utils.get_company_names(indeed_reviews))) Explanation: Fix Company Names Indeed's company names are inconsistent. The same company can be listed several times with various spellings/typos/words. It's necessary to look at the companies and fix the names. The utils module has a function which takes a dictionary that takes the old name and returns the new one (names not in the dictionary are left as is). See below for an example (the one I used had over 30 name fixes. End of explanation visited_companies, failed_companies = glassdoor.get_all_company_reviews(companies, glassdoor_reviews, MAX_PAGES_REVIEWS) Explanation: Scrape Glassdoor End of explanation # fix_companies = {u'SigmaTek':u'SigmaTek Consulting LLC', # } # utils.fix_all_company_names(indeed_reviews, fix_companies) # fixed_failed_companies = fixed_failed_companies = [utils.fix_company_name(company, # fix_companies, True) for company in failed_companies] # visited_companies2, failed_companies = glassdoor.get_all_company_reviews(fixed_failed_companies, # glassdoor_reviews, MAX_PAGES_REVIEWS) Explanation: Final Fixes Look at the failed companies. Often they couldn't be found on glassdoor because of an issue with their name. You might need to fix the names again (and search on glassdoor for the name some companies are listed under). Beware of encoding issues: if you pass an optional flag to utils.fix_company_name, you can encode the company names to ascii. Note: this is usually quite a bit slower than Indeed because there are many more reviews (e.g. Goldman Sachs has 198 pages!). 
End of explanation glassdoor_companies = set(utils.get_company_names(glassdoor_reviews)) indeed_companies = set(utils.get_company_names(indeed_reviews)) # Remove the extra companies: extra_companies = glassdoor_companies - indeed_companies for company in extra_companies: glassdoor_reviews.remove({'company' : company}) print("Missing companies", indeed_companies - glassdoor_companies) Explanation: Here I would do one last check to see which companies were scraped in glassdoor and indeed. Occasionally the wrong company might have been scraped on glassdoor. End of explanation
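One added side note, not part of the original notebook: Collection.remove() has since been deprecated in pymongo, so with a current driver the same cleanup could plausibly be written with delete_many and an $in filter, assuming the same glassdoor_reviews collection and extra_companies set as above.
if extra_companies:
    # Delete all review documents whose company is in the extra set, in one call.
    result = glassdoor_reviews.delete_many({"company": {"$in": list(extra_companies)}})
    print("removed", result.deleted_count, "review documents")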
12,874
Given the following text description, write Python code to implement the functionality described below step by step Description: INF-495, v0.01, Claudio Torres, [email protected]. DI-UTFSM Textbook Step1: First algorithm Step2: Second algorithm Step3: Third algorithm Step4: 8.2 Stochastic predator-prey model
Python Code: import numpy as np import scipy.sparse.linalg as sp import sympy as sym from scipy.linalg import toeplitz import ipywidgets as widgets from ipywidgets import IntSlider import matplotlib.pyplot as plt %matplotlib inline from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter plt.style.use('ggplot') import matplotlib as mpl mpl.rcParams['font.size'] = 14 mpl.rcParams['axes.labelsize'] = 20 mpl.rcParams['xtick.labelsize'] = 14 mpl.rcParams['ytick.labelsize'] = 14 sym.init_printing() from scipy.integrate import odeint from ipywidgets import interact def plot_matrices_with_values2(ax,M,flag_values=False): N=M.shape[0] cmap = plt.get_cmap('GnBu') ax.matshow(M, cmap=cmap) if flag_values: for i in np.arange(0, N): for j in np.arange(0, N): # I had to change the order of indices to make it right ax.text(i, j, '{:.2f}'.format(M[j,i]), va='center', ha='center', color='r') ids = np.arange(64) # Making it Python compatible I = np.reshape(ids,(8,8),order='F') print(I) # Defining occupied squares ([row,columns]) occupied = np.array([[2, 3], [5, 5], [6, 2], [6, 3]])-1 # Subtracting one to make it Python compatible! print(occupied) for oc in occupied: print(I[oc[0],oc[1]]) # Defining the peice piece = 'Knight' # 'King' or 'Knight' if piece == 'King': # Squares relative to the current one reached by the King Mask = np.zeros((8,2)) k=0 piece_range=[-1,0,1] for i in piece_range: for j in piece_range: if not ((i==0) and (j==0)): Mask[k,0]=i Mask[k,1]=j k=k+1 print(Mask) elif piece == 'Knight': # Squares relative to the current one reached by the Knight Mask = np.zeros((8,2)) k=0 piece_range=[-2,-1,0,1,2] for i in [-2,-1,0,1,2]: for j in [-2,-1,0,1,2]: if np.linalg.norm([i,j],ord=1)==3: Mask[k,0]=i Mask[k,1]=j k=k+1 print(Mask) else: print('Wrong piece!') M = np.zeros((64,64)) # Scan through all starting squares # find all possible destinations for n in np.arange(8): for m in np.arange(8): k=I[n,m] # Index to current square # Find possible destination square using the mask P = np.array([n, m])+Mask # Discard occupied squares P_tmp=[] for p in P: flag=0 for oc_sq in occupied: if abs(oc_sq[0]-p[0])+abs(oc_sq[1]-p[1])<1e-10: flag=1 if flag==0: P_tmp.append(p) P=np.array(P_tmp,dtype=int) # Discarding squares outside the board P_tmp=[] for p in P: if p[0]>=0 and p[0]<=7 and p[1]>=0 and p[1]<=7: P_tmp.append(p) P=np.array(P_tmp,dtype=int) nP = P.shape[0] # number of possible destinations # Fill in the probabilities for p in P: Ip = I[p[0],p[1]] # Destination index M[Ip,k] = 1/nP # Moving probability fig, (ax1) = plt.subplots(1, 1, figsize=(10,10)) plot_matrices_with_values2(ax1,M,False) Explanation: INF-495, v0.01, Claudio Torres, [email protected]. DI-UTFSM Textbook: Computational Mathematical Modeling, An Integrated Approach Across Scales by Daniela Calvetti and Erkki Somersalo. 
Chapter 8 8.1 Markov process and random walk End of explanation def plot_random_walk_output(nsteps=1): np.random.seed(0) # nsteps = 100000 prev = I[1,2] ncounts = np.zeros(64) squares_ids = [prev] for n in np.arange(nsteps): # Random draw using the column M[:,prev] next_step = np.random.choice(np.arange(64),p=M[:,prev]) ncounts[next_step] = ncounts[next_step] + 1 prev = next_step squares_ids.append(next_step) fig, (ax1) = plt.subplots(1, 1, figsize=(10,10)) plot_matrices_with_values2(ax1,np.log10(np.reshape(ncounts/np.sum(ncounts),(8,8),order='F')),True) # print(ncounts) # print(squares_ids) # print(I) interact(plot_random_walk_output,nsteps=(0,500000,1)) Explanation: First algorithm: Random walk. Now I added the plot of the resulting probabilities, it is interesting to observe the evolution from the beginning. Compare with the other two outcomes. End of explanation def plot_P_n(nj, I, M): # Defining initial distribution (i0,j0)=(0,1) pcurr = np.zeros(64) pcurr[I[i0,j0]] = 1 for j in np.arange(nj): pcurr = np.dot(M,pcurr) pcurrM = np.reshape(pcurr,(8,8),order='F') fig, (ax1) = plt.subplots(1, 1, figsize=(10,10)) plot_matrices_with_values2(ax1,np.log10(pcurrM),True) # print(pcurr) interact(lambda nj=0: plot_P_n(nj, I=I, M=M), nj = (0,2000,1)) Explanation: Second algorithm: Power iteration. Warning: For the case of the Knight, this algorithm shows an interesting behavior and that seems to be related there are two eigenvalues with the magnitud equal to 1, one of the is actually equal to 1 but the other is equally to -1. This does not happen for the case of the King. End of explanation w, v = np.linalg.eig(M) w v0=np.real(v[:,0])/np.sum(np.real(v[:,0])) print(v0) v1=np.real(v[:,1])/np.sum(np.real(v[:,1])) print(v1) if piece == 'King': v_selected = v0 elif piece == 'Knight': v_selected = v1 else: print('Wrong piece!') v_selected=np.full((64),np.nan) fig, (ax1) = plt.subplots(1, 1, figsize=(10,10)) plot_matrices_with_values2(ax1,np.log10(np.reshape(np.real(v_selected),(8,8),order='F')),True) Explanation: Third algorithm: Eigenvalues and eigenvectors. We need to look for the eigenvector associated to the eigenvalue equal to 1. End of explanation def plot_S_Pred_Prey(seed=0): Nj = 200 # number of X adult pair at t = t(j) Mj = 50 # number of Y adult pair at t = t(j) p_mean = 10 # mean number of X offsprings/adult pair p_std = 3 # std of the number of the X offspring/adult pair beta = 0.3 # Survival probability of X in the absence of Y Mstar = 50 # number of Y that halves the survival prob. of X q_mean = 4 # mean number of Y offspring/adult pair q_std = 2 # std of the number of the Y offsrping/adult pair gamma = 0.6 # Survival prob. of Y in adundance of X Nstar = 400 # Number of X pairs at which the survival prob. 
of Y is halved nrounds = 40 # Number of generations allowed N = np.full(nrounds,np.nan) N[0] = Nj M = np.full(nrounds,np.nan) M[0] = Mj Surv = np.full((nrounds-1,2),np.nan) np.random.seed(seed) for j in np.arange(1,nrounds): # Draw the number of offsprings of species X Pj = Nj*p_mean+np.sqrt(Nj)*p_std*np.random.randn() # Survival probability of the offspring of species X s = beta/(1+(Mj/Mstar)**2) Nj = s*Pj+np.sqrt(s*Pj*(1-s))*np.random.randn() N[j] = 0.5*Nj # Draw the number of offsprings of species Y Qj = Mj*q_mean+np.sqrt(Mj)*q_std*np.random.randn() # Survival probability of the offspring of species Y r = gamma*(1-1/(1+(Nj/Nstar)**2)) Mj = r*Qj+np.sqrt(r*Qj*(1-r))*np.random.randn() M[j] = 0.5*Mj # Saving the survival probabilities for later analysis Surv[j-1,:]=[s,r] s=Surv[:,0] r=Surv[:,1] smean = np.sum(s)/(nrounds-1) rmean = np.sum(r)/(nrounds-1) Surv_c = Surv-np.array([smean,rmean]) covSurv = np.dot(Surv_c.T,Surv_c)/(nrounds-1) corrcoeff = covSurv[0,1]/np.sqrt(covSurv[0,0]*covSurv[1,1]) fig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(10,10)) ax4.plot(N,'r.-') ax4.plot(M,'b.-') ax4.grid(True) s=Surv[:,0] r=Surv[:,1] ax1.plot(s,r,'.') ax1.set_xlim(0,1) ax1.set_ylim(0,1) ax1.set_xlabel(r'$s_i$') ax1.set_ylabel(r'$r_i$') ax1.text(0.5, 0.2, 'c={:.5f}'.format(corrcoeff), va='center', ha='center', color='k') ax2.hist(r,orientation='horizontal') ax2.set_ylim(0,1) ax3.hist(s) ax3.set_xlim(0,1) plt.show() interact(plot_S_Pred_Prey,seed=(0,100,1)) Explanation: 8.2 Stochastic predator-prey model End of explanation
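Added for comparison (not in the original notebook): replacing every random draw by its expected value gives a mean-field version of the same update rules, which can serve as a rough sanity check for the stochastic runs above. The default parameter values mirror the ones used in plot_S_Pred_Prey; the function name is made up.
import numpy as np

def mean_field_pred_prey(nrounds=40, Nj=200.0, Mj=50.0,
                         p_mean=10.0, beta=0.3, Mstar=50.0,
                         q_mean=4.0, gamma=0.6, Nstar=400.0):
    # Same update rules as plot_S_Pred_Prey above, with the noise terms dropped.
    N, M = np.empty(nrounds), np.empty(nrounds)
    N[0], M[0] = Nj, Mj
    for j in range(1, nrounds):
        Pj = Nj * p_mean                        # expected number of X offspring
        s = beta / (1 + (Mj / Mstar) ** 2)      # survival probability of X
        Nj = s * Pj
        N[j] = 0.5 * Nj
        Qj = Mj * q_mean                        # expected number of Y offspring
        r = gamma * (1 - 1 / (1 + (Nj / Nstar) ** 2))  # survival probability of Y
        Mj = r * Qj
        M[j] = 0.5 * Mj
    return N, M

N, M = mean_field_pred_prey()
print(N[-1], M[-1])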
12,875
Given the following text description, write Python code to implement the functionality described below step by step Description: Workshop 4 - Performance Metrics In this workshop we study 2 performance metrics(Spread and Inter-Generational Distance) on GA optimizing the POM3 model. Step2: To compute most measures, data(i.e objectives) is normalized. Normalization is scaling the data between 0 and 1. Why do we normalize? TODO2 Step10: Data Format For our experiments we store the data in the following format. data = { "expt1" Step16: Reference Set Almost all the traditional measures you consider need a reference set for its computation. A theoritical reference set would be the ideal pareto frontier. This is fine for a) Mathematical Models Step19: IGD = inter-generational distance; i.e. how good are you compared to the best known? Find a reference set (the best possible solutions) For each optimizer For each item in its final Pareto frontier Find the nearest item in the reference set and compute the distance to it. Take the mean of all the distances. This is IGD for the optimizer Note that the less the mean IGD, the better the optimizer since this means its solutions are closest to the best of the best.
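Before the workshop's own implementation below, here is an added toy illustration (not part of the workshop code) of the IGD recipe just described, using made-up 2-D points.
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def toy_igd(obtained, reference):
    # For every reference point, take the distance to its nearest obtained point, then average.
    return sum(min(euclidean(ref, o) for o in obtained) for ref in reference) / len(reference)

reference = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
frontier_a = [(0.1, 0.9), (0.6, 0.4), (0.9, 0.1)]   # close to the reference set
frontier_b = [(0.8, 0.8), (0.9, 0.9)]               # far from it

print(toy_igd(frontier_a, reference))   # small mean distance -> better
print(toy_igd(frontier_b, reference))   # larger mean distance -> worse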
Python Code: %matplotlib inline # All the imports from __future__ import print_function, division import pom3_ga, sys import pickle # TODO 1: Enter your unity ID here __author__ = "dndesai" Explanation: Workshop 4 - Performance Metrics In this workshop we study 2 performance metrics(Spread and Inter-Generational Distance) on GA optimizing the POM3 model. End of explanation def normalize(problem, points): Normalize all the objectives in each point and return them meta = problem.objectives all_objs = [] for point in points: objs = [] for i, o in enumerate(problem.evaluate(point)): low, high = meta[i].low, meta[i].high # TODO 3: Normalize 'o' between 'low' and 'high'; Then add the normalized value to 'objs' if high == low: objs.append(0); continue; objs.append((o-low)/(high-low)) all_objs.append(objs) return all_objs Explanation: To compute most measures, data(i.e objectives) is normalized. Normalization is scaling the data between 0 and 1. Why do we normalize? TODO2 : To put on the same level playing field. That makes it easy to compare two data points even from different sets. End of explanation Performing experiments for [5, 10, 50] generations. problem = pom3_ga.POM3() pop_size = 10 repeats = 10 test_gens = [5, 10, 50] def save_data(file_name, data): Save 'data' to 'file_name.pkl' with open(file_name + ".pkl", 'wb') as f: pickle.dump(data, f, pickle.HIGHEST_PROTOCOL) def load_data(file_name): Retrieve data from 'file_name.pkl' with open(file_name + ".pkl", 'rb') as f: return pickle.load(f) def build(problem, pop_size, repeats, test_gens): Repeat the experiment for 'repeats' number of repeats for each value in 'test_gens' tests = {t: [] for t in test_gens} tests[0] = [] # For Initial Population for _ in range(repeats): init_population = pom3_ga.populate(problem, pop_size) pom3_ga.say(".") for gens in test_gens: tests[gens].append(normalize(problem, pom3_ga.ga(problem, init_population, retain_size=pop_size, gens=gens)[1])) tests[0].append(normalize(problem, init_population)) print("\nCompleted") return tests Repeat Experiments # tests = build(problem, pop_size, repeats, test_gens) Save Experiment Data into a file # save_data("dump", tests) Load the experimented data from dump. tests = load_data("dump") print (tests.keys()) Explanation: Data Format For our experiments we store the data in the following format. data = { "expt1":[repeat1, repeat2, ...], "expt2":[repeat1, repeat2, ...], . . . } repeatx = [objs1, objs2, ....] // All of the final population objs1 = [norm_obj1, norm_obj2, ...] // Normalized objectives of each member of the final population. End of explanation def make_reference(problem, *fronts): Make a reference set comparing all the fronts. Here the comparison we use is bdom. 
It can be altered to use cdom as well retain_size = len(fronts[0]) reference = [] for front in fronts: reference+=front def bdom(one, two): Return True if 'one' dominates 'two' else return False :param one - [pt1_obj1, pt1_obj2, pt1_obj3, pt1_obj4] :param two - [pt2_obj1, pt2_obj2, pt2_obj3, pt2_obj4] dominates = False for i, obj in enumerate(problem.objectives): gt, lt = pom3_ga.gt, pom3_ga.lt better = lt if obj.do_minimize else gt # TODO 3: Use the varaibles declared above to check if one dominates two if better(one[i], two[i]): dominates = True elif one[i] != two[i]: return False return dominates def fitness(one, dom): return len([1 for another in reference if dom(one, another)]) fitnesses = [] for point in reference: fitnesses.append((fitness(point, bdom), point)) reference = [tup[1] for tup in sorted(fitnesses, reverse=True)] return reference[:retain_size] assert len(make_reference(problem, tests[5][0], tests[10][0], tests[50][0])) == len(tests[5][0]) ''' ### Spread Calculating spread: <img width=300 src="http://mechanicaldesign.asmedigitalcollection.asme.org/data/Journals/JMDEDB/27927/022006jmd3.jpeg"> - Consider the population of final gen(P) and the Pareto Frontier(R). - Find the distances between the first point of P and first point of R(_d<sub>f</sub>_) and last point of P and last point of R(_d<sub>l</sub>_) - Find the distance between all points and their nearest neighbor _d<sub>i</sub>_ and their nearest neighbor - Then: <img width=300 src="https://raw.githubusercontent.com/txt/ase16/master/img/spreadcalc.png"> - If all data is maximally spread, then all distances _d<sub>i</sub>_ are near mean d which would make _&Delta;=0_ ish. Note that _less_ the spread of each point to its neighbor, the _better_ since this means the optimiser is offering options across more of the frontier. ''' def eucledian(one, two): Compute Eucledian Distance between 2 vectors. We assume the input vectors are normalized. :param one: Vector 1 :param two: Vector 2 :return: # TODO 4: Code up the eucledian distance. https://en.wikipedia.org/wiki/Euclidean_distance return sum([(o-t)**2 for o,t in zip(one, two)])**0.5 def sort_solutions(solutions): Sort a list of list before computing spread def sorter(lst): m = len(lst) weights = reversed([10 ** i for i in xrange(m)]) return sum([element * weight for element, weight in zip(lst, weights)]) return sorted(solutions, key=sorter) def closest(one, many): min_dist = sys.maxint closest_point = None for this in many: dist = eucledian(this, one) if dist < min_dist: min_dist = dist closest_point = this return min_dist, closest_point def spread(obtained, ideals): Calculate the spread (a.k.a diversity) for a set of solutions s_obtained = sort_solutions(obtained) s_ideals = sort_solutions(ideals) d_f = closest(s_ideals[0], s_obtained)[0] d_l = closest(s_ideals[-1], s_obtained)[0] n = len(s_ideals) distances = [] for i in range(len(s_obtained)-1): distances.append(eucledian(s_obtained[i], s_obtained[i+1])) d_bar = sum(distances)/len(distances) # TODO 5: Compute the value of spread using the definition defined in the previous cell. d_sum = sum([abs(d_i - d_bar) for d_i in distances]) delta = (d_f + d_l + d_sum)/ (d_f + d_l + (n-1)*d_bar) return delta ref = make_reference(problem, tests[5][0], tests[10][0], tests[50][0]) print(spread(tests[5][0], ref)) print(spread(tests[10][0], ref)) print(spread(tests[50][0], ref)) Explanation: Reference Set Almost all the traditional measures you consider need a reference set for its computation. 
A theoritical reference set would be the ideal pareto frontier. This is fine for a) Mathematical Models: Where we can solve the problem to obtain the set. b) Low Runtime Models: Where we can do a one time exaustive run to obtain the model. But most real world problems are neither mathematical nor have a low runtime. So what do we do?. Compute an approximate reference set One possible way of constructing it is: 1. Take the final generation of all the treatments. 2. Select the best set of solutions from all the final generations End of explanation def igd(obtained, ideals): Compute the IGD for a set of solutions :param obtained: Obtained pareto front :param ideals: Ideal pareto front :return: # TODO 6: Compute the value of IGD using the definition defined in the previous cell. igd_val = sum([closest (ideal,obtained)[0] for ideal in ideals])/ len(ideals) return igd_val ref = make_reference(problem, tests[5][0], tests[10][0], tests[50][0]) print(igd(tests[5][0], ref)) print(igd(tests[10][0], ref)) print(igd(tests[50][0], ref)) import sk sk = reload(sk) def format_for_sk(problem, data, measure): Convert the experiment data into the format required for sk.py and computet the desired 'measure' for all the data. gens = data.keys() reps = len(data[gens[0]]) measured = {gen:["gens_%d"%gen] for gen in gens} for i in range(reps): ref_args = [data[gen][i] for gen in gens] ref = make_reference(problem, *ref_args) for gen in gens: measured[gen].append(measure(data[gen][i], ref)) return measured def report(problem, tests, measure): measured = format_for_sk(problem, tests, measure).values() sk.rdivDemo(measured) print("*** IGD ***") report(problem, tests, igd) print("\n*** Spread ***") report(problem, tests, spread) Explanation: IGD = inter-generational distance; i.e. how good are you compared to the best known? Find a reference set (the best possible solutions) For each optimizer For each item in its final Pareto frontier Find the nearest item in the reference set and compute the distance to it. Take the mean of all the distances. This is IGD for the optimizer Note that the less the mean IGD, the better the optimizer since this means its solutions are closest to the best of the best. End of explanation
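To make the IGD recipe above concrete, here is a minimal self-contained sketch on made-up, already-normalized 2-D objective vectors. The toy fronts are assumptions for illustration only, not POM3 output, and the helpers re-implement the same nearest-point idea rather than reusing the notebook's functions.
def euclidean_toy(one, two):
    # straight-line distance between two normalized objective vectors
    return sum((a - b) ** 2 for a, b in zip(one, two)) ** 0.5
def igd_toy(obtained, ideals):
    # mean distance from each reference point to its closest obtained point
    return sum(min(euclidean_toy(ideal, got) for got in obtained) for ideal in ideals) / len(ideals)
reference_front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]   # assumed reference set
obtained_front = [[0.1, 0.9], [0.6, 0.6], [0.9, 0.2]]    # assumed optimizer output
print(igd_toy(obtained_front, reference_front))          # smaller is better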
12,876
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Module 1 Step2: Given the dataset, let's test our dataset by seeing some of the images and their corresponding labels. PyTorch provides us with a neat little function called make_grid which plots "x" number of images together in a grid. Step3: Checking for GPU Step5: Creating a Neural Network Now that we're through with the boring part, let's move on to the fun stuff! In the code stub being provided you can write your own network definition and then print it. We've not covered Convolutional Layers yet, so the fun will be limited to just using Linear Layers. When using linear layers keep in mind that the input features are 3*32*32. When writing out the layers it is important to think in terms of matrix multiplication. So if your input features are of dimension 4x3x32x32 then your input features must be the same dimensions. I'll define some terms so that you can use them while designing the net Step6: If we have a gpu then we need to "move" the network to the GPU. Under the hood, it copies the weights and the biases that are in CPU memory over to the GPU memory. In PyTorch, all of this can be accomplished by checking for GPU and then appending cuda() after the net. Training the Network Having defined our network and tested that our dataloader works to our satisfaction, we're going to train the network. For your convenience, the training script is included and it is highly recommended that you try to gain a sense of what's happening. We'll talk more about training in the coming meetings. Using GPUs for training In order to use GPUs for training, we also need to move our data on the GPU. Since data is a tensor, we can move them in the similar way we moved our MyNet i.e. appending cuda() after checking for a GPU. Step7: So far we've trained the network and we're seeing some output loss. However, that's only one part of the story, since we need the model to perform well on unseen inputs. In order to do that we'll evaluate the dataset on the test_batch. Step8: So we've got these images along with their labels as "ground truth". Now let's ask the neural network we just trained as to what it thinks the images are Step9: Pretty sweet! our neural network seems to have learnt something. Let's see how it does on the overall dataset
Python Code: import sys, os import pickle import torch import torch.utils.data as data import glob from PIL import Image import numpy as np def unpickle(fname): with open(fname, 'rb') as f: Dict = pickle.load(f, encoding='bytes') return Dict def load_data(batch): print ("Loading batch:{}".format(batch)) return unpickle(batch) class CIFARLoader(data.Dataset): CIFAR-10 Loader: Loads the CIFAR-10 data according to an index value and returns the data and the labels. args: root: Root of the data directory. Optional args: transforms: The transforms you wish to apply to the data. target_transforms: The transforms you wish to apply to the labels. def __init__(self, root, train=True, transform=None, target_transform=None): self.root = root self.transform = transform self.target_transform = target_transform self.train = train patt = os.path.join(self.root, 'data_batch_*') # create the pattern we want to search for. self.batches = sorted(glob.glob(patt)) self.train_data = [] self.train_labels = [] self.test_data = [] self.test_labels = [] if self.train: for batch in self.batches: entry = {} entry = load_data(batch) self.train_data.append(entry[b'data']) self.train_labels += entry[b'labels'] else: entry = load_data(os.path.join(self.root, 'test_batch')) self.test_data.append(entry[b'data']) self.test_labels += entry[b'labels'] ############################################# # We need to "concatenate" all the different # # training samples into one big array. For # # doing that we're going to use a numpy # # function called "concatenate". # ############################################## if self.train: self.train_data = np.concatenate(self.train_data) self.train_data = self.train_data.reshape((50000, 3, 32,32)) self.train_data = self.train_data.transpose((0,2,3,1)) # pay attention to this step! else: self.test_data = np.concatenate(self.test_data) self.test_data = self.test_data.reshape((10000, 3,32,32)) self.test_data = self.test_data.transpose((0,2,3,1)) def __getitem__(self, index): if self.train: image = self.train_data[index] label = self.train_labels[index] else: image = self.test_data[index] label = self.test_labels[index] if self.transform is not None: image = self.transform(image) if self.target_transform is not None: label = self.target_transform(label) # print(image.size()) return image, label def __len__(self): if self.train: return len(self.train_data) else: return len(self.test_data) Explanation: Module 1: Introduction to Neural Nets The aim of this module is to introduce you to designing simple neural networks. You've already seen how to load data in PyTorch and a sample script of the overall workflow. In this notebook, you'll implement your own neural network and report on it's performance. Gathering data We'll use the dataloading module we developed earlier. End of explanation import torchvision.transforms as transforms import torch.utils.data as data import numpy as np import matplotlib.pyplot as plt import torchvision def imshow(torch_tensor): torch_tensor = torch_tensor/2 + 0.5 npimg = torch_tensor.numpy() plt.imshow(npimg.transpose(1,2,0)) plt.show() tfs = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))]) root='/home/akulshr/cifar-10-batches-py/' cifar_train = CIFARLoader(root, train=True, transform=tfs) # create a "CIFARLoader instance". 
cifar_loader = data.DataLoader(cifar_train, batch_size=4, shuffle=True, num_workers=2) # all possible classes in the CIFAR-10 dataset classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') data_iter = iter(cifar_loader) data,label = data_iter.next() #visualize data. imshow(torchvision.utils.make_grid(data)) # print the labels ' '.join(classes[label[j]] for j in range(4)) Explanation: Given the dataset, let's test our dataset by seeing some of the images and their corresponding labels. PyTorch provides us with a neat little function called make_grid which plots "x" number of images together in a grid. End of explanation # checking for GPU can simply be done by adding the following code gpu_avail = torch.cuda.is_available() print(gpu_avail) Explanation: Checking for GPU End of explanation import torch.nn as nn class MyNet(nn.Module): Your neural network here. bs: Batch size, you can include or leave it out. def __init__(self, bs): super(MyNet, self).__init__() pass def forward(self, x): pass net = MyNet(4) # be sure to put any additional parameters you pass to __init__ here if gpu_avail: net = net.cuda() # Move the net to the GPU. print(net) Explanation: Creating a Neural Network Now that we're through with the boring part, let's move on to the fun stuff! In the code stub being provided you can write your own network definition and then print it. We've not covered Convolutional Layers yet, so the fun will be limited to just using Linear Layers. When using linear layers keep in mind that the input features are 3*32*32. When writing out the layers it is important to think in terms of matrix multiplication. So if your input features are of dimension 4x3x32x32 then your input features must be the same dimensions. I'll define some terms so that you can use them while designing the net: N: The batch size --> This determines how many images are pushed through the network during an iteration. C: The number of channels --> It's an RGB image hence we set this to 3. H,W: The height and width of the image. Your input to a network is usually NxCxHxW. Now a linear layer expects a single number as an input feature, so for a batch size of 1 your input features will be 3072(3*32*32). End of explanation import torch.optim as optim import torch.utils.data as data from torch.autograd import Variable tfs = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))]) root='/home/akulshr/cifar-10-batches-py/' cifar_train = CIFARLoader(root, transform=tfs) # create a "CIFARLoader instance". cifar_train_loader = data.DataLoader(cifar_train, batch_size=4, shuffle=True, num_workers=2) lossfn = nn.NLLLoss() optimz = optim.SGD(net.parameters(), lr=1e-3, momentum=0.9) def train(net): net.train() for ep in range(2): running_loss = 0.0 for ix, (img,label) in enumerate(cifar_train_loader, 0): if gpu_avail: img = img.cuda() label = label.cuda() img_var = Variable(img) label_var = Variable(label) optimz.zero_grad() # print(img_var.size()) op = net(img_var) loss = lossfn(op, label_var) loss.backward() optimz.step() running_loss += loss.data[0] if ix%2000 == 1999: print("[%d/%5d] Loss: %f"%(ep+1, ix+1, running_loss/2000)) running_loss = 0.0 print("Finished Training\n") train(net) Explanation: If we have a gpu then we need to "move" the network to the GPU. Under the hood, it copies the weights and the biases that are in CPU memory over to the GPU memory. In PyTorch, all of this can be accomplished by checking for GPU and then appending cuda() after the net. 
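If you want a concrete starting point, one possible fully-connected version of MyNet is sketched below. This is purely illustrative and not the intended solution: the 512-unit hidden layer is an arbitrary assumption, the input is flattened to the 3*32*32 = 3072 features discussed above, and the forward pass ends in log-probabilities because the training cell below uses nn.NLLLoss.
import torch
import torch.nn as nn
import torch.nn.functional as F
class ExampleNet(nn.Module):
    # A minimal linear-layer-only network for 10-class CIFAR-10 classification.
    def __init__(self):
        super(ExampleNet, self).__init__()
        self.fc1 = nn.Linear(3 * 32 * 32, 512)  # 3072 input features; 512 hidden units is an assumption
        self.fc2 = nn.Linear(512, 10)           # one output per CIFAR-10 class
    def forward(self, x):
        x = x.view(x.size(0), -1)               # flatten N x C x H x W into N x 3072
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)  # log-probabilities for NLLLoss
example_net = ExampleNet()
if torch.cuda.is_available():
    example_net = example_net.cuda()            # copy weights and biases to GPU memory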
Training the Network Having defined our network and tested that our dataloader works to our satisfaction, we're going to train the network. For your convenience, the training script is included and it is highly recommended that you try to gain a sense of what's happening. We'll talk more about training in the coming meetings. Using GPUs for training In order to use GPUs for training, we also need to move our data on the GPU. Since data is a tensor, we can move them in the similar way we moved our MyNet i.e. appending cuda() after checking for a GPU. End of explanation def imshow(torch_tensor): torch_tensor = torch_tensor/2 + 0.5 npimg = torch_tensor.numpy() plt.imshow(npimg.transpose(1,2,0)) plt.show() tfs = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))]) root='/home/akulshr/cifar-10-batches-py/' cifar_test = CIFARLoader(root, train=False, transform=tfs) cifar_test_loader = data.DataLoader(cifar_test,batch_size=4, shuffle=False, num_workers=2) # all possible classes in the CIFAR-10 dataset classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') data_iter = iter(cifar_test_loader) imgs,label = data_iter.next() # Show the test images. imshow(torchvision.utils.make_grid(imgs)) # Print the "Ground Truth labels" print("Ground Truth: ") print(' '.join(classes[label[j]] for j in range(4))) Explanation: So far we've trained the network and we're seeing some output loss. However, that's only one part of the story, since we need the model to perform well on unseen inputs. In order to do that we'll evaluate the dataset on the test_batch. End of explanation ################################################## # If you're using GPUs for training, then care # # must be taken to use "cpu()" before getting any # # data for printing, displaying etc. # # e.g # _, pred = torch.max(op.cpu().data) # # # ################################################# data_iter = iter(cifar_test_loader) imgs,label = data_iter.next() op = net(Variable(imgs)) _, pred = torch.max(op.data, 1) print("Guessed class: ") print(' '.join(classes[pred[j]] for j in range(4))) Explanation: So we've got these images along with their labels as "ground truth". Now let's ask the neural network we just trained as to what it thinks the images are End of explanation correct = 0.0 total = 0.0 for cache in cifar_test_loader: img, label = cache op = net(Variable(img)) _, pred = torch.max(op.data, 1) total += label.size(0) correct += (pred==label).sum() print("accuracy: %f"%(100*(correct/total))) Explanation: Pretty sweet! our neural network seems to have learnt something. Let's see how it does on the overall dataset: End of explanation
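One small note on the evaluation cells above: torch.max(tensor, 1) returns both the maximum values and their indices along dimension 1, which is why the predicted class is taken from the second returned value. A tiny sketch with made-up scores:
import torch
fake_scores = torch.Tensor([[0.1, 2.0, -1.0],
                            [3.0, 0.2, 0.5]])
values, predictions = torch.max(fake_scores, 1)
print(values)       # per-row maxima: 2.0 and 3.0
print(predictions)  # predicted class indices: 1 and 0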
12,877
Given the following text description, write Python code to implement the functionality described below step by step Description: Kalman Filter and your Matrix Class Once you have a working matrix class, you can use the class to run a Kalman filter! You will need to put your matrix class into the workspace Step1: Visualizing the Tracked Object Distance The next cell visualizes the simulating data. The first visualization shows the object distance over time. You can see that the car is moving forward although decelerating. Then the car stops for 5 seconds and then drives backwards for 5 seconds. Step2: Visualizing Velocity Over Time The next cell outputs a visualization of the velocity over time. The tracked car starts at 100 km/h and decelerates to 0 km/h. Then the car idles and eventually starts to decelerate again until reaching -10 km/h. Step3: Visualizing Acceleration Over Time This cell visualizes the tracked cars acceleration. The vehicle declerates at 10 m/s^2. Then the vehicle stops for 5 seconds and briefly accelerates again. Step4: Simulate Lidar Data The following code cell creates simulated lidar data. Lidar data is noisy, so the simulator takes ground truth measurements every 0.05 seconds and then adds random noise. Step5: Visualize Lidar Meausrements Run the following cell to visualize the lidar measurements versus the ground truth. The ground truth is shown in red, and you can see that the lidar measurements are a bit noisy. Step6: Part 2 - Using a Kalman Filter The next part of the demonstration will use your matrix class to run a Kalman filter. This first cell initializes variables and defines a few functions. The following cell runs the Kalman filter using the lidar data. Step7: Run the Kalman filter The next code cell runs the Kalman filter. In this demonstration, the prediction step starts with the second lidar measurement. When the first lidar signal arrives, there is no previous lidar measurement with which to calculate velocity. In other words, the Kalman filter predicts where the vehicle is going to be, but it can't make a prediction until time has passed between the first and second lidar reading. The Kalman filter has two steps Step8: Visualize the Results The following code cell outputs a visualization of the Kalman filter. The chart contains ground turth, the lidar measurements, and the Kalman filter belief. Notice that the Kalman filter tends to smooth out the information obtained from the lidar measurement. It turns out that using multiple sensors like radar and lidar at the same time, will give even better results. Using more than one type of sensor at once is called sensor fusion, which you will learn about in the Self-Driving Car Engineer Nanodegree Step9: Visualize the Velocity One of the most interesting benefits of Kalman filters is that they can give you insights into variables that you cannot directly measured. Although lidar does not directly give velocity information, the Kalman filter can infer velocity from the lidar measurements. This visualization shows the Kalman filter velocity estimation versus the ground truth. The motion model used in this Kalman filter is relatively simple; it assumes velocity is constant and that acceleration a random noise. You can see that this motion model might be too simplistic because the Kalman filter has trouble predicting velocity as the object decelerates.
Python Code: %matplotlib inline import pandas as pd import math import matplotlib.pyplot as plt import matplotlib import datagenerator import matrix as m matplotlib.rcParams.update({'font.size': 16}) # data_groundtruth() has the following inputs: # Generates Data # Input variables are: # initial position meters # initial velocity km/h # final velocity (should be a negative number) km/h # acceleration (should be a negative number) m/s^2 # how long the vehicle should idle # how long the vehicle should drive in reverse at constant velocity # time between lidar measurements in milliseconds time_groundtruth, distance_groundtruth, velocity_groundtruth, acceleration_groundtruth = datagenerator.generate_data(5, 100, -10, -10, 5000, 5000, 50) data_groundtruth = pd.DataFrame( {'time': time_groundtruth, 'distance': distance_groundtruth, 'velocity': velocity_groundtruth, 'acceleration': acceleration_groundtruth }) Explanation: Kalman Filter and your Matrix Class Once you have a working matrix class, you can use the class to run a Kalman filter! You will need to put your matrix class into the workspace: * Click above on the "JUPYTER" logo. * Then open the matrix.py file, and copy in your code there. * Make sure to save the matrix.py file. * Then click again on the "JUPYTER" logo and open this file again. You can also download this file kalman_filter_demo.ipynb and run the demo locally on your own computer. Once you have our matrix class loaded, you are ready to go through the demo. Read through this file and run each cell one by one. You do not need to write any code in this Ipython notebook. The demonstration has two different sections. The first section creates simulated data. The second section runs a Kalman filter on the data and visualizes the results. Kalman Filters - Why are they useful? Kalman filters are really good at taking noisy sensor data and smoothing out the data to make more accurate predictions. For autonomous vehicles, Kalman filters can be used in object tracking. Kalman Filters and Sensors Object tracking is often done with radar and lidar sensors placed around the vehicle. A radar sensor can directly measure the distance and velocity of objects moving around the vehicle. A lidar sensor only measures distance. Put aside a Kalman filter for a minute and think about how you could use lidar data to track an object. Let's say there is a bicyclist riding around in front of you. You send out a lidar signal and receive the signal back. The lidar sensor tells you that the bicycle is 10 meters directly ahead of you but gives you no velocity information. By the time your lidar device sends out another signal, maybe 0.05 seconds will have passed. But during those 0.05 seconds, your vehicle still needs to keep track of the bicycle. So your vehicle will predict where it thinks the bycicle will be. But your vehicle has no bicycle velocity information. After 0.05 seconds, the lidar device sends out and receives another signal. This time, the bicycle is 9.95 meters ahead of you. Now you know that the bicycle is traveling -1 meter per second towards you. For the next -.05 seconds, your vehicle will assume the bicycle is traveling -1 m/s towards you. Then another lidar signal goes out and comes back, and you can update the position and velocity again. Sensor Noise Unfortunately, lidar and radar signals are noisy. In other words, they are somewhat inacurrate. A Kalman filter helps to smooth out the noise so that you get a better fix on the bicycle's true position and velocity. 
A Kalman filter does this by weighing the uncertainty in your belief about the location versus the uncertainty in the lidar or radar measurement. If your belief is very uncertain, the Kalman filter gives more weight to the sensor. If the sensor measurement has more uncertainty, your belief about the location gets more weight than the sensor mearuement. Part 1 - Generate Data The next few cells in the Ipython notebook generate simulation data. Imagine you are in a vehicle and tracking another car in front of you. All of the data you track will be relative to your position. In this simulation, you are on a one-dimensional road where the car you are tracking can only move forwards or backwards. For this simulated data, the tracked vehicle starts 5 meters ahead of you traveling at 100 km/h. The vehicle is accelerating at -10 m/s^2. In other words, the vehicle is slowing down. Once the vehicle stops at 0 km/h, the car stays idle for 5 seconds. Then the vehicle continues accelerating towards you until the vehicle is traveling at -10 km/h. The vehicle travels at -10 km/h for 5 seconds. Don't worry too much about the trajectory of the other vehicle; this will be displayed for you in a visualization You have a single lidar sensor on your vehicle that is tracking the other car. The lidar sensor takes a measurment once every 50 milliseconds. Run the code cell below to start the simulator and collect data about the tracked car. Noticed the line import matrix as m, which imports your matrix code from the final project. You will not see any output yet when running this cell. End of explanation ax1 = data_groundtruth.plot(kind='line', x='time', y='distance', title='Object Distance Versus Time') ax1.set(xlabel='time (milliseconds)', ylabel='distance (meters)') Explanation: Visualizing the Tracked Object Distance The next cell visualizes the simulating data. The first visualization shows the object distance over time. You can see that the car is moving forward although decelerating. Then the car stops for 5 seconds and then drives backwards for 5 seconds. End of explanation ax2 = data_groundtruth.plot(kind='line', x='time', y='velocity', title='Object Velocity Versus Time') ax2.set(xlabel='time (milliseconds)', ylabel='velocity (km/h)') Explanation: Visualizing Velocity Over Time The next cell outputs a visualization of the velocity over time. The tracked car starts at 100 km/h and decelerates to 0 km/h. Then the car idles and eventually starts to decelerate again until reaching -10 km/h. End of explanation data_groundtruth['acceleration'] = data_groundtruth['acceleration'] * 1000 / math.pow(60 * 60, 2) ax3 = data_groundtruth.plot(kind='line', x='time', y='acceleration', title='Object Acceleration Versus Time') ax3.set(xlabel='time (milliseconds)', ylabel='acceleration (m/s^2)') Explanation: Visualizing Acceleration Over Time This cell visualizes the tracked cars acceleration. The vehicle declerates at 10 m/s^2. Then the vehicle stops for 5 seconds and briefly accelerates again. End of explanation # make lidar measurements lidar_standard_deviation = 0.15 lidar_measurements = datagenerator.generate_lidar(distance_groundtruth, lidar_standard_deviation) lidar_time = time_groundtruth Explanation: Simulate Lidar Data The following code cell creates simulated lidar data. Lidar data is noisy, so the simulator takes ground truth measurements every 0.05 seconds and then adds random noise. 
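The internals of datagenerator.generate_lidar are not shown in this notebook, so the snippet below is only a sketch of the idea being described: adding zero-mean Gaussian noise, with the chosen standard deviation, to each ground-truth distance. The toy distances are assumptions for illustration.
import random
def noisy_lidar_sketch(true_distances, standard_deviation):
    # one noisy reading per ground-truth distance
    return [d + random.gauss(0, standard_deviation) for d in true_distances]
print(noisy_lidar_sketch([5.0, 4.9, 4.8], 0.15))  # 0.15 m is the lidar standard deviation used above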
End of explanation data_lidar = pd.DataFrame( {'time': time_groundtruth, 'distance': distance_groundtruth, 'lidar': lidar_measurements }) matplotlib.rcParams.update({'font.size': 22}) ax4 = data_lidar.plot(kind='line', x='time', y ='distance', label='ground truth', figsize=(20, 15), alpha=0.8, title = 'Lidar Measurements Versus Ground Truth', color='red') ax5 = data_lidar.plot(kind='scatter', x ='time', y ='lidar', label='lidar measurements', ax=ax4, alpha=0.6, color='g') ax5.set(xlabel='time (milliseconds)', ylabel='distance (meters)') plt.show() Explanation: Visualize Lidar Meausrements Run the following cell to visualize the lidar measurements versus the ground truth. The ground truth is shown in red, and you can see that the lidar measurements are a bit noisy. End of explanation # Kalman Filter Initialization initial_distance = 0 initial_velocity = 0 x_initial = m.Matrix([[initial_distance], [initial_velocity * 1e-3 / (60 * 60)]]) P_initial = m.Matrix([[5, 0],[0, 5]]) acceleration_variance = 50 lidar_variance = math.pow(lidar_standard_deviation, 2) H = m.Matrix([[1, 0]]) R = m.Matrix([[lidar_variance]]) I = m.identity(2) def F_matrix(delta_t): return m.Matrix([[1, delta_t], [0, 1]]) def Q_matrix(delta_t, variance): t4 = math.pow(delta_t, 4) t3 = math.pow(delta_t, 3) t2 = math.pow(delta_t, 2) return variance * m.Matrix([[(1/4)*t4, (1/2)*t3], [(1/2)*t3, t2]]) Explanation: Part 2 - Using a Kalman Filter The next part of the demonstration will use your matrix class to run a Kalman filter. This first cell initializes variables and defines a few functions. The following cell runs the Kalman filter using the lidar data. End of explanation # Kalman Filter Implementation x = x_initial P = P_initial x_result = [] time_result = [] v_result = [] for i in range(len(lidar_measurements) - 1): # calculate time that has passed between lidar measurements delta_t = (lidar_time[i + 1] - lidar_time[i]) / 1000.0 # Prediction Step - estimates how far the object traveled during the time interval F = F_matrix(delta_t) Q = Q_matrix(delta_t, acceleration_variance) x_prime = F * x P_prime = F * P * F.T() + Q # Measurement Update Step - updates belief based on lidar measurement y = m.Matrix([[lidar_measurements[i + 1]]]) - H * x_prime S = H * P_prime * H.T() + R K = P_prime * H.T() * S.inverse() x = x_prime + K * y P = (I - K * H) * P_prime # Store distance and velocity belief and current time x_result.append(x[0][0]) v_result.append(3600.0/1000 * x[1][0]) time_result.append(lidar_time[i+1]) result = pd.DataFrame( {'time': time_result, 'distance': x_result, 'velocity': v_result }) Explanation: Run the Kalman filter The next code cell runs the Kalman filter. In this demonstration, the prediction step starts with the second lidar measurement. When the first lidar signal arrives, there is no previous lidar measurement with which to calculate velocity. In other words, the Kalman filter predicts where the vehicle is going to be, but it can't make a prediction until time has passed between the first and second lidar reading. The Kalman filter has two steps: a prediction step and an update step. In the prediction step, the filter uses a motion model to figure out where the object has traveled in between sensor measurements. The update step uses the sensor measurement to adjust the belief about where the object is. 
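To make those two steps concrete, here is a small self-contained numpy sketch of a single predict/update cycle. It mirrors the equations used in the implementation above, but numpy stands in for the matrix class, and the starting belief, process noise and measurement values are made-up numbers for illustration.
import numpy as np
dt = 0.05                                    # seconds between lidar readings
x = np.array([[2.0], [1.0]])                 # belief: 2 m away, moving at 1 m/s (assumed)
P = np.array([[5.0, 0.0], [0.0, 5.0]])       # uncertainty in that belief
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity motion model
Q = np.array([[0.001, 0.0], [0.0, 0.001]])   # small process noise (assumed values)
H = np.array([[1.0, 0.0]])                   # lidar measures distance only
R = np.array([[0.15 ** 2]])                  # lidar variance from the 0.15 m standard deviation
# Prediction step: push the belief forward through the motion model
x = F.dot(x)
P = F.dot(P).dot(F.T) + Q
# Measurement update step: blend the prediction with a new lidar reading z
z = np.array([[2.04]])                       # made-up measurement
y = z - H.dot(x)                             # residual between measurement and prediction
S = H.dot(P).dot(H.T) + R
K = P.dot(H.T).dot(np.linalg.inv(S))         # Kalman gain
x = x + K.dot(y)
P = (np.eye(2) - K.dot(H)).dot(P)
print(x)                                     # updated distance and velocity belief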
End of explanation ax6 = data_lidar.plot(kind='line', x='time', y ='distance', label='ground truth', figsize=(22, 18), alpha=.3, title='Lidar versus Kalman Filter versus Ground Truth') ax7 = data_lidar.plot(kind='scatter', x ='time', y ='lidar', label='lidar sensor', ax=ax6) ax8 = result.plot(kind='scatter', x = 'time', y = 'distance', label='kalman', ax=ax7, color='r') ax8.set(xlabel='time (milliseconds)', ylabel='distance (meters)') plt.show() Explanation: Visualize the Results The following code cell outputs a visualization of the Kalman filter. The chart contains ground turth, the lidar measurements, and the Kalman filter belief. Notice that the Kalman filter tends to smooth out the information obtained from the lidar measurement. It turns out that using multiple sensors like radar and lidar at the same time, will give even better results. Using more than one type of sensor at once is called sensor fusion, which you will learn about in the Self-Driving Car Engineer Nanodegree End of explanation ax1 = data_groundtruth.plot(kind='line', x='time', y ='velocity', label='ground truth', figsize=(22, 18), alpha=.8, title='Kalman Filter versus Ground Truth Velocity') ax2 = result.plot(kind='scatter', x = 'time', y = 'velocity', label='kalman', ax=ax1, color='r') ax2.set(xlabel='time (milliseconds)', ylabel='velocity (km/h)') plt.show() Explanation: Visualize the Velocity One of the most interesting benefits of Kalman filters is that they can give you insights into variables that you cannot directly measured. Although lidar does not directly give velocity information, the Kalman filter can infer velocity from the lidar measurements. This visualization shows the Kalman filter velocity estimation versus the ground truth. The motion model used in this Kalman filter is relatively simple; it assumes velocity is constant and that acceleration a random noise. You can see that this motion model might be too simplistic because the Kalman filter has trouble predicting velocity as the object decelerates. End of explanation
12,878
Given the following text description, write Python code to implement the functionality described below step by step Description: VirtualEATING Andrew Lane, University of California, Berkeley Overview CRISPR-EATING is a molecular biology protocol to generate libraries of CRISPR guide RNAs. The use of this this approach to generate a library suitable for chromosomal locus imaging requires ways to avoid regions that will be processed into non-specific guides, which (in part) is what these scripts are designed to achieve. These scripts contain a set of functions that are assembled into a workflow to Step1: Set up input files 1. The FASTA file to be scored Step2: 2. The genome against which generated guides are scored See Prerequisites above for explanation. Step3: Begin custom processing The FASTA file we've loaded (fasta_file) contains the entire X. laevis genome. The X. laevis genome hasn't yet been definitively assembled into physical chromosomes - instead, it's a large number of contigs or "scaffolds". For the purposes of making a library that labels a single region, we want to work with a big piece that we know is contiguous. So, we find the longest "scaffold". Step4: Next, we want to digest this scaffold into guides. This uses the al_diggesttarget function contained in eating.py to produce a generator of scores. Step5: In this version of the script, the output from al_digesttarget isn't especially readable, but for reference Step6: Next, we'd like to take the 20mers extracted and score them against the entire Xenopus laevis genome. These lines score each guide variable region for specificity using the xl71 BLAST database and the xl71genomedict dict . This takes a couple of days of processing time on a four-core Intel Core i5 with 16 GB RAM, so instead we'll load a pickle of the resulting data object. (Later versions implement a MySQL database to store scores). Uncomment the below cells if you'd like to rebuild the BLAST score DB. The decompressed pickle is too big for GitHub, so unzip finalpicklescores_done.pkl.zip and place it outside of your sync'd directory. Step7: The format of the resulting data is (score, guide). Step8: The scores in this object are an ordered list, with all HpaII scores first, then all BfaI scores and finally all ScrFI scores. We are interested in the distributioœn of scores along the DNA fragment, irrespective of the enzyme used to generate them. Thus, we want to rearrange the list with all scores from 5' to 3'. Step9: Let's extract the scores and plot their distribution on a histogram. Step10: So, there are ~5000 guides that are quite non-specific (score <= 4) and >14,000 guides that have a score of 100 and a further 4000 that score between 95 and 99. Finding clusters of high-scoring guides To make a library of useful guides, we'd like to PCR through continuous clusters of the highest-scoring ones. Our oligonucleotide vendor (IDT) has a minimum order of 288 oligos (3x 96-well plates) on a small and relatively inexpensive scale (5 pmol). To work within this limitation, we'd like to pick out 144 possible regions to PCR-amplify. If we are only willing to accept guides with a score of 100, we'd predict that our 144 PCR products will be short (there are probably few long spans of perfect-scoring guides). However, if we relax our requirement to >=99, we may get longer PCR products and thus more guides in our library. How does this scale at different cutoffs/thresholds? 
Step11: Our "yield" of guides descends steadily from a cutoff of >=5 to a cutoff of >=95, then drops from 2894 guides produced at a cutoff of 95 to 1719 at 100. So, a cutoff of >=95 might be a good balance between specificity and yield.
Step12: We next asked what happens if we concentrate the guides into a smaller region. Does the guide yield scale with the region selected? To test this, we cut the input DNA into sections of 1/7 of the ~21MB starting length.
Step13: Looks like the final 1/7 of the scaffold has the densest guide yield.
Step14: OK, the list is now set up to contain the positions of the 5' and 3' adjacent guides, the number of guides in each amplicon, and the actual guide details.
Step15: First, go through the left edge of each amplicon to figure out priming rules depending on the orientation of the guides flanking it.
Step16: Next, go through various scenarios for the right edge: the constraints on priming regions given different configurations of the last good and first bad guides.
Step17: Summarize priming rules
Step18: Next up
Python Code: import Bio from Bio.Blast.Applications import NcbiblastnCommandline from Bio import SeqIO from Bio.Blast import NCBIXML from Bio import Restriction from Bio.Restriction import * from Bio.Alphabet.IUPAC import IUPACAmbiguousDNA from Bio.Seq import Seq from Bio.SeqRecord import SeqRecord import cPickle as pickle import subprocess import matplotlib from eating import * import multiprocessing as mp from operator import itemgetter, attrgetter, methodcaller import numpy %pylab inline Explanation: VirtualEATING Andrew Lane, University of California, Berkeley Overview CRISPR-EATING is a molecular biology protocol to generate libraries of CRISPR guide RNAs. The use of this approach to generate a library suitable for chromosomal locus imaging requires ways to avoid regions that will be processed into non-specific guides, which (in part) is what these scripts are designed to achieve. These scripts contain a set of functions that are assembled into a workflow to: - Predict the sgRNA spacers produced when a particular substrate DNA is subjected to the EATING protocol described in Lane et al., Dev. Cell (2015). - Score those predicted guides for specificity against a genome, using a BLAST database built from that genome and an implementation of the CRISPR guide scoring algorithm described in Hsu et al (2013). - Using the score information, pick out sub-regions within the substrate DNA that will produce clusters of high-specificity guides and design PCR primers to amplify only those regions. Following the generation of suitable PCR primers from this tool, the "wet" portion of the protocol is as follows: 1. The output PCR primers (144 pairs in 144 separate reactions in the case of the labeled 3MB region) are used to amplify from the substrate DNA. 2. The resulting products are pooled and subjected to the EATING molecular biology protocol. 3. When complexed to dCas9-mNeonGreen (or other fluorescent protein), the resulting library can be used to image your desired locus. Prerequisites - Some experience with Python and the very basics of Biopython and BLAST. - A Python installation with biopython and pickle. - A BLAST database generated from the genome against which you would like to score your guides, and a working BLAST installation. To generate the BLAST database, use a FASTA file containing your genome of interest. For example, LAEVIS_7.1.repeatmasked.fa. Use the following syntax to generate the BLAST DB. (The -parse_seqids flag is critical; the guide scoring algorithm expects a database generated using this flag). makeblastdb -in LAEVIS_7.1.repeatmasked.fa -dbtype nucl -parse_seqids -out xl71 -title ‘xl71’ This was tested using makeblastdb version 2.2.29+. Perform a test BLAST query on your database to check that your installation can find it. - The original FASTA file used to make the BLAST database must also be available; this is necessary so that it can be determined whether a guide BLAST database hit is adjacent to a PAM and therefore relevant for score determination. The entire genome is loaded into memory in the current implementation, and thus you need a computer with enough RAM (8-16GB) for this. (Future updates may remove this requirement.) References: Hsu PD, Scott DA, Weinstein JA, Ran FA, Konermann S, Agarwala V, et al. DNA targeting specificity of RNA-guided Cas9 nucleases. Nat Biotechnol. Nature Publishing Group; 2013;31: 827–832. doi:10.1038/nbt.2647 Using this notebook and adapting it for a particular purpose The basic EATING-related logic is in the eating module (eating.py). 
This module contains functions (prefixed with "al_") to predict the guides that will be generated from an input DNA sequence and score the guides. Import modules End of explanation path_to_genomic_data = "../../../MBP 750GB/andypy/Genomic Data/" file_name = "LAEVIS_7.1.repeatMasked.fa" genome = "xl71" fasta_file = SeqIO.parse(str(path_to_genomic_data + file_name), "fasta") Explanation: Set up input files 1. The FASTA file to be scored End of explanation xl71genome = SeqIO.parse(open(str(path_to_genomic_data + file_name), 'rb'), "fasta", alphabet=IUPACAmbiguousDNA()) xl71genomedict = {} for item in xl71genome: xl71genomedict[item.id] = item genomedict = xl71genomedict len(xl71genomedict) Explanation: 2. The genome against which generated guides are scored See Prerequisites above for explanation. End of explanation longest = 0 for item in fasta_file: if len(item) > longest: longest = len(item) longscaffold = [item] print(longscaffold[0].name + " is the longest scaffold at " "{:,}".format(len(longscaffold[0])) + " bp in length.") Explanation: Begin custom processing The FASTA file we've loaded (fasta_file) contains the entire X. laevis genome. The X. laevis genome hasn't yet been definitively assembled into physical chromosomes - instead, it's a large number of contigs or "scaffolds". For the purposes of making a library that labels a single region, we want to work with a big piece that we know is contiguous. So, we find the longest "scaffold". End of explanation cutslist = al_digesttarget(longscaffold) Explanation: Next, we want to digest this scaffold into guides. This uses the al_diggesttarget function contained in eating.py to produce a generator of scores. End of explanation [item for item in al_digesttarget([longscaffold[0][0:1500]])] Explanation: In this version of the script, the output from al_digesttarget isn't especially readable, but for reference: Each item is a SeqRecord (see BioPython docs) The Sequence is the guide 20mer, written from 5' to 3' The ID is the cut-fragment of DNA of that an enzyme produces, counting from the left (i.e. the most 5' guide has an id of 1) and the strand that the guide is found on (F or R, where F is forward with respect to the input DNA), starting with all the HpaII cuts, then all the BfaI cuts, then all the ScrFI cuts. Note that the script predicts the results when each digestion is done in a separate tube, rather than when all enzymes are used as a mixture (which would kill some guides where cut sites of two different enzymes are <20 bp apart). The name is the sequence position of the left edge of the guide along the input DNA. For forward-direction guides, this is position of the 5' end of the guide. For reverse, it's position of the 3' end of the guide. The description is the enzyme that generates the guide's cut site. In this example (the most 5' 1500 bp of the chosen Scaffold), HpaII does not cut. Note that enzyme recognition sites are palindromic and thus recognizes a palindromic sequence containing a PAM on both strands. This results in a guide being generated on both sides of the cut site. 
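A quick way to spot-check those fields on a real digest is a short loop like the one below. It is only a sketch: it assumes the eating module and the longscaffold record from the cells above are already loaded, and it calls al_digesttarget again so the cutslist generator itself is left untouched.
for n, guide in enumerate(al_digesttarget(longscaffold)):
    # seq = 20-mer spacer, id = cut-fragment number plus strand, name = position in bp,
    # description = the enzyme responsible for the cut site
    print("%s\t%s\t%s\t%s" % (guide.id, guide.name, guide.description, str(guide.seq)))
    if n == 4:  # just peek at the first five guides
        break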
End of explanation #def multiscore_pool(x): # score = al_scoreguide(x, "xl71", xl71genomedict) # return score #http://sebastianraschka.com/Articles/2014_multiprocessing_intro.html#An-introduction-to-parallel-programming-using-Python's-multiprocessing-module #pool = mp.Pool(processes=2) #results = [pool.apply(multiscore_pool, args=(x,)) for x in cutslist] #pickle.dump(results, open( "finalpicklescores.pkl", "wb" )) #pool.close() results = pickle.load(open("../../finalpicklescores_done.pkl", "rb")) Explanation: Next, we'd like to take the 20mers extracted and score them against the entire Xenopus laevis genome. These lines score each guide variable region for specificity using the xl71 BLAST database and the xl71genomedict dict . This takes a couple of days of processing time on a four-core Intel Core i5 with 16 GB RAM, so instead we'll load a pickle of the resulting data object. (Later versions implement a MySQL database to store scores). Uncomment the below cells if you'd like to rebuild the BLAST score DB. The decompressed pickle is too big for GitHub, so unzip finalpicklescores_done.pkl.zip and place it outside of your sync'd directory. End of explanation reload(eating) from eating import * import sqlite3 genome = "xl71" dbname = "xl71" cur, con = db_connect("xl71") results_as_dict = [] for item in results: item[1].annotations["score"] = item[0] results_as_dict.append(item[1]) guide = results_as_dict[0] data = [] cur.execute("SELECT * FROM scores WHERE sequence = '{}' AND genome = '{}' AND version = '{}'".format(guide.seq, genome, version)) data = cur.fetchall() data for index, item in enumerate(results_as_dict[0:11000]): eating.db_add_guide(cur, con, item, "xl71", 0) if str(index)[-3:] == "000": print index len(results_as_dict) item Explanation: The format of the resulting data is (score, guide). End of explanation import copy a = [] for (score, details) in results: a.append(int(details.name)) # The guide's name attribute contains its position in bp resultssorted = zip(results, a) resultssorted = sorted(resultssorted, key=itemgetter(1), reverse=False) resultssorted = [item for item, null in resultssorted] resultssorted[:5] resultssorted[-5:] Explanation: The scores in this object are an ordered list, with all HpaII scores first, then all BfaI scores and finally all ScrFI scores. We are interested in the distributioœn of scores along the DNA fragment, irrespective of the enzyme used to generate them. Thus, we want to rearrange the list with all scores from 5' to 3'. End of explanation scores = [score for score, details in resultssorted] def plot_score_histogram(scores): ''' Input is a list of scores only (as ints) ''' path = '/Library/Fonts/Microsoft/Arial.ttf' prop = matplotlib.font_manager.FontProperties(fname=path) matplotlib.rcParams['font.family'] = prop.get_name() bins = range(0,106,5) figure() hist(scores, bins, color="gray") tick_params(axis=u'both', labelsize=18) #savefig('Scaffold score distribution.pdf', format="pdf") print(bins) plot_score_histogram(scores) Explanation: Let's extract the scores and plot their distribution on a histogram. 
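Alongside the histogram, the same score bands can be tallied directly from the scores list built above; a quick sketch using only that list:
print("score <= 4:   %d" % sum(1 for s in scores if s <= 4))
print("score 95-99:  %d" % sum(1 for s in scores if 95 <= s <= 99))
print("score == 100: %d" % sum(1 for s in scores if s == 100))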
End of explanation def find_clusters_by_cutoff(resultssorted, x): starts=[] ends=[] previtemgood = 0 for index, (score, details) in enumerate(resultssorted): if score >= x and previtemgood ==0 and len(details) >= 20: #this avoids guides that are shorter than 20 bp (from where an enzyme cuts twice in close proximity) starts.append((index, score, int(details.name))) previtemgood = 1 elif score >= x and previtemgood == 1 and len(details) >=20: None elif previtemgood == 1: previtemgood =0 ends.append((index-1, resultssorted[index-1][0], int(resultssorted[index-1][1].name))) run_positions = zip(starts, ends) goodruns_length = sorted([end - start for (start, i, j), (end,l,m) in run_positions], reverse=True) return (goodruns_length, run_positions) threshold = range(0, 105, 5) probeyield = [] for item in threshold: probeyield.append((item, sum(find_clusters_by_cutoff(resultssorted, item)[0][0:143]))) print(probeyield) %pylab inline figure() plot([b for b, c in probeyield], [c for b, c in probeyield], "o") Explanation: So, there are ~5000 guides that are quite non-specific (score <= 4) and >14,000 guides that have a score of 100 and a further 4000 that score between 95 and 99. Finding clusters of high-scoring guides To make a library of useful guides, we'd like to PCR through continuous clusters of the highest-scoring ones. Our oligonucleotide vendor (IDT) has a minimum order of 288 oligos (3x 96-well plates) on a small and relatively inexpensive scale (5 pmol). To work within this limitation, we'd like to pick out 144 possible regions to PCR-amplify. If we are only willing to accept guides with a score of 100, we'd predict that our 144 PCR products will be short (there are probably few long spans of perfect-scoring guides). However, if we relax our requirement to >=99, we may get longer PCR products and thus more guides in our library. How does this scale at different cutoffs/thresholds? End of explanation threshold = 95 runs = find_clusters_by_cutoff(resultssorted, threshold)[1] #(countofguides, (startguidenumberfrom5', startscore, startpositionbp), (endguidenumberfrom5', endscore, endpositionbp)) goodruns = sorted([((i_end - i), (i, s, pos), (i_end, s_end, pos_end)) for (i, s, pos), (i_end, s_end, pos_end) in runs], reverse=True) Explanation: Our "yield" of guides descends steadily from a cutoff of >=5 to a cutoff of >=95, then drops from 2894 guides produced at a cutoff of 95 to 1719 at 100. So, a cutoff of >=95 might be a good balance between specificity and yield. End of explanation probeyield = [] x = 95 fraction = 7.0 overlap = 2.0 region_to_extract = len(resultssorted)/fraction for i in [float(item)/overlap for item in range(int(overlap*fraction+2.0))]: goodruns = find_clusters_by_cutoff(resultssorted[int(region_to_extract*i):int(region_to_extract*(i+1))], x)[0] probeyield.append((i, int(region_to_extract*i), sum(goodruns[0:143]))) if sum(goodruns[0:143]) == 0: break probeyield Explanation: We next asked what happens if we concentrate the guides into a smaller region. Does the guide yield scale with the region selected? To test this, we cut the input DNA into sections of 1/7 the ~21MB starting length End of explanation #Modify resultssorted to only include the 3.4MB region used. (18121076 to (21505465+786) = 21506251) resultssorted = [item for item in resultssorted if int(item[1].name) >= 18121076 and int(item[1].name) <= 21506251] scores = [score for score, details in resultssorted] probeyield Explanation: Looks like the final 1/7 of the scaffold has the densest guide yield. 
End of explanation probeyield = [] x =95 one_mb = len(resultssorted)/7 i = 6.0 amplicon_positions = [] scores = [(score, len(details)) for score, details in resultssorted[int(one_mb*i):int(one_mb*(i+1))]] scores_and_details = resultssorted[int(one_mb*i):int(one_mb*(i+1))] runs=[] ends=[] previtemgood = 0 for index, (item, guide_length) in enumerate(scores): if item >= x and previtemgood ==0 and guide_length > 19: runs.append(index) previtemgood = 1 elif item >= x and previtemgood == 1 and guide_length > 19: None elif previtemgood == 1: previtemgood =0 ends.append(index) runs = zip(runs, ends) for i in runs: start, end = i try: fiveprimeabuttingguide = scores_and_details[start-1][1] threeprimeabuttingguide = scores_and_details[end][1] except: fiveprimeabuttingguide = "0" threeprimeabuttingguide = "None" print "error" if end - start > 3: amplicon_positions.append(((fiveprimeabuttingguide, threeprimeabuttingguide), end-start, scores_and_details[start:end])) #goodruns = sorted([length for length, guide_list in amplicon_positions], reverse=True) goodruns = sorted(amplicon_positions, reverse=True, key=itemgetter(1))# sorts by the number of guides in list #probeyield.append((i, int(one_mb*i), x, sum([j for i, j, k in goodruns[0:144]]))) len(goodruns) Explanation: OK, list is set up to contain: - Positions of 5' and 3' adjacent guides - Number of guides in amplicons - Actual guide details Write out XML file of these amplicons: Try and alter to better figure out the edge cases of good and bad guides being close to each other and making it hard to prime: End of explanation for index, item in enumerate(goodruns[49:170]): print(str(index)) print("Before: \t" + item[0][0].id + " " + item[0][0].name) #could take all of this print("First good:\t" + item[2][0][1].id + " " + item[2][0][1].name) distancebetweenlastbadandfirstgood = int(item[2][0][1].name) - int(item[0][0].name) if "F" in item[0][0].id and "R" in item[2][0][1].id: print("||||||||||||||||||||||OK, handleable; prime ~10 after start of prior bad. But include another ~18 of first good in primeable region.") elif "R" in item[0][0].id and "R" in item[2][0][1].id and distancebetweenlastbadandfirstgood <=20: print("||||||||||||||||||||||Hard. Prime at window 1nt after start of last bad start, and all the way into 15nt after first good start (to encompass restriction site).") elif "R" in item[0][0].id and "F" in item[2][0][1].id and distancebetweenlastbadandfirstgood: print("||||||||||||||||||||||Prime at window 1nt after start of last bad start, and all the way into 18nt after first good start.") elif "F" in item[0][0].id and "F" in item[2][0][1].id and distancebetweenlastbadandfirstgood < 10: print("||||||||||||||||||||||Impossible. Exclude; too close together and in same direction") else: print("3333333333333333333333333333333333") print(str(distancebetweenlastbadandfirstgood) + "\n") item Explanation: First, go through the left edge of each amplicon to figure out priming rules depending on what's next to stuff. 
End of explanation ''' for index, item in enumerate(goodruns[50:200]): print(str(index)) print("Last good:\t" + item[2][-1][1].id + " " + item[2][-1][1].name) print("After: \t" + item[0][1].id + " " + item[0][1].name) # distancebetweenlastgoodandfirstbad = int(int(item[0][1].name) - int(item[2][-1][1].name)) if "F" in item[2][-1][1].id and "R" in item[0][1].id: print("||||||||||||||||||||||Prime left 10 after start of next bad to ~15 after last good.") elif "R" in item[2][-1][1].id and "F" in item[0][1].id: print("||||||||||||||||||||||Prime anywhere right of ~2nt after last good to ~19nt after start of first bad") elif "R" in item[2][-1][1].id and "R" in item[0][1].id and distancebetweenlastgoodandfirstbad < 20: print("||||||||||||||||||||||Prime anywhere right of ~2nt after last good to ~10nt after start of first bad") elif "F" in item[2][-1][1].id and "F" in item[0][1].id and distancebetweenlastgoodandfirstbad < 20: print("||||||||||||||||||||||Prime anywhere right of ~2nt after last good to ~19nt after start of first bad") else: print("3333333333333333333333333333333333") print(str(distancebetweenlastgoodandfirstbad) + "\n") ''' Explanation: Next, go through various scenarios for the right edge: (constraints on priming regions given different configurations of last good and first bad guides) End of explanation # Set up the input for primer3: # Sequence available to PCR: guide_count = [] amps_in_3MB = [] for index, item in enumerate(goodruns[0:400]): left_outside = item[0][0].id[-1] left_inside = item[2][0][1].id[-1] if left_outside == "F" and left_inside == "R": permissible_start = int(item[0][0].name) + 10 required_start_absolute = int(item[2][0][1].name) +14 elif left_outside == "R" and left_inside == "R": permissible_start = int(item[0][0].name) + 1 required_start_absolute = int(item[2][0][1].name) +14 elif left_outside == "R" and left_inside == "F": permissible_start = int(item[0][0].name) + 1 required_start_absolute = int(item[2][0][1].name) +18 elif left_outside == "F" and left_inside == "F": permissible_start = int(item[0][0].name) + 10 required_start_absolute = int(item[2][0][1].name) +18 else: print("error on left") right_inside = item[2][-1][1].id[-1] right_outside = item[0][1].id[-1] if right_outside == "F" and right_inside == "R": permissible_end = int(item[0][1].name) + 19 required_end_absolute = int(item[2][-1][1].name) + 2 elif right_outside == "R" and right_inside == "F": permissible_end = int(item[0][1].name) + 10 required_end_absolute = int(item[2][-1][1].name) + 8 elif right_outside == "R" and right_inside == "R": permissible_end = int(item[0][1].name) + 10 required_end_absolute = int(item[2][-1][1].name) + 2 elif right_outside == "F" and right_inside == "F": permissible_end = int(item[0][1].name) + 19 required_end_absolute = int(item[2][-1][1].name) + 8 else: print("error on right") amp = longscaffold[0][permissible_start:permissible_end] # Bounds that need to be included in PCR product : required_start_relative = required_start_absolute-permissible_start required_end_relative = required_end_absolute - permissible_start amp.dbxrefs=((required_start_relative, required_end_relative)) # Set up some other stuff: amp.name =str(item[0][0].name) amp.id =str(item[0][0].name) amp.description=str(item[1]) amp.seq.alphabet = IUPACAmbiguousDNA() if "NNNNN" in amp.seq: # Exclude if it has runs of Ns None #print amp.name + " contains ns " + str(item[1]) else: amps_in_3MB.append(amp) guide_count.append(item[1]) amps_in_3MB_gen = (i for i in amps_in_3MB) print 
sum(guide_count[0:144]) def al_primersearch(current_amp): ''' Returns a dict of primer parameters. ''' length_of_required_region = current_amp.dbxrefs[1]-current_amp.dbxrefs[0] start_of_required_region = current_amp.dbxrefs[0] end_of_required_region = current_amp.dbxrefs[1] primeableregionleft_start = str(0) primeableregionleft_length = str(start_of_required_region) primeableregionright_start = str(end_of_required_region) primeableregionright_length = str(len(current_amp)-end_of_required_region) boulder = open("current_amp.boulder", "w") boulder.write("SEQUENCE_ID=" + current_amp.id + "\n") boulder.write("SEQUENCE_TEMPLATE=" + str(current_amp.seq) + "\n") #boulder.write("SEQUENCE_INCLUDED_REGION=" + "0," + str(len(current_amp.seq)) + "\n") #boulder.write("SEQUENCE_TARGET=" + str(current_amp.dbxrefs[0]) + "," + str(current_amp.dbxrefs[1] - current_amp.dbxrefs[0]) + "\n") boulder.write("SEQUENCE_PRIMER_PAIR_OK_REGION_LIST=" + primeableregionleft_start + "," + primeableregionleft_length+","\ +primeableregionright_start+"," + primeableregionright_length + "\n") boulder.write("PRIMER_PRODUCT_SIZE_RANGE=" +str(length_of_required_region) + "-" + str(len(current_amp)) + "\n") boulder.write("PRIMER_PRODUCT_OPT_SIZE=" + str(length_of_required_region) + "\n") #boulder.write("P3_FILE_FLAG=1\n") boulder.write("=\n") boulder.close() primer_output = subprocess.check_output(["primer3_core", "current_amp.boulder",\ "-p3_settings_file=primer3_global_parameters.txt"]) primerdict = {} for item in primer_output.split("\n")[0:-3]: val = item.split("=")[1] try: val = float(val) except: pass primerdict[item.split("=")[0]]=val return primerdict def al_screen_primer(primer): ''' Input is a primer as a string. ''' currfile = open("currprimer.fa", "w") currfile.write(">" + str(primer) + "\n") currfile.write(str(primer)) currfile.close() blastn_cline = NcbiblastnCommandline(query="currprimer.fa", db="xl71", \ task = "blastn-short",outfmt=5, out="primerblast.tmp", max_target_seqs=100, num_threads = 8) blastn_cline result = blastn_cline() badprimer = 0 # Parse data result_handle = open("primerblast.tmp") blast_record = NCBIXML.read(result_handle) # if there were multiple queries, use NCBIXML.parse(result_handle) # How many matches are there with more than 14 or matching bases? 
match14 = 0 for x in blast_record.alignments: for y in x.hsps: if y.positives > 14: match14 = match14 + 1 match15 = 0 for x in blast_record.alignments: for y in x.hsps: if y.positives > 15: match15 = match15 + 1 #print(primer.description) #print(match14) #print(match15) # Set a cutoff of if match14 > 40: badprimer = 1 elif match15 > 10: badprimer = 1 return badprimer def al_collect_good_primers(template, primerdict): i = 0 badlist = [] try: while i < primerdict["PRIMER_PAIR_NUM_RETURNED"]: bad = 0 leftprimer = primerdict[str("PRIMER_LEFT_" + str(i) + "_SEQUENCE")] leftprimer_start = str(primerdict[str("PRIMER_LEFT_"+ str(i))].split(",")[0]) leftprimer_length = str(primerdict[str("PRIMER_LEFT_"+ str(i))].split(",")[1]) leftprimer_gc = str(primerdict[str("PRIMER_LEFT_"+ str(i) + "_GC_PERCENT")]) leftprimer_tm = str(primerdict[str("PRIMER_LEFT_"+ str(i) + "_TM")]) rightprimer = primerdict[str("PRIMER_RIGHT_" + str(i) + "_SEQUENCE")] rightprimer_start = str(primerdict[str("PRIMER_RIGHT_"+ str(i))].split(",")[0]) rightprimer_length = str(primerdict[str("PRIMER_RIGHT_"+ str(i))].split(",")[1]) rightprimer_gc = str(primerdict[str("PRIMER_RIGHT_"+ str(i) + "_GC_PERCENT")]) rightprimer_tm = str(primerdict[str("PRIMER_RIGHT_"+ str(i) + "_TM")]) product_len = int(rightprimer_start) + int(rightprimer_length) - int(leftprimer_start) left_bad = al_screen_primer(leftprimer) right_bad = al_screen_primer(rightprimer) #print bad if left_bad == 0 and right_bad == 0: with open("primerlist.txt", "a") as primerlist: primerlist.write("%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n" % \ (template.name,leftprimer,leftprimer_start,leftprimer_length,leftprimer_tm,leftprimer_gc,\ rightprimer,rightprimer_start,rightprimer_length,rightprimer_tm,rightprimer_gc,\ len(template.seq),str(product_len),template.description)) primerlist.close() print("Success!") break if left_bad ==1: print("iteration" + str(i) + "left primer" + leftprimer + "is bad") if right_bad == 1: print("iteration" + str(i) + "right primer" + rightprimer + "is bad") i = i +1 if left_bad ==1 and right_bad ==1 and i ==primerdict["PRIMER_PAIR_NUM_RETURNED"]: with open("primerlist.txt", "a") as primerlist: primerlist.write("All the primers were bad for this amplicon!\n") primerlist.close() except: with open("primerlist.txt", "a") as primerlist: primerlist.write("Primer3 failed to find any primers for this amplicon! " + primerdict["SEQUENCE_PRIMER_PAIR_OK_REGION_LIST"] + "\n") primerlist.close() print("Primer3 failed to find any primers for this amplicon! 
" + primerdict["SEQUENCE_PRIMER_PAIR_OK_REGION_LIST"] + "\n") print sys.exc_info() amps_in_3MB[0] #with open("primerlist.txt", "w") as primerlist: #primerlist.write("Sequence_id\tforward_seq\tforward_start\tforward_length\tforward_tm\tforward_gc\treverse_seq\treverse_start\treverse_length\treverse_tm\treverse_gc\tinput_seq_length\tPCR_product_length\tGuides_Contained\n") #primerlist.close() for item in amps_in_3MB: current_amp = item primerdict = al_primersearch(current_amp) al_collect_good_primers(item, primerdict) goodruns_dict = {} for item in goodruns: goodruns_dict[int(item[0][0].name)] = item print(longscaffold[0][19991274:19991274+3230].seq) goodruns_dict[19991274] Explanation: Summarize priming rules: Left edge: Bad Good Permissible priming distance: permissible_start (from start of prior bad) to required_start (from start of first good) F R 10, 14 R R 1, 14 R F 1, 18 F F 10, 18 Right edge: Good Bad Permissible priming distance: required_end (after start of last good) to permissible_end (after start of next bad) R F 2,19 F R 8,10 R R 2,10 F F 8,19 End of explanation amps_in_3MB[:2] for item in amps_in_3MB: item.id = str(item.id) len(amps_in_3MB) from random import random pooled_PCR_guides = al_digesttarget(amps_in_3MB) pooled_PCR_guides = [item for item in pooled_PCR_guides] SeqIO.write(pooled_PCR_guides, "pooled_PCR_guides.fasta", "fasta") print len(pooled_PCR_guides) #Now that we have the individual guides in the amps used, pick out which real PCR products we made: ## Make a list of amplicon starts and lengths: import pandas as pd pcr_starts = pd.read_table("used_pcrs.csv", header = None) pcr_starts.columns = ["amplicon", "start", "length"] pcr_starts bounds = [] for row in [item for item in pcr_starts.itertuples()]: bounds.append((row[0]+1, row[1] + row[2], row[1]+row[2]+row[3])) bounds pickle.dump(bounds, open("boundaries of 144 pcrs used in 3mb region within scaffold102974.pkl", "wb")) print min([item[1] for item in bounds]) max([item[2] for item in bounds]) guides_used = [] for item in pooled_PCR_guides: for boundary in bounds: scaffold_position = int(item.name) + int(item.dbxrefs[0]) if scaffold_position >= boundary[1] and scaffold_position <= boundary[2]: guides_used.append(item) len(guides_used) %run "../Sequence Analysis/al_funcs.ipynb" allscores = [] for guide in guides_used: oneguide = al_scoreguide_density(guide, "xl71", xl71genomedict) allscores.append(oneguide) for item allscores[2][2][1][1][1] for item in allscores[0:10000]: for subitem in item: try: if subitem[2][1][1] == 'Scaffold27036': print(subitem[2][1]) except: None subitem[2][1][1] scaffold_hit_dict = {} scaffold_hit_dict_t1 = {} for item in allscores: for hit in item[2]: #print hit try: scaffold_hit_dict[hit[1][1]] = scaffold_hit_dict[hit[1][1]] + hit[0] scaffold_hit_dict_t1[hit[1][1]] = scaffold_hit_dict_t1[hit[1][1]] + hit[1][0] except: scaffold_hit_dict[hit[1][1]] = hit[0] scaffold_hit_dict_t1[hit[1][1]] = hit[1][0] sorted_scores = sorted(scaffold_hit_dict.items(), key=itemgetter(1), reverse=True) len(sorted_scores) sorted_scores[1] sorted_scores_t1 = sorted(scaffold_hit_dict_t1.items(), key=lambda (k,v): operator.itemgetter(1)(k), reverse=True) sorted_scores_t1 #pickle.dump(sorted_scores, open("used_guides_scores.pkl", "wb" )) out = xl71genomedict['Scaffold58878'] out.id = "0" out SeqIO.write(out, "Scaffold58878.fasta", "fasta") for item in scaffold_hit_dict.items()[0:4]: print item print len(xl71genomedict[str(item[0])]) SeqIO.write(xl71genomedict["Scaffold102974"][18121082:18221082], 
"Scaffold102974subset.fasta", "fasta") # write some in silico pcr to predict how much of scaffold 27036 will be amplified with primers used filename = "primer.fasta" primers = pd.read_table("20141112-primerlist.csv") primers = list(primers.itertuples()) def blast_primer(item): p = [] pF = p.append(SeqRecord(seq = Seq(item[2], IUPACAmbiguousDNA()), id=str(str(item[1]) + "F"), name = "", description = "")) pR = p.append(SeqRecord(seq = Seq(item[3], IUPACAmbiguousDNA()), id=str(str(item[1]) + "R"), name = "", description = "")) SeqIO.write(p, "primer.fasta", "fasta") blastn_cline = NcbiblastnCommandline(query=filename, db="xl71", \ task = "blastn-short",outfmt=5, out=filename + ".blast", max_target_seqs=100, num_threads = 7, evalue = 100) #timeit.timeit(blastn_cline, number =1) blastn_cline() result_handle = open(filename + ".blast") hits = NCBIXML.parse(result_handle) hits = [item for item in hits] return hits def parse_primer_hits(hits): priming_dict = {} for item in hits: for align in item.alignments: for spot in align.hsps: if spot.positives >12: # Could actually calculate Tm here try: priming_dict[align.title].append((spot.sbjct_start, spot.sbjct_end, item.query)) except: priming_dict[align.title] = [(spot.sbjct_start, spot.sbjct_end, item.query)] return(priming_dict) ''' In silico PCR strategy: Given a list of scaffolds and places that either of a pair of primers bind: - Find where two primers bind within e.g. 10kb of each other - For those pairs, ask if they're in the opposite orientation - If they are, log this to a result with the location and length of the PCR product For future: def screen_hits(priming_dict): viableproducts = [] for item in priming_dict.iteritems(): #Get the intervals between binding sites on a scaffold if len(item[1]) > 1: k = sorted([j[0] for j in item[1:][0]], reverse = True) interval = [] for index,n in list(enumerate(k))[:-1]: interval.append(k[index] - k[index+1]) # Next, check if primers are binding in opposed orientations: # Pick out which sites are close together; test if they're going in the same direction k = list(enumerate(sorted([j for j in item[1:][0]], reverse = True))) for index, n in enumerate(interval): # Goes through intervals, picks out primers generating that interval if n < 12000: orientation1 = k[index][1][0] - k[index][1][1] # First primer has same index as interval list position orientation2 = k[index+1][1][0] - k[index+1][1][1] # Second primer has index+1 of interval list position # Test for the sign (=direction) of the start/end subtraction for primer, if it's the same for both there's no pcr product. 
samesign = all(item >= 0 for item in (orientation1, orientation2)) or all(item < 0 for item in (orientation1, orientation2)) #print "\n" #if samesign == True: #print item #print n #print "No PCR product" if samesign == False: #print item[0] #print n #print "Yes PCR product" print item[0] if item[0][0] != "gnl|BL_ORD_ID|3307 Scaffold102974": viableproducts.append((item, n)) return viableproducts all_viable = [] for item in primers: hits = blast_primer(item) priming_dict = parse_primer_hits(hits) viableproducts = screen_hits(priming_dict) all_viable.append(viableproducts) all_viable c = Counter([item[0][0][0] for item[0] in all_viable]) boop = (item for item in all_viable) from itertools import chain ch = list(chain.from_iterable(all_viable)) len(all_viable) len(list(ch)) c = Counter([item[0][0] for item in ch]) list(c.iteritems()) Explanation: Next up: look at the guides generated from the selected amplicons and rank non-target scaffolds for recognition. End of explanation
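The pairing rule buried in the nested loops of screen_hits above is easier to see in isolation. Below is a hedged, dependency-free sketch of the same test on plain (start, end) tuples: like the notebook's version it only checks that two binding sites on a scaffold lie within a maximum distance and point in opposite orientations, not that the primers actually converge. The function name, the tuple format and the 12 kb cutoff are illustrative assumptions, not part of the original analysis.

def candidate_products(hits, max_product=12000):
    '''hits: list of (sbjct_start, sbjct_end) pairs for a single scaffold.'''
    products = []
    for i in range(len(hits) - 1):
        for j in range(i + 1, len(hits)):
            left, right = hits[i], hits[j]
            span = max(*left, *right) - min(*left, *right)   # outer distance ~ product length
            if span > max_product:
                continue                                     # binding sites too far apart
            samesign = (left[1] - left[0]) * (right[1] - right[0]) > 0
            if not samesign:                                 # opposite orientations -> candidate
                products.append((min(*left, *right), max(*left, *right), span))
    return products

# A forward hit (1000 -> 1019) pairs with a reverse hit (4520 -> 4501); hits in the same
# orientation, or hits more than max_product apart, are ignored.
print(candidate_products([(1000, 1019), (4520, 4501), (9000, 9019)]))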
12,879
Given the following text description, write Python code to implement the functionality described below step by step Description: Example Q3 Step1: Creating the Default Filter Pipeline Step2: The PipelineManager is analogous to the ChannelLibrary insomuchas it provides the user with an interface to programmatically modify the filter pipeline, and to save and load different versions of the pipeline. Step3: Pipelines are fairly predictable, and will provide some subset of the functionality of demodulating, integrating, average, and writing to file. Some of these can be done on hardware, some in software. The PipelineManager can guess what the user wants for a particular qubit by inspecting which equipment has been assigned to it using the set_measure command for the ChannelLibrary. For example, this ChannelLibrary has defined X6-1000M cards for readout, and the description of this instrument indicates that the highest level available stream is integrated. Thus, the PipelineManager automatically inserts the remaining averager and writer. Step4: Sometimes, for debugging purposes, one may wish to add multiple pipelines per qubit. Additional pipelines can be added explicitly by running Step5: Step6: We can print the properties of a single node Step7: We can print the properties of individual filters or subgraphs Step8: Dictionary access is provided to allow drilling down into the pipelines. One can use the specific label of a filter or simple its type in this access mode Step9: Here uncommitted changes are shown. This can be rectified in the standard way Step10: Programmatic Modification of the Pipeline Some simple convenience functions allow the use to easily specify complex pipeline structures. Step11: Note the name change. We refer to the pipeline by the stream type of the first element. Step12: Step13: As with the ChannelLibrary we can list save, list, and load versions of the filter pipeline. Step14: Step15: Pipeline examples
Python Code: from QGL import * cl = ChannelLibrary(":memory:") # Create five qubits and supporting hardware for i in range(5): q1 = cl.new_qubit(f"q{i}") cl.new_APS2(f"BBNAPS2-{2*i+1}", address=f"192.168.5.{101+2*i}") cl.new_APS2(f"BBNAPS2-{2*i+2}", address=f"192.168.5.{102+2*i}") cl.new_X6(f"X6_{i}", address=0) cl.new_source(f"Holz{2*i+1}", "HolzworthHS9000", f"HS9004A-009-{2*i}", power=-30) cl.new_source(f"Holz{2*i+2}", "HolzworthHS9000", f"HS9004A-009-{2*i+1}", power=-30) cl.set_control(cl[f"q{i}"], cl[f"BBNAPS2-{2*i+1}"], generator=cl[f"Holz{2*i+1}"]) cl.set_measure(cl[f"q{i}"], cl[f"BBNAPS2-{2*i+2}"], cl[f"X6_{i}"][1], generator=cl[f"Holz{2*i+2}"]) cl.set_master(cl["BBNAPS2-1"], cl["BBNAPS2-1"].ch("m2")) cl.commit() Explanation: Example Q3: Managing the Filter Pipeline This example notebook shows how to use the PipelineManager to modify the signal processing on qubit data. © Raytheon BBN Technologies 2018 We initialize a slightly more advanced channel library: End of explanation from auspex.qubit import * Explanation: Creating the Default Filter Pipeline End of explanation pl = PipelineManager() Explanation: The PipelineManager is analogous to the ChannelLibrary insomuchas it provides the user with an interface to programmatically modify the filter pipeline, and to save and load different versions of the pipeline. End of explanation pl.create_default_pipeline() pl.show_pipeline() Explanation: Pipelines are fairly predictable, and will provide some subset of the functionality of demodulating, integrating, average, and writing to file. Some of these can be done on hardware, some in software. The PipelineManager can guess what the user wants for a particular qubit by inspecting which equipment has been assigned to it using the set_measure command for the ChannelLibrary. For example, this ChannelLibrary has defined X6-1000M cards for readout, and the description of this instrument indicates that the highest level available stream is integrated. Thus, the PipelineManager automatically inserts the remaining averager and writer. End of explanation pl.add_qubit_pipeline("q1", "demodulated") pl.show_pipeline() Explanation: Sometimes, for debugging purposes, one may wish to add multiple pipelines per qubit. Additional pipelines can be added explicitly by running: End of explanation pl.ls() Explanation: End of explanation pl["q1 integrated"].print() Explanation: We can print the properties of a single node End of explanation pl.print("q1 integrated") Explanation: We can print the properties of individual filters or subgraphs: End of explanation pl["q1 integrated"]["Average"]["Write"].filename = "new.h5" pl.print("q1 integrated") Explanation: Dictionary access is provided to allow drilling down into the pipelines. One can use the specific label of a filter or simple its type in this access mode: End of explanation cl.commit() pl.print("q1 integrated") Explanation: Here uncommitted changes are shown. This can be rectified in the standard way: End of explanation pl.commit() pl.save_as("simple") pl["q1 demodulated"].clear_pipeline() pl["q1 demodulated"].stream_type = "raw" pl.recreate_pipeline() # pl["q1"]["blub"].show_pipeline() pl.show_pipeline() Explanation: Programmatic Modification of the Pipeline Some simple convenience functions allow the use to easily specify complex pipeline structures. End of explanation pl["q1 raw"].show_pipeline() Explanation: Note the name change. We refer to the pipeline by the stream type of the first element. 
End of explanation pl["q1 raw"].add(Display(label="Raw Plot")) pl["q1 raw"]["Demodulate"].add(Average(label="Demod Average")).add(Display(label="Demod Plot")) pl.show_pipeline() Explanation: End of explanation pl.session.commit() pl.save_as("custom") pl.ls() pl.load("simple") pl.show_pipeline() Explanation: As with the ChannelLibrary we can list save, list, and load versions of the filter pipeline. End of explanation pl.ls() Explanation: End of explanation # a basic pipeline that uses 'raw' data a the beginning of the data processing def create_standard_pipeline(): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "raw" pl[ql].create_default_pipeline(buffers=False) pl[ql].if_freq = qb.measure_chan.autodyne_freq pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq pl[ql]["Demodulate"]["Integrate"].simple_kernel = True pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7 pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.3e-6 #pl[ql]["Demodulate"]["Integrate"].add(Write(label="RR-Writer", groupname=ql+"-int")) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # if you only want to save data integrated with the single-shot filter def create_integrated_pipeline(save_rr=False, plotting=True): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "integrated" pl[ql].create_default_pipeline(buffers=False) pl[ql].kernel = f"{ql.upper()}_SSF_kernel.txt" if save_rr: pl[ql].add(Write(label="RR-Writer", groupname=ql+"-rr")) if plotting: pl[ql]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # create to single-shot fidelity pipelines for two qubits def create_fidelity_pipeline(): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "raw" pl[ql].create_default_pipeline(buffers=False) pl[ql].if_freq = qb.measure_chan.autodyne_freq pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq pl[ql].add(FidelityKernel(save_kernel=True, logistic_regression=False, set_threshold=True, label=f"Q{ql[-1]}_SSF")) pl[ql]["Demodulate"]["Integrate"].simple_kernel = True pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7 pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.3e-6 #pl[ql]["Demodulate"]["Integrate"].add(Write(label="RR-Writer", groupname=ql+"-int")) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # optionally save the demoded data def create_RR_pipeline(plot=False, write_demods=False): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "raw" pl[ql].create_default_pipeline(buffers=False) pl[ql].if_freq = qb.measure_chan.autodyne_freq pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq if write_demods: 
pl[ql]["Demodulate"].add(Write(label="demod-writer", groupname=ql+"-demod")) pl[ql]["Demodulate"]["Integrate"].simple_kernel = True pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7 pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.3e-6 pl[ql]["Demodulate"]["Integrate"].add(Write(label="RR-Writer", groupname=ql+"-int")) if plot: pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # save everything... using data buffers instead of writing to file def create_full_pipeline(buffers=True): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3']), buffers=True) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "raw" pl[ql].create_default_pipeline(buffers=buffers) if buffers: pl[ql].add(Buffer(label="raw_buffer")) else: pl[ql].add(Write(label="raw-write", groupname=ql+"-raw")) pl[ql].if_freq = qb.measure_chan.autodyne_freq pl[ql]["Demodulate"].frequency = qb.measure_chan.autodyne_freq if buffers: pl[ql]["Demodulate"].add(Buffer(label="demod_buffer")) else: pl[ql]["Demodulate"].add(Write(label="demod_write", groupname=ql+"-demod")) pl[ql]["Demodulate"]["Integrate"].simple_kernel = True pl[ql]["Demodulate"]["Integrate"].box_car_start = 3e-7 pl[ql]["Demodulate"]["Integrate"].box_car_stop = 1.6e-6 if buffers: pl[ql]["Demodulate"]["Integrate"].add(Buffer(label="integrator_buffer")) else: pl[ql]["Demodulate"]["Integrate"].add(Write(label="int_write", groupname=ql+"-integrated")) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Demodulate"]["Integrate"]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") return pl # A more complicated pipeline with a correlator # These have to be coded more manually because the correlator needs all the correlated channels specified. # Note that for tomography you're going to want to save the data variance as well, though this can be calculated # after the fact if you save the raw shots (save_rr). 
def create_tomo_pipeline(save_rr=False, plotting=True): pl = PipelineManager() pl.create_default_pipeline(qubits=(cl['q2'],cl['q3'])) for ql in ['q2', 'q3']: qb = cl[ql] pl[ql].clear_pipeline() pl[ql].stream_type = "integrated" pl[ql].create_default_pipeline(buffers=False) pl[ql].kernel = f"{ql.upper()}_SSF_kernel.txt" pl[ql]["Average"].add(Write(label='var'), connector_out='final_variance') pl[ql]["Average"]["var"].groupname = ql + '-main' pl[ql]["Average"]["var"].datasetname = 'variance' if save_rr: pl[ql].add(Write(label="RR-Writer", groupname=ql+"-rr")) if plotting: pl[ql]["Average"].add(Display(label=ql+" - Final Average", plot_dims=0)) pl[ql]["Average"].add(Display(label=ql+" - Partial Average", plot_dims=0), connector_out="partial_average") # needed for two-qubit state reconstruction pl.add_correlator(pl['q2'], pl['q3']) pl['q2']['Correlate'].add(Average(label='corr')) pl['q2']['Correlate']['Average'].add(Write(label='corr_write')) pl['q2']['Correlate']['Average'].add(Write(label='corr_var'), connector_out='final_variance') pl['q2']['Correlate']['Average']['corr_write'].groupname = 'correlate' pl['q2']['Correlate']['Average']['corr_var'].groupname = 'correlate' pl['q2']['Correlate']['Average']['corr_var'].datasetname = 'variance' return pl Explanation: Pipeline examples: Below are some examples of how more complicated pipelines can be constructed. Defining these as functions allows for quickly changing the structure of the data pipeline depending on the experiment being done. It also improves reproducibility and documents pipeline parameters. For example, to change the pipeline and check its construction, python pl = create_tomo_pipeline(save_rr=True) pl.show_pipeline() Hopefully the examples below will show you some of the more advanced things that can be done with the data pipelines in Auspex. End of explanation
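One small addition that makes these factory functions even easier to swap in and out is a registry keyed by experiment name. This is a suggested pattern layered on top of the notebook, not part of Auspex itself, and it only uses the functions defined above.

PIPELINES = {
    "standard":   create_standard_pipeline,
    "integrated": create_integrated_pipeline,
    "fidelity":   create_fidelity_pipeline,
    "rr":         create_RR_pipeline,
    "full":       create_full_pipeline,
    "tomo":       create_tomo_pipeline,
}

def build_pipeline(name, **kwargs):
    '''Look up a pipeline factory by name and build it with its own keyword options.'''
    try:
        factory = PIPELINES[name]
    except KeyError:
        raise ValueError(f"Unknown pipeline '{name}'; choose from {sorted(PIPELINES)}")
    return factory(**kwargs)

# e.g. pl = build_pipeline("tomo", save_rr=True); pl.show_pipeline()

Keeping the experiment-specific choices inside the factories and the selection logic in one place means a measurement script only needs to record the pipeline name and its keyword arguments to be reproducible.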
12,880
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: convolve a given image using python or scipy
Python Code: from scipy.signal import convolve2d
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

# Average the colour channels of the (already loaded) image to get a grayscale array
gray = np.mean(image, axis=2)

# 2-D Gaussian kernel built as the outer product of two 1-D Gaussian profiles
x = np.linspace(-6, 6, 40)
fx = norm.pdf(x, loc=0, scale=1)
filt = np.outer(fx, fx)

# Convolve and display the blurred result
output = convolve2d(gray, filt)
plt.imshow(output)
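Two caveats about the snippet above: the kernel is not normalised, so output intensities are rescaled, and convolve2d defaults to mode="full", so the output is larger than the input. A hedged variant that fixes both is sketched below; the random array is only a stand-in for whatever image is assumed to have been loaded already.

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import convolve2d
from scipy.stats import norm

image = np.random.rand(128, 128, 3)            # stand-in for the real, pre-loaded image
gray = np.mean(image, axis=2)

x = np.linspace(-6, 6, 40)
kernel = np.outer(norm.pdf(x), norm.pdf(x))
kernel /= kernel.sum()                         # normalise so overall brightness is preserved

smoothed = convolve2d(gray, kernel, mode="same", boundary="symm")
plt.imshow(smoothed, cmap="gray")
plt.show()

# scipy.ndimage gives the same effect in a single call (sigma is measured in pixels):
# from scipy.ndimage import gaussian_filter
# smoothed = gaussian_filter(gray, sigma=3)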
12,881
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing Step8: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step10: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step12: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU Step15: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below Step18: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. Step21: Encoding Implement encoding_layer() to create a Encoder RNN layer Step24: Decoding - Training Create a training decoding layer Step27: Decoding - Inference Create inference decoder Step30: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note Step33: Build the Neural Network Apply the functions you implemented above to Step34: Neural Network Training Hyperparameters Tune the following parameters Step36: Build the Graph Build the graph using the neural network you implemented. Step40: Batch and pad the source and target sequences Step43: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. Step45: Save Parameters Save the batch_size and save_path parameters for inference. Step47: Checkpoint Step50: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the <UNK> word id. Step52: Translate This will translate translate_sentence from English to French.
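Before the reference implementation that follows, here is a quick hedged sketch of just those three rules (lower-case, look up ids, fall back to <UNK>); the toy vocabulary is invented purely for illustration, and the project's own version below is the one exercised by the unit tests.

def sentence_to_seq(sentence, vocab_to_int):
    unk = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unk) for word in sentence.lower().split()]

toy_vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4, '.': 5}
print(sentence_to_seq('He saw a YELLOW truck .', toy_vocab))   # [1, 2, 3, 0, 4, 5]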
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation view_sentence_range = (10, 20) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) source_id_text = [] target_id_text = [] for sentence in source_text.split('\n'): ids = [] for w in sentence.split(): ids.append(source_vocab_to_int[w]) source_id_text.append(ids) for sentence in target_text.split('\n'): ids = [] for w in sentence.split(): ids.append(target_vocab_to_int[w]) ids.append(target_vocab_to_int['<EOS>']) target_id_text.append(ids) return source_id_text, target_id_text DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation def model_inputs(): Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) inputs = tf.placeholder(tf.int32, shape = [None, None], name = "input") targets = tf.placeholder(tf.int32, shape = [None, None]) learning_rate = tf.placeholder(tf.float32) keep_prob = tf.placeholder(tf.float32, name = "keep_prob") target_seq_len = tf.placeholder(tf.int32, shape = [None], name = "target_sequence_length") max_target_len = tf.reduce_max(target_seq_len, name = "max_target_len") src_seq_len = tf.placeholder(tf.int32, shape = [None], name = "source_sequence_length") return inputs, targets, learning_rate, keep_prob, target_seq_len, max_target_len, src_seq_len DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Target sequence length placeholder named "target_sequence_length" with rank 1 Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0. 
Source sequence length placeholder named "source_sequence_length" with rank 1 Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) End of explanation def process_decoder_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data rev = tf.reverse(target_data, axis = [1]) # reverse each row to put last element first sliced = tf.slice(rev, [0, 1], [-1, -1]) # slice to strip the last element of each row (now in first position) off unrev = tf.reverse(sliced, axis = [1]) # reverse rows to restore original order go = tf.constant(target_vocab_to_int['<GO>'], dtype = tf.int32, shape = [batch_size, 1]) concat = tf.concat([go, unrev], axis = 1) return concat DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_encoding_input(process_decoder_input) Explanation: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. End of explanation from imp import reload reload(tests) # Taken from the github link about stacked LSTM cell above def lstm_cell(rnn_size): return tf.contrib.rnn.BasicLSTMCell(rnn_size) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) embedded_inputs = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) cells = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)]) rnn = tf.contrib.rnn.DropoutWrapper(cells, output_keep_prob = keep_prob) outputs, state = tf.nn.dynamic_rnn(rnn, embedded_inputs, dtype = tf.float32) return outputs, state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn() End of explanation def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id trainhelper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, trainhelper, 
encoder_state, output_layer = output_layer) outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished = True, maximum_iterations = max_summary_length) return outputs DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) Explanation: Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size]) helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer) outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished = True, maximum_iterations = max_target_sequence_length) return outputs DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) Explanation: Decoding - Inference Create inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) cells = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)]) rnn = tf.contrib.rnn.DropoutWrapper(cells, output_keep_prob = keep_prob) output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) with tf.variable_scope("decode"): training_decoder_output = decoding_layer_train(encoder_state, rnn, dec_embed_input, target_sequence_length, 
max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse = True): inference_decoder_output = decoding_layer_infer(encoder_state, rnn, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) enc_outputs, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) train_dec_output, inference_dec_output = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return train_dec_output, inference_dec_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) Explanation: Build the Neural Network Apply the functions you implemented above to: Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size). Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function. 
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function. End of explanation # Number of Epochs epochs = 10 # Batch Size batch_size = 256 # 32 # RNN Size rnn_size = 512 # Number of Layers num_layers = 1 # Embedding Size encoding_embedding_size = 256 # 16 decoding_embedding_size = 256 # 16 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.5 display_step = 100 Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability Set display_step to state how many steps between each debug output statement End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL def pad_sentence_batch(sentence_batch, pad_int): Pad sentences with <PAD> so that each sentence of a batch has the same length max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): Batch targets, sources, and the lengths of their sentences together for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths Explanation: Batch and pad the source and target sequences End of explanation DON'T MODIFY ANYTHING IN THIS CELL def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed 
data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() Explanation: Checkpoint End of explanation def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids sentence = sentence.lower() unknown_id = vocab_to_int['<UNK>'] ids = [vocab_to_int.get(w, unknown_id) for w in sentence.split()] return ids DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation translate_sentence = 'he saw a yellow old truck .' DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) Explanation: Translate This will translate translate_sentence from English to French. End of explanation
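One step in this notebook that is easy to misread is process_decoder_input. The NumPy toy below (not part of the project code; the ids are made up) shows the transformation it performs on a batch of target sequences: drop the final id of every row and prepend the <GO> id, so the decoder is fed the target shifted one step to the right.

import numpy as np

GO, EOS = 1, 2                                  # illustrative ids only
target_batch = np.array([[4, 5, 6, EOS],
                         [7, 8, 9, EOS]])

decoder_input = np.concatenate(
    [np.full((target_batch.shape[0], 1), GO),   # <GO> column in front
     target_batch[:, :-1]],                     # everything except the last id
    axis=1)

print(decoder_input)
# [[1 4 5 6]
#  [1 7 8 9]]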
12,882
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: 1. Write a function that returns the Pythagorean triplet (a, b, c) given any two of the arguments a, b or c. For example, triplet(a=3, c=5) will return (3, 4, 5). Step3: 2. The $n^\text{th}$ Catalan number is given by $$ C_n = \prod_{k=2}^n \frac{n+k}{k} $$ for $n \ge 0$. Write a function to calculate the $n^\text{th}$ Catalan number - it should take a single argument $n$ and return $C_n$. Use this function to generate a list of the first 10 odd Catalan numbers (i.e., n = 1, 3, 5, ...). Step4: 3. Rewrite the above function as a lambda function. This is tricky but introduces you to the functional style that is very useful in advanced programming. Some imports are done for you as a hint. Step6: 4. Write a class called Matrix that can be initialized with an iterable of numbers, together with the number of rows r and the number of columns c. That is, you should be able to create a matrix M by calling M = Matrix([1,2,3,4], 2, 2) (assume the matrix is row ordered). The Matrix class will have a single additional method called at that takes a tuple (r, c) and returns the value at row r and column c. For example, M.at(0,1) will return 2. Internally, the data should be stored as a list and the at method will use indexing to find the right value to return.
Python Code: import numpy as np

def triplet(a=None, b=None, c=None):
    '''
    Returns the Pythagorean triplet (a, b, c) given any two arguments.
    Assumes but does not check that two named arguments are called.
    Returns None if no triplet possible with given arguments.
    '''
    if a is None:
        q, r = divmod(np.sqrt(c**2 - b**2), 1)
        if r == 0:
            return int(q), b, c
    elif b is None:
        q, r = divmod(np.sqrt(c**2 - a**2), 1)
        if r == 0:
            return a, int(q), c
    else:
        q, r = divmod(np.sqrt(a**2 + b**2), 1)
        if r == 0:
            return a, b, int(q)

triplet(a=3, c=5)

triplet(a=3, b=5)

Explanation: 1. Write a function that returns the Pythagorean triplet (a, b, c) given any two of the arguments a, b or c. For example, triplet(a=3, c=5) will return (3, 4, 5).
End of explanation

def catalan(n):
    '''
    Returns the nth Catalan number.
    '''
    ans = 1
    for k in range(2, n+1):
        ans *= (n+k)/k
    return ans

[catalan(n) for n in range(1, 10, 2)]

Explanation: 2. The $n^\text{th}$ Catalan number is given by $$ C_n = \prod_{k=2}^n \frac{n+k}{k} $$ for $n \ge 0$. Write a function to calculate the $n^\text{th}$ Catalan number - it should take a single argument $n$ and return $C_n$. Use this function to generate a list of the first 10 odd Catalan numbers (i.e., n = 1, 3, 5, ...).
End of explanation

from functools import reduce
from operator import mul

catalan_L = lambda n: reduce(mul, ((n+k)/k for k in range(2, n+1)), 1)

[catalan_L(n) for n in range(1, 10, 2)]

Explanation: 3. Rewrite the above function as a lambda function. This is tricky but introduces you to the functional style that is very useful in advanced programming. Some imports are done for you as a hint.
End of explanation

class Matrix:
    '''
    Simple class to represent a matrix.
    '''
    def __init__(self, xs, rows, cols):
        self.xs = list(xs)
        self.rows = rows
        self.cols = cols

    def at(self, i, j):
        idx = i*self.cols + j
        return self.xs[idx]

M = Matrix([1,2,3,4], 2, 2)

M.at(0,1)

Explanation: 4. Write a class called Matrix that can be initialized with an iterable of numbers, together with the number of rows r and the number of columns c. That is, you should be able to create a matrix M by calling M = Matrix([1,2,3,4], 2, 2) (assume the matrix is row ordered). The Matrix class will have a single additional method called at that takes a tuple (r, c) and returns the value at row r and column c. For example, M.at(0,1) will return 2. Internally, the data should be stored as a list and the at method will use indexing to find the right value to return.
End of explanation
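As a sanity check on the product formula used above, the closed form $C_n = \binom{2n}{n} / (n+1)$ should give the same values. The hedged snippet below assumes the catalan function defined above is in scope and uses math.comb, which needs Python 3.8 or newer.

from math import comb            # Python 3.8+

def catalan_closed_form(n):
    return comb(2 * n, n) // (n + 1)

# round() absorbs the floating-point error accumulated by the running product
assert all(round(catalan(n)) == catalan_closed_form(n) for n in range(15))
print([catalan_closed_form(n) for n in range(1, 10, 2)])   # [1, 5, 42, 429, 4862]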
12,883
Given the following text description, write Python code to implement the functionality described below step by step Description: Outline Glossary Positional Astronomy Next Step1: Import section specific modules
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML HTML('../style/course.css') #apply general CSS Explanation: Outline Glossary Positional Astronomy Next: Equatorial Coordinates Import standard modules: End of explanation from IPython.display import HTML HTML('../style/code_toggle.html') Explanation: Import section specific modules: End of explanation
12,884
Given the following text description, write Python code to implement the functionality described below step by step Description: Memory-efficient embeddings for recommendation systems Author Step1: Prepare the data Download and process data Step2: Create train and eval data splits Step3: Define dataset metadata and hyperparameters Step4: Train and evaluate the model Step5: Experiment 1 Step6: Implement the baseline model Step7: Notice that the number of trainable parameters is 623,744 Step8: Experiment 2 Step9: Implement Mixed Dimension embedding as a layer In the mixed dimension embedding technique, we train embedding vectors with full dimensions for the frequently queried items, while train embedding vectors with reduced dimensions for less frequent items, plus a projection weights matrix to bring low dimension embeddings to the full dimensions. More precisely, we define blocks of items of similar frequencies. For each block, a block_vocab_size X block_embedding_dim embedding table and block_embedding_dim X full_embedding_dim projection weights matrix are created. Note that, if block_embedding_dim equals full_embedding_dim, the projection weights matrix becomes an identity matrix. Embeddings for a given batch of item indices are generated via the following steps Step10: Implement the memory-efficient model In this experiment, we are going to use the Quotient-Remainder technique to reduce the size of the user embeddings, and the Mixed Dimension technique to reduce the size of the movie embeddings. While in the paper, an alpha-power rule is used to determined the dimensions of the embedding of each block, we simply set the number of blocks and the dimensions of embeddings of each block based on the histogram visualization of movies popularity. Step11: You can see that we can group the movies into three blocks, and assign them 64, 32, and 16 embedding dimensions, respectively. Feel free to experiment with different number of blocks and dimensions. Step12: Notice that the number of trainable parameters is 117,968, which is more than 5x less than the number of parameters in the baseline model.
Python Code: import os import math from zipfile import ZipFile from urllib.request import urlretrieve import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.layers import StringLookup import matplotlib.pyplot as plt Explanation: Memory-efficient embeddings for recommendation systems Author: Khalid Salama<br> Date created: 2021/02/15<br> Last modified: 2021/02/15<br> Description: Using compositional & mixed-dimension embeddings for memory-efficient recommendation models. Introduction This example demonstrates two techniques for building memory-efficient recommendation models by reducing the size of the embedding tables, without sacrificing model effectiveness: Quotient-remainder trick, by Hao-Jun Michael Shi et al., which reduces the number of embedding vectors to store, yet produces unique embedding vector for each item without explicit definition. Mixed Dimension embeddings, by Antonio Ginart et al., which stores embedding vectors with mixed dimensions, where less popular items have reduced dimension embeddings. We use the 1M version of the Movielens dataset. The dataset includes around 1 million ratings from 6,000 users on 4,000 movies. Setup End of explanation urlretrieve("http://files.grouplens.org/datasets/movielens/ml-1m.zip", "movielens.zip") ZipFile("movielens.zip", "r").extractall() ratings_data = pd.read_csv( "ml-1m/ratings.dat", sep="::", names=["user_id", "movie_id", "rating", "unix_timestamp"], ) ratings_data["movie_id"] = ratings_data["movie_id"].apply(lambda x: f"movie_{x}") ratings_data["user_id"] = ratings_data["user_id"].apply(lambda x: f"user_{x}") ratings_data["rating"] = ratings_data["rating"].apply(lambda x: float(x)) del ratings_data["unix_timestamp"] print(f"Number of users: {len(ratings_data.user_id.unique())}") print(f"Number of movies: {len(ratings_data.movie_id.unique())}") print(f"Number of ratings: {len(ratings_data.index)}") Explanation: Prepare the data Download and process data End of explanation random_selection = np.random.rand(len(ratings_data.index)) <= 0.85 train_data = ratings_data[random_selection] eval_data = ratings_data[~random_selection] train_data.to_csv("train_data.csv", index=False, sep="|", header=False) eval_data.to_csv("eval_data.csv", index=False, sep="|", header=False) print(f"Train data split: {len(train_data.index)}") print(f"Eval data split: {len(eval_data.index)}") print("Train and eval data files are saved.") Explanation: Create train and eval data splits End of explanation csv_header = list(ratings_data.columns) user_vocabulary = list(ratings_data.user_id.unique()) movie_vocabulary = list(ratings_data.movie_id.unique()) target_feature_name = "rating" learning_rate = 0.001 batch_size = 128 num_epochs = 3 base_embedding_dim = 64 Explanation: Define dataset metadata and hyperparameters End of explanation def get_dataset_from_csv(csv_file_path, batch_size=128, shuffle=True): return tf.data.experimental.make_csv_dataset( csv_file_path, batch_size=batch_size, column_names=csv_header, label_name=target_feature_name, num_epochs=1, header=False, field_delim="|", shuffle=shuffle, ) def run_experiment(model): # Compile the model. model.compile( optimizer=keras.optimizers.Adam(learning_rate), loss=tf.keras.losses.MeanSquaredError(), metrics=[keras.metrics.MeanAbsoluteError(name="mae")], ) # Read the training data. train_dataset = get_dataset_from_csv("train_data.csv", batch_size) # Read the test data. 
eval_dataset = get_dataset_from_csv("eval_data.csv", batch_size, shuffle=False) # Fit the model with the training data. history = model.fit(train_dataset, epochs=num_epochs, validation_data=eval_dataset,) return history Explanation: Train and evaluate the model End of explanation def embedding_encoder(vocabulary, embedding_dim, num_oov_indices=0, name=None): return keras.Sequential( [ StringLookup( vocabulary=vocabulary, mask_token=None, num_oov_indices=num_oov_indices ), layers.Embedding( input_dim=len(vocabulary) + num_oov_indices, output_dim=embedding_dim ), ], name=f"{name}_embedding" if name else None, ) Explanation: Experiment 1: baseline collaborative filtering model Implement embedding encoder End of explanation def create_baseline_model(): # Receive the user as an input. user_input = layers.Input(name="user_id", shape=(), dtype=tf.string) # Get user embedding. user_embedding = embedding_encoder( vocabulary=user_vocabulary, embedding_dim=base_embedding_dim, name="user" )(user_input) # Receive the movie as an input. movie_input = layers.Input(name="movie_id", shape=(), dtype=tf.string) # Get embedding. movie_embedding = embedding_encoder( vocabulary=movie_vocabulary, embedding_dim=base_embedding_dim, name="movie" )(movie_input) # Compute dot product similarity between user and movie embeddings. logits = layers.Dot(axes=1, name="dot_similarity")( [user_embedding, movie_embedding] ) # Convert to rating scale. prediction = keras.activations.sigmoid(logits) * 5 # Create the model. model = keras.Model( inputs=[user_input, movie_input], outputs=prediction, name="baseline_model" ) return model baseline_model = create_baseline_model() baseline_model.summary() Explanation: Implement the baseline model End of explanation history = run_experiment(baseline_model) plt.plot(history.history["loss"]) plt.plot(history.history["val_loss"]) plt.title("model loss") plt.ylabel("loss") plt.xlabel("epoch") plt.legend(["train", "eval"], loc="upper left") plt.show() Explanation: Notice that the number of trainable parameters is 623,744 End of explanation class QREmbedding(keras.layers.Layer): def __init__(self, vocabulary, embedding_dim, num_buckets, name=None): super(QREmbedding, self).__init__(name=name) self.num_buckets = num_buckets self.index_lookup = StringLookup( vocabulary=vocabulary, mask_token=None, num_oov_indices=0 ) self.q_embeddings = layers.Embedding(num_buckets, embedding_dim,) self.r_embeddings = layers.Embedding(num_buckets, embedding_dim,) def call(self, inputs): # Get the item index. embedding_index = self.index_lookup(inputs) # Get the quotient index. quotient_index = tf.math.floordiv(embedding_index, self.num_buckets) # Get the reminder index. remainder_index = tf.math.floormod(embedding_index, self.num_buckets) # Lookup the quotient_embedding using the quotient_index. quotient_embedding = self.q_embeddings(quotient_index) # Lookup the remainder_embedding using the remainder_index. remainder_embedding = self.r_embeddings(remainder_index) # Use multiplication as a combiner operation return quotient_embedding * remainder_embedding Explanation: Experiment 2: memory-efficient model Implement Quotient-Remainder embedding as a layer The Quotient-Remainder technique works as follows. For a set of vocabulary and embedding size embedding_dim, instead of creating a vocabulary_size X embedding_dim embedding table, we create two num_buckets X embedding_dim embedding tables, where num_buckets is much smaller than vocabulary_size. 
An embedding for a given item index is generated via the following steps: Compute the quotient_index as index // num_buckets. Compute the remainder_index as index % num_buckets. Lookup quotient_embedding from the first embedding table using quotient_index. Lookup remainder_embedding from the second embedding table using remainder_index. Return quotient_embedding * remainder_embedding. This technique not only reduces the number of embedding vectors needs to be stored and trained, but also generates a unique embedding vector for each item of size embedding_dim. Note that q_embedding and r_embedding can be combined using other operations, like Add and Concatenate. End of explanation class MDEmbedding(keras.layers.Layer): def __init__( self, blocks_vocabulary, blocks_embedding_dims, base_embedding_dim, name=None ): super(MDEmbedding, self).__init__(name=name) self.num_blocks = len(blocks_vocabulary) # Create vocab to block lookup. keys = [] values = [] for block_idx, block_vocab in enumerate(blocks_vocabulary): keys.extend(block_vocab) values.extend([block_idx] * len(block_vocab)) self.vocab_to_block = tf.lookup.StaticHashTable( tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1 ) self.block_embedding_encoders = [] self.block_embedding_projectors = [] # Create block embedding encoders and projectors. for idx in range(self.num_blocks): vocabulary = blocks_vocabulary[idx] embedding_dim = blocks_embedding_dims[idx] block_embedding_encoder = embedding_encoder( vocabulary, embedding_dim, num_oov_indices=1 ) self.block_embedding_encoders.append(block_embedding_encoder) if embedding_dim == base_embedding_dim: self.block_embedding_projectors.append(layers.Lambda(lambda x: x)) else: self.block_embedding_projectors.append( layers.Dense(units=base_embedding_dim) ) def call(self, inputs): # Get block index for each input item. block_indicies = self.vocab_to_block.lookup(inputs) # Initialize output embeddings to zeros. embeddings = tf.zeros(shape=(tf.shape(inputs)[0], base_embedding_dim)) # Generate embeddings from blocks. for idx in range(self.num_blocks): # Lookup embeddings from the current block. block_embeddings = self.block_embedding_encoders[idx](inputs) # Project embeddings to base_embedding_dim. block_embeddings = self.block_embedding_projectors[idx](block_embeddings) # Create a mask to filter out embeddings of items that do not belong to the current block. mask = tf.expand_dims(tf.cast(block_indicies == idx, tf.dtypes.float32), 1) # Set the embeddings for the items not belonging to the current block to zeros. block_embeddings = block_embeddings * mask # Add the block embeddings to the final embeddings. embeddings += block_embeddings return embeddings Explanation: Implement Mixed Dimension embedding as a layer In the mixed dimension embedding technique, we train embedding vectors with full dimensions for the frequently queried items, while train embedding vectors with reduced dimensions for less frequent items, plus a projection weights matrix to bring low dimension embeddings to the full dimensions. More precisely, we define blocks of items of similar frequencies. For each block, a block_vocab_size X block_embedding_dim embedding table and block_embedding_dim X full_embedding_dim projection weights matrix are created. Note that, if block_embedding_dim equals full_embedding_dim, the projection weights matrix becomes an identity matrix. 
Embeddings for a given batch of item indices are generated via the following steps: For each block, lookup the block_embedding_dim embedding vectors using indices, and project them to the full_embedding_dim. If an item index does not belong to a given block, an out-of-vocabulary embedding is returned. Each block will return a batch_size X full_embedding_dim tensor. A mask is applied to the embeddings returned from each block in order to convert the out-of-vocabulary embeddings to vector of zeros. That is, for each item in the batch, a single non-zero embedding vector is returned from the all block embeddings. Embeddings retrieved from the blocks are combined using sum to produce the final batch_size X full_embedding_dim tensor. End of explanation movie_frequencies = ratings_data["movie_id"].value_counts() movie_frequencies.hist(bins=10) Explanation: Implement the memory-efficient model In this experiment, we are going to use the Quotient-Remainder technique to reduce the size of the user embeddings, and the Mixed Dimension technique to reduce the size of the movie embeddings. While in the paper, an alpha-power rule is used to determined the dimensions of the embedding of each block, we simply set the number of blocks and the dimensions of embeddings of each block based on the histogram visualization of movies popularity. End of explanation sorted_movie_vocabulary = list(movie_frequencies.keys()) movie_blocks_vocabulary = [ sorted_movie_vocabulary[:400], # high popularity movies block sorted_movie_vocabulary[400:1700], # normal popularity movies block sorted_movie_vocabulary[1700:], # low popularity movies block ] movie_blocks_embedding_dims = [64, 32, 16] user_embedding_num_buckets = len(user_vocabulary) // 50 def create_memory_efficient_model(): # Take the user as an input. user_input = layers.Input(name="user_id", shape=(), dtype=tf.string) # Get user embedding. user_embedding = QREmbedding( vocabulary=user_vocabulary, embedding_dim=base_embedding_dim, num_buckets=user_embedding_num_buckets, name="user_embedding", )(user_input) # Take the movie as an input. movie_input = layers.Input(name="movie_id", shape=(), dtype=tf.string) # Get embedding. movie_embedding = MDEmbedding( blocks_vocabulary=movie_blocks_vocabulary, blocks_embedding_dims=movie_blocks_embedding_dims, base_embedding_dim=base_embedding_dim, name="movie_embedding", )(movie_input) # Compute dot product similarity between user and movie embeddings. logits = layers.Dot(axes=1, name="dot_similarity")( [user_embedding, movie_embedding] ) # Convert to rating scale. prediction = keras.activations.sigmoid(logits) * 5 # Create the model. model = keras.Model( inputs=[user_input, movie_input], outputs=prediction, name="baseline_model" ) return model memory_efficient_model = create_memory_efficient_model() memory_efficient_model.summary() Explanation: You can see that we can group the movies into three blocks, and assign them 64, 32, and 16 embedding dimensions, respectively. Feel free to experiment with different number of blocks and dimensions. End of explanation history = run_experiment(memory_efficient_model) plt.plot(history.history["loss"]) plt.plot(history.history["val_loss"]) plt.title("model loss") plt.ylabel("loss") plt.xlabel("epoch") plt.legend(["train", "eval"], loc="upper left") plt.show() Explanation: Notice that the number of trainable parameters is 117,968, which is more than 5x less than the number of parameters in the baseline model. End of explanation
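A framework-free way to see why the quotient-remainder scheme still gives every item its own vector: with num_buckets = 3, nine items map to nine distinct (quotient, remainder) pairs while only two tables of three rows each are stored. This is a minimal sketch of the indexing described above, independent of the Keras layer.

num_buckets = 3
pairs = [(idx // num_buckets, idx % num_buckets) for idx in range(9)]
print(pairs)                   # (0, 0), (0, 1), (0, 2), (1, 0), ..., (2, 2)
assert len(set(pairs)) == 9    # every item gets a distinct pair
# 2 tables * 3 rows = 6 stored vectors serve all 9 items once the two
# looked-up vectors are combined (here by element-wise multiplication).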
12,885
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Introduction</a></span></li><li><span><a href="#Setup" data-toc-modified-id="Setup-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Debug" data-toc-modified-id="Setup---Debug-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Setup - Debug</a></span></li><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Setup - Imports</a></span></li><li><span><a href="#Setup---working-folder-paths" data-toc-modified-id="Setup---working-folder-paths-2.3"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>Setup - working folder paths</a></span></li><li><span><a href="#Setup---logging" data-toc-modified-id="Setup---logging-2.4"><span class="toc-item-num">2.4&nbsp;&nbsp;</span>Setup - logging</a></span></li><li><span><a href="#Setup---virtualenv-jupyter-kernel" data-toc-modified-id="Setup---virtualenv-jupyter-kernel-2.5"><span class="toc-item-num">2.5&nbsp;&nbsp;</span>Setup - virtualenv jupyter kernel</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-2.6"><span class="toc-item-num">2.6&nbsp;&nbsp;</span>Setup - Initialize Django</a></span></li><li><span><a href="#Setup---Initialize-LoggingHelper" data-toc-modified-id="Setup---Initialize-LoggingHelper-2.7"><span class="toc-item-num">2.7&nbsp;&nbsp;</span>Setup - Initialize LoggingHelper</a></span></li><li><span><a href="#Setup---initialize-ProquestHNPNewspaper" data-toc-modified-id="Setup---initialize-ProquestHNPNewspaper-2.8"><span class="toc-item-num">2.8&nbsp;&nbsp;</span>Setup - initialize ProquestHNPNewspaper</a></span><ul class="toc-item"><li><span><a href="#load-from-database" data-toc-modified-id="load-from-database-2.8.1"><span class="toc-item-num">2.8.1&nbsp;&nbsp;</span>load from database</a></span></li><li><span><a href="#set-up-manually" data-toc-modified-id="set-up-manually-2.8.2"><span class="toc-item-num">2.8.2&nbsp;&nbsp;</span>set up manually</a></span></li></ul></li></ul></li><li><span><a href="#Find-articles-to-be-loaded" data-toc-modified-id="Find-articles-to-be-loaded-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Find articles to be loaded</a></span><ul class="toc-item"><li><span><a href="#Uncompress-files" data-toc-modified-id="Uncompress-files-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Uncompress files</a></span></li><li><span><a href="#Work-with-uncompressed-files" data-toc-modified-id="Work-with-uncompressed-files-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Work with uncompressed files</a></span></li><li><span><a href="#parse-and-load-XML-files" data-toc-modified-id="parse-and-load-XML-files-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>parse and load XML files</a></span></li><li><span><a href="#build-list-of-all-ObjectTypes" data-toc-modified-id="build-list-of-all-ObjectTypes-3.4"><span class="toc-item-num">3.4&nbsp;&nbsp;</span>build list of all ObjectTypes</a></span></li><li><span><a href="#map-files-to-types" data-toc-modified-id="map-files-to-types-3.5"><span class="toc-item-num">3.5&nbsp;&nbsp;</span>map files to types</a></span><ul 
class="toc-item"><li><span><a href="#explore-all-known-object-types" data-toc-modified-id="explore-all-known-object-types-3.5.1"><span class="toc-item-num">3.5.1&nbsp;&nbsp;</span>explore all known object types</a></span></li><li><span><a href="#files-in-archive-CSM_20170929191926_00001---1994" data-toc-modified-id="files-in-archive-CSM_20170929191926_00001---1994-3.5.2"><span class="toc-item-num">3.5.2&nbsp;&nbsp;</span>files in archive CSM_20170929191926_00001 - 1994</a></span></li></ul></li></ul></li><li><span><a href="#TODO" data-toc-modified-id="TODO-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>TODO</a></span></li></ul></div> Introduction Back to Table of Contents This is a notebook that expands on the OpenCalais code in the file article_coding.py, also in this folder. It includes more sections on selecting publications you want to submit to OpenCalais as an example. It is intended to be copied and re-used. Setup Back to Table of Contents Setup - Debug Back to Table of Contents Step1: Setup - Imports Back to Table of Contents Step2: Setup - working folder paths Back to Table of Contents What data are we looking at? Step3: Setup - logging Back to Table of Contents configure logging for this notebook's kernel (If you do not run this cell, you'll get the django application's logging configuration. Step4: Setup - virtualenv jupyter kernel Back to Table of Contents If you are using a virtualenv, make sure that you Step5: Setup - Initialize LoggingHelper Back to Table of Contents Create a LoggingHelper instance to use to log debug and also print at the same time. Preconditions Step6: Setup - initialize ProquestHNPNewspaper Back to Table of Contents Create an initialize an instance of ProquestHNPNewspaper for this paper. load from database Back to Table of Contents Step7: set up manually Back to Table of Contents Step8: If desired, add to database. Step9: Find articles to be loaded Back to Table of Contents Specify which folder of XML files should be loaded into system, then process all files within the folder. The compressed archives from proquest_hnp just contain publication XML files, no containing folder. To process Step10: For each *.zip file in the paper's source folder Step11: Work with uncompressed files Back to Table of Contents Change working directories to the uncompressed paper path. Step12: parse and load XML files Back to Table of Contents Load one of the files into memory and see what we can do with it. Beautiful Soup? Looks like the root element is "Record", then the high-level type of the article is "ObjectType". ObjectType values Step13: Processing 5752 files in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/BostonGlobe/BG_20171002210239_00001 ----&gt; XML file count Step14: Example output Step15: explore all known object types Back to Table of Contents Look at all known object types to see which contain actual news content. Step16: files in archive CSM_20170929191926_00001 - 1994 Back to Table of Contents Archive details
Python Code: debug_flag = False Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Introduction</a></span></li><li><span><a href="#Setup" data-toc-modified-id="Setup-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Debug" data-toc-modified-id="Setup---Debug-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Setup - Debug</a></span></li><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Setup - Imports</a></span></li><li><span><a href="#Setup---working-folder-paths" data-toc-modified-id="Setup---working-folder-paths-2.3"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>Setup - working folder paths</a></span></li><li><span><a href="#Setup---logging" data-toc-modified-id="Setup---logging-2.4"><span class="toc-item-num">2.4&nbsp;&nbsp;</span>Setup - logging</a></span></li><li><span><a href="#Setup---virtualenv-jupyter-kernel" data-toc-modified-id="Setup---virtualenv-jupyter-kernel-2.5"><span class="toc-item-num">2.5&nbsp;&nbsp;</span>Setup - virtualenv jupyter kernel</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-2.6"><span class="toc-item-num">2.6&nbsp;&nbsp;</span>Setup - Initialize Django</a></span></li><li><span><a href="#Setup---Initialize-LoggingHelper" data-toc-modified-id="Setup---Initialize-LoggingHelper-2.7"><span class="toc-item-num">2.7&nbsp;&nbsp;</span>Setup - Initialize LoggingHelper</a></span></li><li><span><a href="#Setup---initialize-ProquestHNPNewspaper" data-toc-modified-id="Setup---initialize-ProquestHNPNewspaper-2.8"><span class="toc-item-num">2.8&nbsp;&nbsp;</span>Setup - initialize ProquestHNPNewspaper</a></span><ul class="toc-item"><li><span><a href="#load-from-database" data-toc-modified-id="load-from-database-2.8.1"><span class="toc-item-num">2.8.1&nbsp;&nbsp;</span>load from database</a></span></li><li><span><a href="#set-up-manually" data-toc-modified-id="set-up-manually-2.8.2"><span class="toc-item-num">2.8.2&nbsp;&nbsp;</span>set up manually</a></span></li></ul></li></ul></li><li><span><a href="#Find-articles-to-be-loaded" data-toc-modified-id="Find-articles-to-be-loaded-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Find articles to be loaded</a></span><ul class="toc-item"><li><span><a href="#Uncompress-files" data-toc-modified-id="Uncompress-files-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Uncompress files</a></span></li><li><span><a href="#Work-with-uncompressed-files" data-toc-modified-id="Work-with-uncompressed-files-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Work with uncompressed files</a></span></li><li><span><a href="#parse-and-load-XML-files" data-toc-modified-id="parse-and-load-XML-files-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>parse and load XML files</a></span></li><li><span><a href="#build-list-of-all-ObjectTypes" data-toc-modified-id="build-list-of-all-ObjectTypes-3.4"><span class="toc-item-num">3.4&nbsp;&nbsp;</span>build list of all ObjectTypes</a></span></li><li><span><a href="#map-files-to-types" data-toc-modified-id="map-files-to-types-3.5"><span class="toc-item-num">3.5&nbsp;&nbsp;</span>map files to types</a></span><ul class="toc-item"><li><span><a href="#explore-all-known-object-types" 
data-toc-modified-id="explore-all-known-object-types-3.5.1"><span class="toc-item-num">3.5.1&nbsp;&nbsp;</span>explore all known object types</a></span></li><li><span><a href="#files-in-archive-CSM_20170929191926_00001---1994" data-toc-modified-id="files-in-archive-CSM_20170929191926_00001---1994-3.5.2"><span class="toc-item-num">3.5.2&nbsp;&nbsp;</span>files in archive CSM_20170929191926_00001 - 1994</a></span></li></ul></li></ul></li><li><span><a href="#TODO" data-toc-modified-id="TODO-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>TODO</a></span></li></ul></div> Introduction Back to Table of Contents This is a notebook that expands on the OpenCalais code in the file article_coding.py, also in this folder. It includes more sections on selecting publications you want to submit to OpenCalais as an example. It is intended to be copied and re-used. Setup Back to Table of Contents Setup - Debug Back to Table of Contents End of explanation import datetime import glob import logging import lxml import os import six import xml import xmltodict import zipfile Explanation: Setup - Imports Back to Table of Contents End of explanation # paper identifier paper_identifier = "ChristianScienceMonitor" archive_identifier = None # source source_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/proquest_hnp/data" source_paper_path = "{}/{}".format( source_paper_folder, paper_identifier ) # uncompressed uncompressed_paper_folder = "/mnt/hgfs/projects/phd/proquest_hnp/uncompressed" uncompressed_paper_path = "{}/{}".format( uncompressed_paper_folder, paper_identifier ) # make sure an identifier is set before you make a path here. if ( ( archive_identifier is not None ) and ( archive_identifier != "" ) ): # identifier is set. source_archive_file = "{}.zip".format( archive_identifier ) source_archive_path = "{}/{}".format( source_paper_path, source_archive_file ) uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, archive_identifier ) #-- END check to see if archive_identifier present. --# %pwd # current working folder current_working_folder = "/home/jonathanmorgan/work/django/research/work/phd_work/data/article_loading/proquest_hnp/{}".format( paper_identifier ) current_datetime = datetime.datetime.now() current_date_string = current_datetime.strftime( "%Y-%m-%d-%H-%M-%S" ) Explanation: Setup - working folder paths Back to Table of Contents What data are we looking at? End of explanation logging_file_name = "{}/research-data_load-{}-{}.log.txt".format( current_working_folder, paper_identifier, current_date_string ) logging.basicConfig( level = logging.DEBUG, format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s', filename = logging_file_name, filemode = 'w' # set to 'a' if you want to append, rather than overwrite each time. ) Explanation: Setup - logging Back to Table of Contents configure logging for this notebook's kernel (If you do not run this cell, you'll get the django application's logging configuration. End of explanation # init django django_init_folder = "/home/jonathanmorgan/work/django/research/work/phd_work" django_init_path = "django_init.py" if( ( django_init_folder is not None ) and ( django_init_folder != "" ) ): # add folder to front of path. django_init_path = "{}/{}".format( django_init_folder, django_init_path ) #-- END check to see if django_init folder. 
--# %run $django_init_path # context_text imports from context_text.article_coding.article_coding import ArticleCoder from context_text.article_coding.article_coding import ArticleCoding from context_text.article_coding.open_calais_v2.open_calais_v2_article_coder import OpenCalaisV2ArticleCoder from context_text.collectors.newsbank.newspapers.GRPB import GRPB from context_text.collectors.newsbank.newspapers.DTNB import DTNB from context_text.models import Article from context_text.models import Article_Subject from context_text.models import Newspaper from context_text.shared.context_text_base import ContextTextBase # context_text_proquest_hnp from context_text_proquest_hnp.proquest_hnp_newspaper_helper import ProquestHNPNewspaperHelper Explanation: Setup - virtualenv jupyter kernel Back to Table of Contents If you are using a virtualenv, make sure that you: have installed your virtualenv as a kernel. choose the kernel for your virtualenv as the kernel for your notebook (Kernel --> Change kernel). Since I use a virtualenv, need to get that activated somehow inside this notebook. One option is to run ../dev/wsgi.py in this notebook, to configure the python environment manually as if you had activated the sourcenet virtualenv. To do this, you'd make a code cell that contains: %run ../dev/wsgi.py This is sketchy, however, because of the changes it makes to your Python environment within the context of whatever your current kernel is. I'd worry about collisions with the actual Python 3 kernel. Better, one can install their virtualenv as a separate kernel. Steps: activate your virtualenv: workon research in your virtualenv, install the package ipykernel. pip install ipykernel use the ipykernel python program to install the current environment as a kernel: python -m ipykernel install --user --name &lt;env_name&gt; --display-name "&lt;display_name&gt;" sourcenet example: python -m ipykernel install --user --name sourcenet --display-name "research (Python 3)" More details: http://ipython.readthedocs.io/en/stable/install/kernel_install.html Setup - Initialize Django Back to Table of Contents First, initialize my dev django project, so I can run code in this notebook that references my django models and can talk to the database using my project's settings. End of explanation # python_utilities from python_utilities.logging.logging_helper import LoggingHelper # init my_logging_helper = LoggingHelper() my_logging_helper.set_logger_name( "proquest_hnp-article-loading-{}".format( paper_identifier ) ) log_message = None Explanation: Setup - Initialize LoggingHelper Back to Table of Contents Create a LoggingHelper instance to use to log debug and also print at the same time. Preconditions: Must be run after Django is initialized, since python_utilities is in the django path. End of explanation my_paper = ProquestHNPNewspaperHelper() paper_instance = my_paper.initialize_from_database( paper_identifier ) my_paper.source_all_papers_folder = source_paper_folder my_paper.destination_all_papers_folder = uncompressed_paper_folder print( my_paper ) print( paper_instance ) Explanation: Setup - initialize ProquestHNPNewspaper Back to Table of Contents Create an initialize an instance of ProquestHNPNewspaper for this paper. 
load from database Back to Table of Contents End of explanation my_paper = ProquestHNPNewspaperHelper() my_paper.paper_identifier = paper_identifier my_paper.source_all_papers_folder = source_paper_folder my_paper.source_paper_path = source_paper_path my_paper.destination_all_papers_folder = uncompressed_paper_folder my_paper.destination_paper_path = uncompressed_paper_path my_paper.paper_start_year = 1908 my_paper.paper_end_year = 1994 my_newspaper = Newspaper.objects.get( id = 8 ) my_paper.newspaper = my_newspaper Explanation: set up manually Back to Table of Contents End of explanation phnp_newspaper_instance = my_paper.create_PHNP_newspaper() print( phnp_newspaper_instance ) Explanation: If desired, add to database. End of explanation # create folder to hold the results of decompressing paper's zip files. did_uncomp_paper_folder_exist = my_paper.make_dest_paper_folder() Explanation: Find articles to be loaded Back to Table of Contents Specify which folder of XML files should be loaded into system, then process all files within the folder. The compressed archives from proquest_hnp just contain publication XML files, no containing folder. To process: uncompresed paper folder ( &lt;paper_folder&gt; ) - make a folder in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed for the paper whose data you are working with, named the same as the paper's folder in /mnt/hgfs/projects/phd/proquest_hnp/proquest_hnp/data. for example, for the Boston Globe, name it "BostonGlobe". uncompressed archive folder ( &lt;archive_folder&gt; ) - inside a given paper's folder in uncompressed, for each archive file, create a folder named the same as the archive file, but with no ".zip" at the end. For example, for the file "BG_20171002210239_00001.zip", make a folder named "BG_20171002210239_00001". path should be "&lt;paper_folder&gt;/&lt;archive_name_no_zip&gt;. unzip the archive into this folder: unzip &lt;path_to_zip&gt; -d &lt;archive_folder&gt; Uncompress files Back to Table of Contents See if the uncompressed paper folder exists. If not, set flag and create it. End of explanation # decompress the files my_paper.uncompress_paper_zip_files() Explanation: For each *.zip file in the paper's source folder: parse file name from path returned by glob. parse the part before ".zip" from the file name. This is referred to subsequently as the "archive identifier". check if folder named the same as the "archive identifier" is present. If no: create it. then, uncompress the archive into it. If yes: output a message. Don't want to uncompress if it was already uncompressed once. End of explanation %cd $uncompressed_paper_path %ls Explanation: Work with uncompressed files Back to Table of Contents Change working directories to the uncompressed paper path. End of explanation # loop over files in the current archive folder path. object_type_to_count_map = my_paper.process_archive_object_types( uncompressed_archive_path ) Explanation: parse and load XML files Back to Table of Contents Load one of the files into memory and see what we can do with it. Beautiful Soup? Looks like the root element is "Record", then the high-level type of the article is "ObjectType". ObjectType values: Advertisement ... 
Good options for XML parser: lxml.etree - https://stackoverflow.com/questions/12290091/reading-xml-file-and-fetching-its-attributes-value-in-python xmltodict - https://docs.python-guide.org/scenarios/xml/ beautifulsoup using lxml End of explanation xml_folder_list = glob.glob( "{}/*".format( uncompressed_paper_path ) ) print( "folder_list: {}".format( xml_folder_list ) ) # build map of all object types for a paper to the overall counts of each paper_object_type_to_count_map = my_paper.process_paper_object_types() Explanation: Processing 5752 files in /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/BostonGlobe/BG_20171002210239_00001 ----&gt; XML file count: 5752 Counters: - Processed 5752 files - No Record: 0 - No ObjectType: 0 - No ObjectType value: 0 ObjectType values and occurrence counts: - A|d|v|e|r|t|i|s|e|m|e|n|t: 1902 - Article|Feature: 1792 - N|e|w|s: 53 - Commentary|Editorial: 36 - G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n: 488 - S|t|o|c|k| |Q|u|o|t|e: 185 - Advertisement|Classified Advertisement: 413 - E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c: 31 - Correspondence|Letter to the Editor: 119 - Front Matter|Table of Contents: 193 - O|b|i|t|u|a|r|y: 72 - F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y: 107 - I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h: 84 - Marriage Announcement|News: 6 - I|l|l|u|s|t|r|a|t|i|o|n: 91 - R|e|v|i|e|w: 133 - C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t: 30 - News|Legal Notice: 17 build list of all ObjectTypes Back to Table of Contents Loop over all folders in the paper path. For each folder, grab all files in the folder. For each file, parse XML, then get the ObjectType value and if it isn't already in map of obect types to counts, add it. Increment count. From command line, in the uncompressed BostonGlobe folder: find . -type f -iname "*.xml" | wc -l resulted in 11,374,500 articles. That is quite a few. End of explanation news_object_type_list = [] news_object_type_list.append( 'Article|Feature' ) news_object_type_list.append( 'Feature|Article' ) news_object_type_list.append( 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y' ) Explanation: Example output: XML file count: 5752 Counters: - Processed 5752 files - No Record: 0 - No ObjectType: 0 - No ObjectType value: 0 ObjectType values and occurrence counts: - A|d|v|e|r|t|i|s|e|m|e|n|t: 2114224 - Feature|Article: 5271887 - I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h: 249942 - O|b|i|t|u|a|r|y: 625143 - G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n: 1083164 - S|t|o|c|k| |Q|u|o|t|e: 202776 - N|e|w|s: 140274 - I|l|l|u|s|t|r|a|t|i|o|n: 106925 - F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y: 386421 - E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c: 78993 - Editorial|Commentary: 156342 - C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t: 68356 - Classified Advertisement|Advertisement: 291533 - R|e|v|i|e|w: 86889 - Table of Contents|Front Matter: 69798 - Letter to the Editor|Correspondence: 202071 - News|Legal Notice: 24053 - News|Marriage Announcement: 41314 - B|i|r|t|h| |N|o|t|i|c|e: 926 - News|Military/War News: 3 - U|n|d|e|f|i|n|e|d: 5 - Article|Feature: 137526 - Front Matter|Table of Contents: 11195 - Commentary|Editorial: 3386 - Marriage Announcement|News: 683 - Correspondence|Letter to the Editor: 7479 - Legal Notice|News: 1029 - Advertisement|Classified Advertisement: 12163 map files to types Back to Table of Contents Choose a directory, then loop over the files in the directory to build a map of types to lists of file names. 
End of explanation # get list of all object types master_object_type_list = my_paper.get_all_object_types() print( "Object Types: {}".format( master_object_type_list ) ) # directory to work in. uncompressed_archive_folder = "CSM_20170929191926_00001" uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder ) print( 'Uncompressed archive folder: {}'.format( uncompressed_archive_path ) ) # build map of file types to lists of files of that type in specified folder. object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path ) # which types do we want to preview? #types_to_output = news_object_type_list types_to_output = [ "Advertisement|Classified Advertisement" ] types_to_output = [ "A|d|v|e|r|t|i|s|e|m|e|n|t" ] types_to_output = [ 'Advertisement|Classified Advertisement' ] types_to_output = [ 'Article|Feature' ] types_to_output = [ 'B|i|r|t|h| |N|o|t|i|c|e' ] types_to_output = [ 'Classified Advertisement|Advertisement' ] types_to_output = [ 'Commentary|Editorial' ] types_to_output = [ 'Correspondence|Letter to the Editor' ] types_to_output = [ 'C|r|e|d|i|t|/|A|c|k|n|o|w|l|e|d|g|e|m|e|n|t' ] types_to_output = [ 'E|d|i|t|o|r|i|a|l| |C|a|r|t|o|o|n|/|C|o|m|i|c' ] types_to_output = [ 'Editorial|Commentary' ] types_to_output = [ 'Feature|Article' ] types_to_output = [ 'Front Matter|Table of Contents' ] types_to_output = [ 'F|r|o|n|t| |P|a|g|e|/|C|o|v|e|r| |S|t|o|r|y' ] types_to_output = [ 'G|e|n|e|r|a|l| |I|n|f|o|r|m|a|t|i|o|n' ] types_to_output = [ 'I|l|l|u|s|t|r|a|t|i|o|n' ] types_to_output = [ 'I|m|a|g|e|/|P|h|o|t|o|g|r|a|p|h' ] types_to_output = [ 'Legal Notice|News' ] types_to_output = [ 'Letter to the Editor|Correspondence' ] types_to_output = [ 'Marriage Announcement|News' ] types_to_output = [ 'N|e|w|s' ] types_to_output = [ 'News|Legal Notice' ] types_to_output = [ 'News|Marriage Announcement' ] types_to_output = [ 'News|Military/War News' ] types_to_output = [ 'O|b|i|t|u|a|r|y' ] types_to_output = [ 'R|e|v|i|e|w' ] types_to_output = [ 'S|t|o|c|k| |Q|u|o|t|e' ] types_to_output = [ 'Table of Contents|Front Matter' ] types_to_output = [ 'Table Of Contents|Front Matter' ] types_to_output = [ 'U|n|d|e|f|i|n|e|d' ] # declare variables xml_file_path_list = None xml_file_path_count = None xml_file_path_example_list = None xml_file_path = None xml_file = None xml_dict = None xml_string = None # loop over types for object_type in types_to_output: # print type and count xml_file_path_list = object_type_to_file_path_map.get( object_type, [] ) xml_file_path_count = len( xml_file_path_list ) xml_file_path_example_list = xml_file_path_list[ : 10 ] print( "\n- {} - {} files:".format( object_type, xml_file_path_count ) ) for xml_file_path in xml_file_path_example_list: print( "----> {}".format( xml_file_path ) ) # try to parse the file with open( xml_file_path ) as xml_file: # parse XML xml_dict = xmltodict.parse( xml_file.read() ) #-- END with open( xml_file_path ) as xml_file: --# # pretty-print xml_string = xmltodict.unparse( xml_dict, pretty = True ) # output print( xml_string ) #-- END loop over example file paths. --# #-- END loop over object types. --# Explanation: explore all known object types Back to Table of Contents Look at all known object types to see which contain actual news content. End of explanation # directory to work in. 
uncompressed_archive_folder = "CSM_20170929191926_00001" uncompressed_archive_path = "{}/{}".format( uncompressed_paper_path, uncompressed_archive_folder ) print( 'Uncompressed archive folder: {}'.format( uncompressed_archive_path ) ) # build map of file types to lists of files of that type in specified folder. object_type_to_file_path_map = my_paper.map_archive_folder_files_to_types( uncompressed_archive_path ) # which types do we want to preview? types_to_output = news_object_type_list # declare variables xml_file_path_list = None xml_file_path_count = None xml_file_path_example_list = None xml_file_path = None xml_file = None xml_dict = None xml_string = None # loop over types for object_type in types_to_output: # print type and count xml_file_path_list = object_type_to_file_path_map.get( object_type, [] ) xml_file_path_count = len( xml_file_path_list ) xml_file_path_example_list = xml_file_path_list[ : 10 ] print( "\n- {} - {} files:".format( object_type, xml_file_path_count ) ) for xml_file_path in xml_file_path_example_list: print( "----> {}".format( xml_file_path ) ) # try to parse the file with open( xml_file_path ) as xml_file: # parse XML xml_dict = xmltodict.parse( xml_file.read() ) #-- END with open( xml_file_path ) as xml_file: --# # pretty-print xml_string = xmltodict.unparse( xml_dict, pretty = True ) # output print( xml_string ) #-- END loop over example file paths. --# #-- END loop over object types. --# Explanation: files in archive CSM_20170929191926_00001 - 1994 Back to Table of Contents Archive details: ID: 903 Newspaper: 3 - ChristianScienceMonitor - Christian Science Monitor, The archive_identifier: CSM_20170929191926_00001 min_date: 1994-01-03 max_date: 1994-12-30 path: /mnt/hgfs/projects/phd/proquest_hnp/uncompressed/ChristianScienceMonitor/CSM_20170929191926_00001 End of explanation
12,886
Given the following text description, write Python code to implement the functionality described below step by step Description: What is Serial Dependence? In earlier lessons, we investigated properties of time series that were most easily modeled as time dependent properties, that is, with features we could derive directly from the time index. Some time series properties, however, can only be modeled as serially dependent properties, that is, using as features past values of the target series. The structure of these time series may not be apparent from a plot over time; plotted against past values, however, the structure becomes clear -- as we see in the figure below below. <figure style="padding Step1: By lagging a time series, we can make its past values appear contemporaneous with the values we are trying to predict (in the same row, in other words). This makes lagged series useful as features for modeling serial dependence. To forecast the US unemployment rate series, we could use y_lag_1 and y_lag_2 as features to predict the target y. This would forecast the future unemployment rate as a function of the unemployment rate in the prior two months. Lag plots A lag plot of a time series shows its values plotted against its lags. Serial dependence in a time series will often become apparent by looking at a lag plot. We can see from this lag plot of US Unemployment that there is a strong and apparently linear relationship between the current unemployment rate and past rates. <figure style="padding Step2: Our Flu Trends data shows irregular cycles instead of a regular seasonality Step3: The lag plots indicate that the relationship of FluVisits to its lags is mostly linear, while the partial autocorrelations suggest the dependence can be captured using lags 1, 2, 3, and 4. We can lag a time series in Pandas with the shift method. For this problem, we'll fill in the missing values the lagging creates with 0.0. Step4: In previous lessons, we were able to create forecasts for as many steps as we liked beyond the training data. When using lag features, however, we are limited to forecasting time steps whose lagged values are available. Using a lag 1 feature on Monday, we can't make a forecast for Wednesday because the lag 1 value needed is Tuesday which hasn't happened yet. We'll see strategies for handling this problem in Lesson 6. For this example, we'll just use a values from a test set. Step5: Looking just at the forecast values, we can see how our model needs a time step to react to sudden changes in the target series. This is a common limitation of models using only lags of the target series as features. Step6: To improve the forecast we could try to find leading indicators, time series that could provide an "early warning" for changes in flu cases. For our second approach then we'll add to our training data the popularity of some flu-related search terms as measured by Google Trends. Plotting the search phrase 'FluCough' against the target 'FluVisits' suggests such search terms could be useful as leading indicators Step7: The dataset contains 129 such terms, but we'll just use a few. Step8: Our forecasts are a bit rougher, but our model appears to be better able to anticipate sudden increases in flu visits, suggesting that the several time series of search popularity were indeed effective as leading indicators.
Python Code: #$HIDE_INPUT$ import pandas as pd # Federal Reserve dataset: https://www.kaggle.com/federalreserve/interest-rates reserve = pd.read_csv( "../input/ts-course-data/reserve.csv", parse_dates={'Date': ['Year', 'Month', 'Day']}, index_col='Date', ) y = reserve.loc[:, 'Unemployment Rate'].dropna().to_period('M') df = pd.DataFrame({ 'y': y, 'y_lag_1': y.shift(1), 'y_lag_2': y.shift(2), }) df.head() Explanation: What is Serial Dependence? In earlier lessons, we investigated properties of time series that were most easily modeled as time dependent properties, that is, with features we could derive directly from the time index. Some time series properties, however, can only be modeled as serially dependent properties, that is, using as features past values of the target series. The structure of these time series may not be apparent from a plot over time; plotted against past values, however, the structure becomes clear -- as we see in the figure below below. <figure style="padding: 1em;"> <img src="https://i.imgur.com/X0sSnwp.png" width=800, alt=""> <figcaption style="textalign: center; font-style: italic"><center>These two series have serial dependence, but not time dependence. Points on the right have coordinates <code>(value at time t-1, value at time t)</code>. </center></figcaption> </figure> With trend and seasonality, we trained models to fit curves to plots like those on the left in the figure above -- the models were learning time dependence. The goal in this lesson is to train models to fit curves to plots like those on the right -- we want them to learn serial dependence. Cycles One especially common way for serial dependence to manifest is in cycles. Cycles are patterns of growth and decay in a time series associated with how the value in a series at one time depends on values at previous times, but not necessarily on the time step itself. Cyclic behavior is characteristic of systems that can affect themselves or whose reactions persist over time. Economies, epidemics, animal populations, volcano eruptions, and similar natural phenomena often display cyclic behavior. <figure style="padding: 1em;"> <img src="https://i.imgur.com/CC3TkAf.png" width=800, alt=""> <figcaption style="textalign: center; font-style: italic"><center>Four time series with cyclic behavior. </center></figcaption> </figure> What distinguishes cyclic behavior from seasonality is that cycles are not necessarily time dependent, as seasons are. What happens in a cycle is less about the particular date of occurence, and more about what has happened in the recent past. The (at least relative) independence from time means that cyclic behavior can be much more irregular than seasonality. Lagged Series and Lag Plots To investigate possible serial dependence (like cycles) in a time series, we need to create "lagged" copies of the series. Lagging a time series means to shift its values forward one or more time steps, or equivalently, to shift the times in its index backward one or more steps. In either case, the effect is that the observations in the lagged series will appear to have happened later in time. This shows the monthly unemployment rate in the US (y) together with its first and second lagged series (y_lag_1 and y_lag_2, respectively). Notice how the values of the lagged series are shifted forward in time. 
End of explanation #$HIDE_INPUT$ from pathlib import Path from warnings import simplefilter import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from scipy.signal import periodogram from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from statsmodels.graphics.tsaplots import plot_pacf simplefilter("ignore") # Set Matplotlib defaults plt.style.use("seaborn-whitegrid") plt.rc("figure", autolayout=True, figsize=(11, 4)) plt.rc( "axes", labelweight="bold", labelsize="large", titleweight="bold", titlesize=16, titlepad=10, ) plot_params = dict( color="0.75", style=".-", markeredgecolor="0.25", markerfacecolor="0.25", ) %config InlineBackend.figure_format = 'retina' def lagplot(x, y=None, lag=1, standardize=False, ax=None, **kwargs): from matplotlib.offsetbox import AnchoredText x_ = x.shift(lag) if standardize: x_ = (x_ - x_.mean()) / x_.std() if y is not None: y_ = (y - y.mean()) / y.std() if standardize else y else: y_ = x corr = y_.corr(x_) if ax is None: fig, ax = plt.subplots() scatter_kws = dict( alpha=0.75, s=3, ) line_kws = dict(color='C3', ) ax = sns.regplot(x=x_, y=y_, scatter_kws=scatter_kws, line_kws=line_kws, lowess=True, ax=ax, **kwargs) at = AnchoredText( f"{corr:.2f}", prop=dict(size="large"), frameon=True, loc="upper left", ) at.patch.set_boxstyle("square, pad=0.0") ax.add_artist(at) ax.set(title=f"Lag {lag}", xlabel=x_.name, ylabel=y_.name) return ax def plot_lags(x, y=None, lags=6, nrows=1, lagplot_kwargs={}, **kwargs): import math kwargs.setdefault('nrows', nrows) kwargs.setdefault('ncols', math.ceil(lags / nrows)) kwargs.setdefault('figsize', (kwargs['ncols'] * 2, nrows * 2 + 0.5)) fig, axs = plt.subplots(sharex=True, sharey=True, squeeze=False, **kwargs) for ax, k in zip(fig.get_axes(), range(kwargs['nrows'] * kwargs['ncols'])): if k + 1 <= lags: ax = lagplot(x, y, lag=k + 1, ax=ax, **lagplot_kwargs) ax.set_title(f"Lag {k + 1}", fontdict=dict(fontsize=14)) ax.set(xlabel="", ylabel="") else: ax.axis('off') plt.setp(axs[-1, :], xlabel=x.name) plt.setp(axs[:, 0], ylabel=y.name if y is not None else x.name) fig.tight_layout(w_pad=0.1, h_pad=0.1) return fig data_dir = Path("../input/ts-course-data") flu_trends = pd.read_csv(data_dir / "flu-trends.csv") flu_trends.set_index( pd.PeriodIndex(flu_trends.Week, freq="W"), inplace=True, ) flu_trends.drop("Week", axis=1, inplace=True) ax = flu_trends.FluVisits.plot(title='Flu Trends', **plot_params) _ = ax.set(ylabel="Office Visits") Explanation: By lagging a time series, we can make its past values appear contemporaneous with the values we are trying to predict (in the same row, in other words). This makes lagged series useful as features for modeling serial dependence. To forecast the US unemployment rate series, we could use y_lag_1 and y_lag_2 as features to predict the target y. This would forecast the future unemployment rate as a function of the unemployment rate in the prior two months. Lag plots A lag plot of a time series shows its values plotted against its lags. Serial dependence in a time series will often become apparent by looking at a lag plot. We can see from this lag plot of US Unemployment that there is a strong and apparently linear relationship between the current unemployment rate and past rates. <figure style="padding: 1em;"> <img src="https://i.imgur.com/Hvrboya.png" width=600, alt=""> <figcaption style="textalign: center; font-style: italic"><center>Lag plot of US Unemployment with autocorrelations indicated. 
</center></figcaption> </figure> The most commonly used measure of serial dependence is known as autocorrelation, which is simply the correlation a time series has with one of its lags. US Unemployment has an autocorrelation of 0.99 at lag 1, 0.98 at lag 2, and so on. Choosing lags When choosing lags to use as features, it generally won't be useful to include every lag with a large autocorrelation. In US Unemployment, for instance, the autocorrelation at lag 2 might result entirely from "decayed" information from lag 1 -- just correlation that's carried over from the previous step. If lag 2 doesn't contain anything new, there would be no reason to include it if we already have lag 1. The partial autocorrelation tells you the correlation of a lag accounting for all of the previous lags -- the amount of "new" correlation the lag contributes, so to speak. Plotting the partial autocorrelation can help you choose which lag features to use. In the figure below, lag 1 through lag 6 fall outside the intervals of "no correlation" (in blue), so we might choose lags 1 through lag 6 as features for US Unemployment. (Lag 11 is likely a false positive.) <figure style="padding: 1em;"> <img src="https://i.imgur.com/6nTe94E.png" width=600, alt=""> <figcaption style="textalign: center; font-style: italic"><center>Partial autocorrelations of US Unemployment through lag 12 with 95% confidence intervals of no correlation. </center></figcaption> </figure> A plot like that above is known as a correlogram. The correlogram is for lag features essentially what the periodogram is for Fourier features. Finally, we need to be mindful that autocorrelation and partial autocorrelation are measures of linear dependence. Because real-world time series often have substantial non-linear dependences, it's best to look at a lag plot (or use some more general measure of dependence, like mutual information) when choosing lag features. The Sunspots series has lags with non-linear dependence which we might overlook with autocorrelation. <figure style="padding: 1em;"> <img src="https://i.imgur.com/Q38UVOu.png" width=350, alt=""> <figcaption style="textalign: center; font-style: italic"><center>Lag plot of the <em>Sunspots</em> series. </center></figcaption> </figure> Non-linear relationships like these can either be transformed to be linear or else learned by an appropriate algorithm. Example - Flu Trends The Flu Trends dataset contains records of doctor's visits for the flu for weeks between 2009 and 2016. Our goal is to forecast the number of flu cases for the coming weeks. We will take two approaches. In the first we'll forecast doctor's visits using lag features. Our second approach will be to forecast doctor's visits using lags of another set of time series: flu-related search terms as captured by Google Trends. End of explanation _ = plot_lags(flu_trends.FluVisits, lags=12, nrows=2) _ = plot_pacf(flu_trends.FluVisits, lags=12) Explanation: Our Flu Trends data shows irregular cycles instead of a regular seasonality: the peak tends to occur around the new year, but sometimes earlier or later, sometimes larger or smaller. Modeling these cycles with lag features will allow our forecaster to react dynamically to changing conditions instead of being constrained to exact dates and times as with seasonal features. 
Let's take a look at the lag and autocorrelation plots first: End of explanation def make_lags(ts, lags): return pd.concat( { f'y_lag_{i}': ts.shift(i) for i in range(1, lags + 1) }, axis=1) X = make_lags(flu_trends.FluVisits, lags=4) X = X.fillna(0.0) Explanation: The lag plots indicate that the relationship of FluVisits to its lags is mostly linear, while the partial autocorrelations suggest the dependence can be captured using lags 1, 2, 3, and 4. We can lag a time series in Pandas with the shift method. For this problem, we'll fill in the missing values the lagging creates with 0.0. End of explanation #$HIDE_INPUT$ # Create target series and data splits y = flu_trends.FluVisits.copy() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=60, shuffle=False) # Fit and predict model = LinearRegression() # `fit_intercept=True` since we didn't use DeterministicProcess model.fit(X_train, y_train) y_pred = pd.Series(model.predict(X_train), index=y_train.index) y_fore = pd.Series(model.predict(X_test), index=y_test.index) #$HIDE_INPUT$ ax = y_train.plot(**plot_params) ax = y_test.plot(**plot_params) ax = y_pred.plot(ax=ax) _ = y_fore.plot(ax=ax, color='C3') Explanation: In previous lessons, we were able to create forecasts for as many steps as we liked beyond the training data. When using lag features, however, we are limited to forecasting time steps whose lagged values are available. Using a lag 1 feature on Monday, we can't make a forecast for Wednesday because the lag 1 value needed is Tuesday which hasn't happened yet. We'll see strategies for handling this problem in Lesson 6. For this example, we'll just use a values from a test set. End of explanation #$HIDE_INPUT$ ax = y_test.plot(**plot_params) _ = y_fore.plot(ax=ax, color='C3') Explanation: Looking just at the forecast values, we can see how our model needs a time step to react to sudden changes in the target series. This is a common limitation of models using only lags of the target series as features. End of explanation #$HIDE_INPUT$ ax = flu_trends.plot( y=["FluCough", "FluVisits"], secondary_y="FluCough", ) Explanation: To improve the forecast we could try to find leading indicators, time series that could provide an "early warning" for changes in flu cases. For our second approach then we'll add to our training data the popularity of some flu-related search terms as measured by Google Trends. Plotting the search phrase 'FluCough' against the target 'FluVisits' suggests such search terms could be useful as leading indicators: flu-related searches tend to become more popular in the weeks prior to office visits. End of explanation search_terms = ["FluContagious", "FluCough", "FluFever", "InfluenzaA", "TreatFlu", "IHaveTheFlu", "OverTheCounterFlu", "HowLongFlu"] # Create three lags for each search term X0 = make_lags(flu_trends[search_terms], lags=3) # Create four lags for the target, as before X1 = make_lags(flu_trends['FluVisits'], lags=4) # Combine to create the training data X = pd.concat([X0, X1], axis=1).fillna(0.0) Explanation: The dataset contains 129 such terms, but we'll just use a few. 
End of explanation #$HIDE_INPUT$ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=60, shuffle=False) model = LinearRegression() model.fit(X_train, y_train) y_pred = pd.Series(model.predict(X_train), index=y_train.index) y_fore = pd.Series(model.predict(X_test), index=y_test.index) ax = y_test.plot(**plot_params) _ = y_fore.plot(ax=ax, color='C3') Explanation: Our forecasts are a bit rougher, but our model appears to be better able to anticipate sudden increases in flu visits, suggesting that the several time series of search popularity were indeed effective as leading indicators. End of explanation
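To make the comparison with the lag-only model concrete, a small addition (not in the original notebook) is to score both forecasts on the same 60-week holdout:
from sklearn.metrics import mean_absolute_error

# `y_test` and `y_fore` are defined above; `y_fore_lags_only` is a hypothetical
# name for a saved copy of the earlier lag-only forecast, which this notebook
# otherwise overwrites when the second model reuses the same variable names.
print("MAE, lags only       :", mean_absolute_error(y_test, y_fore_lags_only))
print("MAE, lags + searches :", mean_absolute_error(y_test, y_fore))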
12,887
Given the following text description, write Python code to implement the functionality described below step by step Description: http Step1: Complex 1-n correspondance Step2: https
Python Code: keyEN = ['red', 'yellow', 'green', 'blue', 'black'] keyFR1 = ['rouge', 'jaune', 'vert', 'bleu', 'noir'] keyFR2 = ['jaune', 'vert', 'bleu', 'noir', 'rouge'] keyDE = ['gelb', 'gruen', 'blau', 'schwartz', 'rot'] dataENFR = pd.DataFrame({'keyEN' : keyEN, 'keyFR' : keyFR1}) dataENFR dataFRDE = pd.DataFrame({'keyFR' : keyFR2, 'keyDE' : keyDE}) dataFRDE simpleMerge = pd.merge(dataENFR, dataFRDE, on='keyFR', how='outer') simpleMerge Explanation: http://pandas.pydata.org/pandas-docs/version/0.13.1/merging.html Simple simple 1-1 correspondance End of explanation users = ['Tom', 'Tom', 'Tom', 'Bill', 'Bill', 'Bill', 'Bill', 'Jack', 'Bob', 'Jim'] sessionsUsers = ['sessionTom1', 'sessionTom2', 'sessionTom3', 'sessionBill1', 'sessionBill2', 'sessionBill3', 'sessionBill4', 'sessionJack', 'sessionBob', 'sessionJim'] sessionsChapters = [ 'sessionTom1', 'sessionTom1', 'sessionTom1', 'sessionTom2', 'sessionTom2', 'sessionTom3', 'sessionBill1', 'sessionBill2', 'sessionBill2', 'sessionBill3', 'sessionBill3', 'sessionBill3', 'sessionBill4', 'sessionBill4', 'sessionBill4', 'sessionBill4', 'sessionJack', 'sessionJack', 'sessionJack', 'sessionBob', 'sessionJim'] chaptersSessions = ['1', '2', '3', '1', '2', '1', '1', '2', '3', '4', '5', '6', '5', '6', '5', '6', '9', '10', '11', '10', '1'] times = 100 * np.random.rand(len(chaptersSessions)) times.sort() times Explanation: Complex 1-n correspondance End of explanation dataUsers = pd.DataFrame({'users' : users, 'sessions' : sessionsUsers}) #dataUsers dataChapters = pd.DataFrame({'sessions' : sessionsChapters, 'chapters' : chaptersSessions, 'times' : times}) #dataChapters complexMerge = pd.merge(dataUsers, dataChapters, on='sessions', how='outer') complexMerge usersChapters = complexMerge.drop('sessions', 1) usersChapters.groupby('users').max() Explanation: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.rand.html End of explanation
12,888
Given the following text description, write Python code to implement the functionality described below step by step Description: The input data Here we use the (not publically available) data for Chicago, which has we believe accurate geocoding. Step1: Just the south side Step2: Covariance For KDE applications, it is interesting to think about the covariance matrix of the data. More precisely, the common approach to KDE is to first apply a Whitening transformation to the data, fit a KDE with a radially symmetric kernel, and then transform back. Step3: Using a masked grid When we consider "coverage levels" we naturally want to intersect our grid with the outline of the geographic region of interest.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import matplotlib import descartes import os import numpy as np import descartes import open_cp.sources.chicago as chicago import open_cp.naive import open_cp.geometry import open_cp.plot datadir = os.path.join("//media", "disk", "Data") #datadir = os.path.join("..", "..", "..", "..", "..", "Data") chicago.set_data_directory(datadir) south_side = chicago.get_side("South") points = chicago.load(os.path.join(datadir, "chicago_two.csv"), {"BURGLARY"}, type="all_other") points.number_data_points, points.time_range fig, ax = plt.subplots(ncols=2, figsize=(18,10)) ax[0].scatter(*points.coords, marker="+", alpha=0.05, color="Black") ax[0].set_aspect(1) ax[0].set_title("All Burglary crime") pred = open_cp.naive.CountingGridKernel(250) pred.data = points risk = pred.predict() matrix = np.ma.masked_where(risk.intensity_matrix==0, risk.intensity_matrix) mappable = ax[1].pcolor(*risk.mesh_data(), matrix, cmap="Blues", edgecolor="none", linewidth=1) ax[1].set_title("Count in each 250m square cell") cbar = fig.colorbar(mappable, orientation="vertical", ax=ax[1]) cbar.set_label("Total crime count in region") Explanation: The input data Here we use the (not publically available) data for Chicago, which has we believe accurate geocoding. End of explanation cdict = {'red': [(0.0, 1.0, 1.0), (1.0, 1.0, 1.0)], 'green': [(0.0, 1.0, 1.0), (1.0, 0.0, 0.0)], 'blue': [(0.0, 0.2, 0.2), (1.0, 0.2, 0.2)]} yellow_to_red = matplotlib.colors.LinearSegmentedColormap("yellow_to_red", cdict) points_south_side = open_cp.geometry.intersect_timed_points(points, south_side) points_south_side.number_data_points, points_south_side.time_range fig, ax = plt.subplots(ncols=2, figsize=(18,10)) ax[0].scatter(*points_south_side.coords, marker="+", alpha=0.05, color="Black") ax[0].set_aspect(1) ax[0].set_title("All Burglary crime") pred = open_cp.naive.CountingGridKernel(250) pred.data = points_south_side risk = pred.predict() matrix = np.ma.masked_where(risk.intensity_matrix==0, risk.intensity_matrix) mappable = ax[1].pcolor(*risk.mesh_data(), matrix, cmap=yellow_to_red, edgecolor="black", linewidth=0.5) ax[1].set_title("Count in each 250m square cell") cax = fig.add_axes([0.9, 0.2, 0.01, 0.7]) cbar = fig.colorbar(mappable, orientation="vertical", cax=cax) cbar.set_label("Total crime count in region") xmin, ymin, xmax, ymax = south_side.bounds for a in ax: a.add_patch(descartes.PolygonPatch(south_side, fc="none", ec="Black")) a.set(xlim=[xmin-500, xmax+500], ylim=[ymin-500, ymax+500]) a.set_aspect(1) Explanation: Just the south side End of explanation import scipy.linalg S = np.cov(points.coords) S = scipy.linalg.inv(S) S = scipy.linalg.cholesky(S) x = points.coords - np.mean(points.coords, axis=1)[:,None] x = np.dot(S, x) fig, ax = plt.subplots(figsize=(8,8)) ax.scatter(*x) ax.set_aspect(1) ax.set(title="'Whitened' points") S S = np.cov(points_south_side.coords) S = scipy.linalg.inv(S) S = scipy.linalg.cholesky(S) x = points_south_side.coords - np.mean(points_south_side.coords, axis=1)[:,None] x = np.dot(S, x) fig, ax = plt.subplots(figsize=(8,8)) ax.scatter(*x) ax.set_aspect(1) ax.set(title="'Whitened' points for South Side") S Explanation: Covariance For KDE applications, it is interesting to think about the covariance matrix of the data. More precisely, the common approach to KDE is to first apply a Whitening transformation to the data, fit a KDE with a radially symmetric kernel, and then transform back. 
End of explanation grid = open_cp.data.Grid(xsize=250, ysize=250, xoffset=0, yoffset=0) masked_grid = open_cp.geometry.mask_grid_by_intersection(south_side, grid) grid = open_cp.data.Grid(xsize=250, ysize=250, xoffset=125, yoffset=125) masked_grid_off = open_cp.geometry.mask_grid_by_intersection(south_side, grid) fig, ax = plt.subplots(ncols=2, figsize=(16,8)) for a in ax: a.add_patch(descartes.PolygonPatch(south_side, fc="none", ec="Black")) a.add_patch(descartes.PolygonPatch(south_side, fc="Blue", ec="none", alpha=0.2)) xmin, ymin, xmax, ymax = south_side.bounds a.set(xlim=[xmin-500,xmax+500], ylim=[ymin-500,ymax+500]) pc = open_cp.plot.patches_from_grid(masked_grid) ax[0].add_collection(matplotlib.collections.PatchCollection(pc, facecolor="None", edgecolor="black")) pc = open_cp.plot.patches_from_grid(masked_grid_off) ax[1].add_collection(matplotlib.collections.PatchCollection(pc, facecolor="None", edgecolor="black")) None Explanation: Using a masked grid When we consider "coverage levels" we naturally want to intersect our grid with the outline of the geographic region of interest. End of explanation
12,889
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction In this blog post, I want to show you a graph-based way to split up a class into several independent ones. We take a small example class from Michael Feathers' book "Working effectively with legacy code" and use Neo4j's Awesome Procedures On Cypher (APOC). Hint Step1: Cleaning existing data Step2: Adding new usage relationship We want to look at the usage dependencies between methods and fields. In the predefined schema, there is a distinction between reading and writing access to fields for each method. We set up a new relationship to signal just the usage of a field of a particular class by adding a new relationshop USES. Step3: We do the same for the dependency between methods. Here, we can just add the relationship USES based on the existing INVOKE relationship type. Step4: Next, we calculate for each usage of a method Step5: Now we have to move the information of the called items to the relationship.
Python Code: %load_ext cypher Explanation: Introduction In this blog post, I want to show you a graph-based way to split up a class into several independent ones. We take a small example class from Michael Feathers' book "Working effectively with legacy code" and use Neo4j's Awesome Procedures On Cypher (APOC). Hint: To run the notebook version of this blog post, you need to install the ipython-cypher extension. ```java class Reservation { private int duration; private int dailyRate; private Date date; private Customer customer; private List fees = new ArrayList(); public Reservation(Customer customer, int duration, int dailyRate, Date date) { this.customer = customer; this.duration = duration; this.dailyRate = dailyRate; this.date = date; } public void extend(int additionalDays) { duration += additionalDays; } public void extendForWeek() { int weekRemainder = RentalCalendar.weekRemainderFor(date); final int DAYS_PER_WEEK = 7; extend(weekRemainder); dailyRate = RateCalculator.computeWeekly( customer.getRateCode()) / DAYS_PER_WEEK; } public void addFee(FeeRider rider) { fees.add(rider); } int getAdditionalFees() { int total = 0; for (Iterator it = fees.iterator(); it.hasNext(); ) { total += ((FeeRider) (it.next())).getAmount(); } return total; } int getPrincipalFee() { return dailyRate * RateCalculator.rateBase(customer) * duration; } public int getTotalFee() { return getPrincipalFee() + getAdditionalFees(); } } ``` End of explanation %%cypher MATCH ()-[u:USES]->(), (n:NewClass)-[s:SHOULD_DECLARE]->() DELETE u,s,n Explanation: Cleaning existing data End of explanation %%cypher MATCH (c:Class {name : "Reservation"}), (c)-[:DECLARES]->(m:Method), (c)-[:DECLARES]->(f:Field), (m)-[:READS|WRITES]->(f) WHERE NOT (m:Constructor) MERGE (m)-[u:USES]->(f) RETURN m.name as method, type(u) as relType, f.name as field Explanation: Adding new usage relationship We want to look at the usage dependencies between methods and fields. In the predefined schema, there is a distinction between reading and writing access to fields for each method. We set up a new relationship to signal just the usage of a field of a particular class by adding a new relationshop USES. End of explanation %%cypher MATCH (c:Class {name : "Reservation"}), (c)-[:DECLARES]->(m:Method), (c)-[:DECLARES]->(m2:Method), (m)-[:INVOKES]->(m2:Method) WHERE NOT (m:Constructor) MERGE (m)-[u:USES]->(m2) RETURN m.name as caller, type(u) as relType, m2.name as callee Explanation: We do the same for the dependency between methods. Here, we can just add the relationship USES based on the existing INVOKE relationship type. 
End of explanation %%cypher MATCH (m:Method)-[u:USES]->() WITH m, COUNT(u) as weight SET m.weight = weight RETURN m.name as method, weight %%cypher MATCH (m)-[u:USES]->(m2:Method) WITH m2, COUNT(u) as weight SET m2.weight = weight RETURN m2.name as callee, weight Explanation: Next, we calculate for each usage of a method End of explanation %%cypher MATCH (caller)-[r:USES]->(callee) SET r.weight = callee.weight RETURN count(r) %%cypher CALL apoc.algo.community(25,null,'group','USES','OUTGOING','weight',10000) %%cypher MATCH (m:Method)-[:USES]->(f:Field)<-[:USES]-(m2:Method) WHERE m.group <> m2.group WITH m.group as newGroupId, m2.group as oldGroupId MATCH (n:Method) WHERE n.group = oldGroupId SET n.group = [newGroupId, oldGroupId] SET n.merged = true RETURN DISTINCT(n.name), n.group; %%cypher MATCH (m:Method)-[:USES]->(:Field) WHERE NOT EXISTS(m.merged) WITH m, m.group as groupId SET m.merged = false RETURN m.name, m.group; %%cypher MATCH (m:Method)-[:USES]->(f) MERGE (c:NewClass { name: m.group}) MERGE (c)-[:SHOULD_DECLARE]->(m) MERGE (c)-[:SHOULD_DECLARE]->(f) RETURN c.name as newClass, m.name as method, f.name as field Explanation: Now we have to move the information of the called items to the relationship. End of explanation
12,890
Given the following text description, write Python code to implement the functionality described below step by step Description: Async optimization Loop Tim Head, February 2017. Step1: Bayesian optimization is used to tune parameters for walking robots or other experiments that are not a simple (expensive) function call. They often follow a pattern a bit like this Step2: Here a quick plot to visualize what the function looks like Step3: Now we setup the Optimizer class. The arguments follow the meaning and naming of the *_minimize() functions. An important difference is that you do not pass the objective function to the optimizer. Step4: To obtain a suggestion for the point at which to evaluate the objective you call the ask() method of opt Step5: In a real world use case you would probably go away and use this parameter in your experiment and come back a while later with the result. In this example we can simply evaluate the objective function and report the value back to the optimizer Step6: Like *_minimize() the first few points are random suggestions as there is no data yet with which to fit a surrogate model. Step7: We can now plot the random suggestions and the first model that has been fit Step8: Let us sample a few more points and plot the optimizer again Step9: By using the Optimizer class directly you get control over the optimization loop. You can also pickle your Optimizer instance if you want to end the process running it and resume it later. This is handy if your experiment takes a very long time and you want to shutdown your computer in the meantime
Python Code: import numpy as np np.random.seed(1234) %matplotlib inline import matplotlib.pyplot as plt plt.set_cmap("viridis") Explanation: Async optimization Loop Tim Head, February 2017. End of explanation from skopt.learning import ExtraTreesRegressor from skopt import Optimizer noise_level = 0.1 # Our 1D toy problem, this is the function we are trying to # minimize def objective(x, noise_level=noise_level): return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) + np.random.randn() * noise_level Explanation: Bayesian optimization is used to tune parameters for walking robots or other experiments that are not a simple (expensive) function call. They often follow a pattern a bit like this: 1. ask for a new set of parameters 1. walk to the experiment and program in the new parameters 1. observe the outcome of running the experiment 1. walk back to your laptop and tell the optimizer about the outcome 1. go to step 1 A setup like this is difficult to implement with the *_minimize() function interface. This is why scikit-optimize has a ask-and-tell interface that you can use when you want to control the execution of the optimization loop. This notenook demonstrates how to use the ask and tell interface. The Setup We will use a simple 1D problem to illustrate the API. This is a little bit artificial as you normally would not use the ask-and-tell interface if you had a function you can call to evaluate the objective. End of explanation # Plot f(x) + contours x = np.linspace(-2, 2, 400).reshape(-1, 1) fx = np.array([objective(x_i, noise_level=0.0) for x_i in x]) plt.plot(x, fx, "r--", label="True (unknown)") plt.fill(np.concatenate([x, x[::-1]]), np.concatenate(([fx_i - 1.9600 * noise_level for fx_i in fx], [fx_i + 1.9600 * noise_level for fx_i in fx[::-1]])), alpha=.2, fc="r", ec="None") plt.legend() plt.grid() plt.show() Explanation: Here a quick plot to visualize what the function looks like: End of explanation opt = Optimizer([(-2.0, 2.0)], "ET", acq_optimizer="sampling") Explanation: Now we setup the Optimizer class. The arguments follow the meaning and naming of the *_minimize() functions. An important difference is that you do not pass the objective function to the optimizer. End of explanation next_x = opt.ask() print(next_x) Explanation: To obtain a suggestion for the point at which to evaluate the objective you call the ask() method of opt: End of explanation f_val = objective(next_x) opt.tell(next_x, f_val) Explanation: In a real world use case you would probably go away and use this parameter in your experiment and come back a while later with the result. In this example we can simply evaluate the objective function and report the value back to the optimizer: End of explanation for i in range(9): next_x = opt.ask() f_val = objective(next_x) opt.tell(next_x, f_val) Explanation: Like *_minimize() the first few points are random suggestions as there is no data yet with which to fit a surrogate model. End of explanation from skopt.acquisition import gaussian_ei def plot_optimizer(opt, x, fx): model = opt.models[-1] x_model = opt.space.transform(x.tolist()) # Plot true function. 
plt.plot(x, fx, "r--", label="True (unknown)") plt.fill(np.concatenate([x, x[::-1]]), np.concatenate([fx - 1.9600 * noise_level, fx[::-1] + 1.9600 * noise_level]), alpha=.2, fc="r", ec="None") # Plot Model(x) + contours y_pred, sigma = model.predict(x_model, return_std=True) plt.plot(x, y_pred, "g--", label=r"$\mu(x)$") plt.fill(np.concatenate([x, x[::-1]]), np.concatenate([y_pred - 1.9600 * sigma, (y_pred + 1.9600 * sigma)[::-1]]), alpha=.2, fc="g", ec="None") # Plot sampled points plt.plot(opt.Xi, opt.yi, "r.", markersize=8, label="Observations") acq = gaussian_ei(x_model, model, y_opt=np.min(opt.yi)) # shift down to make a better plot acq = 4*acq - 2 plt.plot(x, acq, "b", label="EI(x)") plt.fill_between(x.ravel(), -2.0, acq.ravel(), alpha=0.3, color='blue') # Adjust plot layout plt.grid() plt.legend(loc='best') plot_optimizer(opt, x, fx) Explanation: We can now plot the random suggestions and the first model that has been fit: End of explanation for i in range(10): next_x = opt.ask() f_val = objective(next_x) opt.tell(next_x, f_val) plot_optimizer(opt, x, fx) Explanation: Let us sample a few more points and plot the optimizer again: End of explanation import pickle with open('my-optimizer.pkl', 'wb') as f: pickle.dump(opt, f) with open('my-optimizer.pkl', 'rb') as f: opt_restored = pickle.load(f) Explanation: By using the Optimizer class directly you get control over the optimization loop. You can also pickle your Optimizer instance if you want to end the process running it and resume it later. This is handy if your experiment takes a very long time and you want to shutdown your computer in the meantime: End of explanation
12,891
Given the following text description, write Python code to implement the functionality described below step by step Description: Variational Auto Encoders Reference Step1: Fashion MNIST Step2: Standard full-connected VAE model Let's define a VAE model with fully connected MLPs for the encoder and decoder networks. Step3: Encoder Step4: The VAE stochastic latent variable <img src="./images/vae_3.svg" width="600px" /> We use the reparametrization trick to define a random variable z that is conditioned on the input image x as follows Step5: Decoder Step6: By default the decoder outputs has random weights and output noise Step7: The generated image is completely univariate noise Step8: Note that the model has not yet converged even after 50 epochs. Furthermore it's is not overfitting significantly either. We chose a very low value for the latent dimension. It is likely that using the higher dimensional space could lead to a model either to optimize that would better fit the training set. By sampling a random latent vector from the prior distribution and feeding it to the decoder we can effectively sample from the image model trained by the VAE Step9: Use Ctrl-Enter several times to sample from various random locations in the 2D latent space. The generated pictures are blurry but capture of the global organization of pixels required to represent samples from the 10 fashion item categories. The spatial structure has been learned and is only present in the decoder weights. 2D plot of the image classes in the latent space We can also use the encoder to set the visualize the distribution of the test set in the 2D latent space of the VAE model. In the following the colors show the true class labels from the test samples. Note that the VAE is an unsupervised model Step10: Exercises One can see that the class labels 5, 7 and 9 are grouped in a cluster of the latent space. Use matplotlib to display some samples from each of those 3 classes and discover why they have been grouped together by the VAE model. Similarly Step11: 2D panel view of samples from the VAE manifold The following linearly spaced coordinates on the unit square were transformed through the inverse CDF (ppf) of the Gaussian to produce values of the latent variables z. This makes it possible to use a square arangement of panels that spans the gaussian prior of the latent space. Step12: Anomaly detection Let's rebuild a new VAE which encodes 9 of the 10 classes, and see if we can build a measure that shows wether the data is an anomaly We'll call standard classes the first 9 classes, and anomalies the last class (class n°9, which is "ankle boots") Step13: Is this method a good anomaly detection method? Let's compare the distribution of reconstruction errors from - standard test set images - class 9 images - random noise What can you interpret from this graph? Step14: Convolutional Variational Auto Encoder Step15: Exercise Step16: The stochastic latent variable is the same as for the fully-connected model. Step17: Decoder The decoder is also convolutional but instead of downsampling the spatial dimensions from (28, 28) to 2 latent dimensions, it starts from the latent space to upsample a (28, 28) dimensions using strided Conv2DTranspose layers. Here again BatchNormalization layers are inserted after the convolution to make optimization converge faster. Step18: This new decoder encodes some a priori knowledge on the local dependencies between pixel values in the "deconv" architectures. 
Depending on the randomly initialized weights, the generated images can show some local spatial structure. Try to re-execute the above two cells several times to try to see the kind of local structure that stem from the "deconv" architecture it-self for different random initializations of the weights. Again, let's now plug everything to together to get convolutional version of a full VAE model Step19: 2D plot of the image classes in the latent space We find again a similar organization of the latent space. Compared to the fully-connected VAE space, the different class labels seem slightly better separated. This could be a consequence of the slightly better fit we obtain from the convolutional models. Step20: 2D panel view of samples from the VAE manifold The following linearly spaced coordinates on the unit square were transformed through the inverse CDF (ppf) of the Gaussian to produce values of the latent variables z. This makes it possible to use a square arangement of panels that spans the gaussian prior of the latent space. Step21: Semi-supervised learning Let's reuse our encoder trained on many unlabeled samples to design a supervised model that can only use supervision from a small subset of samples with labels. To keep things simple we will just build a small supervised model on top of the latent representation defined by our encoder. We assume that we only have access to a small labeled subset with 50 examples per class (instead of 5000 examples per class in the full Fashion MNIST training set) Step22: Exercise
Python Code: import tensorflow as tf import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm from tensorflow.keras.layers import Input, Dense, Lambda, Flatten, Reshape, Conv2D, Conv2DTranspose from tensorflow.keras.models import Model from tensorflow.keras import metrics from tensorflow.keras.datasets import fashion_mnist Explanation: Variational Auto Encoders Reference: Adapted from the Keras example Auto-Encoding Variational Bayes https://arxiv.org/abs/1312.6114 End of explanation (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data() plt.figure(figsize=(16, 8)) for i in range(0, 18): plt.subplot(3, 6, i + 1) plt.imshow(x_train[i], cmap="gray") plt.axis("off") plt.show() y_train[0:10] x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. Explanation: Fashion MNIST End of explanation x_train_standard = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test_standard = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) x_train_standard.shape, x_test_standard.shape Explanation: Standard full-connected VAE model Let's define a VAE model with fully connected MLPs for the encoder and decoder networks. End of explanation original_dim = 784 latent_dim = 2 intermediate_dim = 256 def make_encoder(original_dim, intermediate_dim, latent_dim): x = Input(shape=(original_dim,)) hidden = Dense(intermediate_dim, activation='relu')(x) z_mean = Dense(latent_dim)(hidden) z_log_var = Dense(latent_dim)(hidden) return Model(inputs=x, outputs=[z_mean, z_log_var], name="mlp_encoder") encoder = make_encoder(original_dim, intermediate_dim, latent_dim) Explanation: Encoder End of explanation def sampling_func(inputs): z_mean, z_log_var = inputs batch_size = tf.shape(z_mean)[0] epsilon = tf.random.normal(shape=(batch_size, latent_dim), mean=0., stddev=1.) return z_mean + tf.exp(z_log_var / 2) * epsilon sampling_layer = Lambda(sampling_func, output_shape=(latent_dim,), name="latent_sampler") Explanation: The VAE stochastic latent variable <img src="./images/vae_3.svg" width="600px" /> We use the reparametrization trick to define a random variable z that is conditioned on the input image x as follows: $$ z \sim \mathcal{N}(\mu_z(x), \sigma_z(x)) $$ The reparametrization tricks defines $z$ has follows: $$ z = \mu_z(x) + \sigma_z(x) \cdot \epsilon$$ with: $$ \epsilon \sim \mathcal{N}(0, 1) $$ This way the dependency to between $z$ and $x$ is deterministic and differentiable. The randomness of $z$ only stems from $\epsilon$ only for a given $x$. Note that in practice the output of the encoder network parameterizes $log(\sigma^2_z(x)$ instead of $\sigma_z(x)$. 
Taking the exponential of $log(\sigma^2_z(x)$ ensures the positivity of the standard deviation from the raw output of the network: End of explanation def make_decoder(latent_dim, intermediate_dim, original_dim): decoder_input = Input(shape=(latent_dim,)) x = Dense(intermediate_dim, activation='relu')(decoder_input) x = Dense(original_dim, activation='sigmoid')(x) return Model(decoder_input, x, name="mlp_decoder") decoder = make_decoder(latent_dim, intermediate_dim, original_dim) Explanation: Decoder End of explanation random_z_from_prior = np.random.normal(loc=0, scale=1, size=(1, latent_dim)) generated = decoder.predict(random_z_from_prior) plt.imshow(generated.reshape(28, 28), cmap=plt.cm.gray) plt.axis('off'); Explanation: By default the decoder outputs has random weights and output noise: End of explanation def make_vae(input_shape, encoder, decoder, sampling_layer): # Build de model architecture by assembling the encoder, # stochastic latent variable and decoder: x = Input(shape=input_shape, name="input") z_mean, z_log_var = encoder(x) z = sampling_layer([z_mean, z_log_var]) x_decoded_mean = decoder(z) vae = Model(x, x_decoded_mean) # Define the VAE loss xent_loss = original_dim * metrics.binary_crossentropy( Flatten()(x), Flatten()(x_decoded_mean)) kl_loss = - 0.5 * tf.reduce_sum(1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1) vae_loss = tf.reduce_mean(xent_loss + kl_loss) vae.add_loss(vae_loss) vae.compile(optimizer='adam') return vae vae = make_vae((original_dim,), encoder, decoder, sampling_layer=sampling_layer) vae.summary() vae.fit(x_train_standard, epochs=50, batch_size=100, validation_data=(x_test_standard, None)) # vae.save_weights("standard_weights.h5") vae.load_weights("standard_weights.h5") Explanation: The generated image is completely univariate noise: there is no apparent spatial depenedencies between the pixel values. This reflects the lack of prior structure in the randomly initialized fully-connected decoder network. Let's now the plug the encoder and decoder via the stochastic latent variable $z$ to get the full VAE architecture. The loss function is the negative ELBO of the variational inference problem: End of explanation random_z_from_prior = np.random.normal(size=(1, latent_dim)).astype("float32") generated = decoder(random_z_from_prior).numpy() plt.imshow(generated.reshape(28, 28), cmap=plt.cm.gray) plt.axis('off'); Explanation: Note that the model has not yet converged even after 50 epochs. Furthermore it's is not overfitting significantly either. We chose a very low value for the latent dimension. It is likely that using the higher dimensional space could lead to a model either to optimize that would better fit the training set. By sampling a random latent vector from the prior distribution and feeding it to the decoder we can effectively sample from the image model trained by the VAE: End of explanation id_to_labels = {0: "T-shirt/top", 1: "Trouser", 2: "Pullover", 3: "Dress", 4: "Coat", 5: "Sandal", 6: "Shirt", 7: "Sneaker", 8: "Bag", 9: "Ankle boot"} x_test_encoded, x_test_encoded_log_var = encoder(x_test_standard) plt.figure(figsize=(7, 6)) plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test, cmap=plt.cm.tab10) cb = plt.colorbar() cb.set_ticks(list(id_to_labels.keys())) cb.set_ticklabels(list(id_to_labels.values())) cb.update_ticks() plt.show() Explanation: Use Ctrl-Enter several times to sample from various random locations in the 2D latent space. 
The generated pictures are blurry but capture of the global organization of pixels required to represent samples from the 10 fashion item categories. The spatial structure has been learned and is only present in the decoder weights. 2D plot of the image classes in the latent space We can also use the encoder to set the visualize the distribution of the test set in the 2D latent space of the VAE model. In the following the colors show the true class labels from the test samples. Note that the VAE is an unsupervised model: it did not use any label information during training. However we can observe that the 2D latent space is largely structured around the categories of images used in the training set. End of explanation # %load solutions/class_5_7_9.py # %load solutions/class_0_4_6.py # %load solutions/shape_marginal_latent_distribution.py Explanation: Exercises One can see that the class labels 5, 7 and 9 are grouped in a cluster of the latent space. Use matplotlib to display some samples from each of those 3 classes and discover why they have been grouped together by the VAE model. Similarly: can you qualitatively explain with matplotlib why class 0, 4 and 6 seem to be hard to disentangle in this 2D latent space discovered by the VAE model? One can observe that the global 2D shape of the encoded dataset is approximately spherical with values with a maximum radius of size 3. Where can you explain where the shape of this marginal latent distribution come from? End of explanation n = 15 # figure with 15x15 panels digit_size = 28 figure = np.zeros((digit_size * n, digit_size * n)) grid_x = norm.ppf(np.linspace(0.05, 0.95, n)).astype(np.float32) grid_y = norm.ppf(np.linspace(0.05, 0.95, n)).astype(np.float32) for i, yi in enumerate(grid_x): for j, xi in enumerate(grid_y): z_sample = np.array([[xi, yi]]) x_decoded = decoder(z_sample).numpy() digit = x_decoded[0].reshape(digit_size, digit_size) figure[i * digit_size: (i + 1) * digit_size, j * digit_size: (j + 1) * digit_size] = digit plt.figure(figsize=(10, 10)) plt.imshow(figure, cmap='Greys_r') plt.show() Explanation: 2D panel view of samples from the VAE manifold The following linearly spaced coordinates on the unit square were transformed through the inverse CDF (ppf) of the Gaussian to produce values of the latent variables z. This makes it possible to use a square arangement of panels that spans the gaussian prior of the latent space. 
End of explanation valid_indexes_train = y_train != 9 valid_indexes_test = y_test != 9 x_train_9 = x_train[valid_indexes_train] x_test_9 = x_test[valid_indexes_test] x_train_standard_9 = x_train_9.reshape((len(x_train_9), np.prod(x_train_9.shape[1:]))) x_test_standard_9 = x_test_9.reshape((len(x_test_9), np.prod(x_test_9.shape[1:]))) print(x_train_standard_9.shape, x_test_standard_9.shape) anomalies_indexes = y_test == 9 anomalies = x_test_standard[anomalies_indexes] # rebuild a new encoder, decoder, and train them on the limited dataset encoder = make_encoder(original_dim, intermediate_dim, latent_dim) decoder = make_decoder(latent_dim, intermediate_dim, original_dim) vae_9 = make_vae((original_dim,), encoder, decoder, sampling_layer=sampling_layer) vae_9.fit(x_train_standard_9, epochs=50, batch_size=100, validation_data=(x_test_standard_9, None)) # vae_9.save_weights("standard_weights_9.h5") vae_9.load_weights("standard_weights_9.h5") # For simplicity, we will do our sampling with numpy not with Keras or tensorflow def sampling_func_numpy(inputs): z_mean, z_log_var = inputs batch_size = np.shape(z_mean)[0] epsilon = np.random.normal(size=(batch_size, latent_dim), loc=0., scale=1.).astype("float32") return z_mean + np.exp(z_log_var / 2) * epsilon # Compute the reconstruction error: encode, sample, then decode. # To ensure we get a stable result, we'll run the sampling nb_sampling times def compute_reconstruction_error(img, nb_sampling=10): if len(img.shape) == 1: img = np.expand_dims(img, 0) batch_size = np.shape(img)[0] img_encoded_mean_and_var = encoder(img) img_encoded_samples = [sampling_func_numpy(img_encoded_mean_and_var) for x in range(nb_sampling)] # stack all samples img_encoded_samples = np.vstack(img_encoded_samples) reconstructed_samples = decoder(img_encoded_samples).numpy() # unstack all samples split_samples = reconstructed_samples.reshape(nb_sampling, batch_size, img.shape[-1]) errors = np.linalg.norm(split_samples - img, axis=-1) return np.mean(errors, axis=0) errors_test = compute_reconstruction_error(x_test_standard_9) errors_anomalies = compute_reconstruction_error(anomalies) noise = np.random.uniform(size=(1000, 784), low=0.0, high=1.0) errors_random = compute_reconstruction_error(noise.astype(np.float32)) # most anomalous in test set indexes = np.argsort(errors_test)[-18:] plt.figure(figsize=(16, 8)) for i in range(0, 18): plt.subplot(3, 6, i + 1) plt.imshow(x_test_9[indexes][i], cmap="gray") plt.axis("off") plt.show() # It shows weird shaped tops, or very complex shoes which are difficult to reconstruct # most normal in anomalies test set indexes = np.argsort(errors_anomalies)[0:18] plt.figure(figsize=(16, 8)) for i in range(0, 18): plt.subplot(3, 6, i + 1) plt.imshow(x_test[anomalies_indexes][indexes][i], cmap="gray") plt.axis("off") plt.show() # Indeed most of them do not look like ankle boot (they could belong to other shoes categories)! 
# most anomalous in anomalies test set indexes = np.argsort(errors_anomalies)[-18:] plt.figure(figsize=(16, 8)) for i in range(0, 18): plt.subplot(3, 6, i + 1) plt.imshow(x_test[anomalies_indexes][indexes][i], cmap="gray") plt.axis("off") plt.show() Explanation: Anomaly detection Let's rebuild a new VAE which encodes 9 of the 10 classes, and see if we can build a measure that shows wether the data is an anomaly We'll call standard classes the first 9 classes, and anomalies the last class (class n°9, which is "ankle boots") End of explanation fig = plt.figure() ax = fig.add_subplot(111) bins = np.linspace(0, 12, 30) a1 = ax.hist(np.random.choice(errors_test, 1000, replace=False), bins=bins, color="blue", alpha=0.5,) a2 = ax.hist(errors_anomalies, bins=bins, color="red", alpha=0.5) a3 = ax.hist(errors_random, bins=bins, color="green", alpha=0.5) plt.legend(('standard (classes 0 to 8)', 'ankle boots (class 9)', 'random pixels (white noise)')) plt.show() Explanation: Is this method a good anomaly detection method? Let's compare the distribution of reconstruction errors from - standard test set images - class 9 images - random noise What can you interpret from this graph? End of explanation x_train_conv = np.expand_dims(x_train, -1) x_test_conv = np.expand_dims(x_test, -1) x_train_conv.shape, x_test_conv.shape Explanation: Convolutional Variational Auto Encoder End of explanation from tensorflow.keras.layers import BatchNormalization img_rows, img_cols, img_chns = 28, 28, 1 filters = 32 kernel_size = 3 intermediate_dim = 128 latent_dim = 2 def make_conv_encoder(img_rows, img_cols, img_chns, latent_dim, intermediate_dim): inp = x = Input(shape=(img_rows, img_cols, img_chns)) # TODO: write me! return Model(inputs=inp, outputs=[z_mean, z_log_var], name='convolutional_encoder') conv_encoder = make_conv_encoder(img_rows, img_cols, img_chns, latent_dim, intermediate_dim) print(conv_encoder.summary()) conv_encoder.predict(x_train_conv[:1]) # %load solutions/conv_encoder.py Explanation: Exercise: write an encoder that uses a series of convolutional layers, with maxpooling or strided convolutions and Batch norm to encode the 2D, gray-level images into 2D latent vectors: End of explanation sampling_layer = Lambda(sampling_func, output_shape=(latent_dim,), name="latent_sampler") Explanation: The stochastic latent variable is the same as for the fully-connected model. 
End of explanation def make_conv_decoder(latent_dim, intermediate_dim, original_dim, spatial_size=7, filters=16): decoder_input = Input(shape=(latent_dim,)) x = Dense(intermediate_dim, activation='relu')(decoder_input) x = Dense(filters * spatial_size * spatial_size, activation='relu')(x) x = Reshape((spatial_size, spatial_size, filters))(x) # First up-sampling: x = Conv2DTranspose(filters, kernel_size=3, padding='same', strides=(2, 2), activation='relu')(x) x = BatchNormalization()(x) x = Conv2DTranspose(filters, kernel_size=3, padding='same', strides=1, activation='relu')(x) x = BatchNormalization()(x) # Second up-sampling: x = Conv2DTranspose(filters, kernel_size=3, strides=(2, 2), padding='valid', activation='relu')(x) x = BatchNormalization()(x) # Ouput 1 channel of gray pixels values between 0 and 1: x = Conv2D(1, kernel_size=2, padding='valid', activation='sigmoid')(x) return Model(decoder_input, x, name='convolutional_decoder') conv_decoder = make_conv_decoder(latent_dim, intermediate_dim, original_dim, spatial_size=7, filters=filters) print(conv_decoder.summary()) generated = conv_decoder.predict(np.random.normal(size=(1, latent_dim))) plt.imshow(generated.reshape(28, 28), cmap=plt.cm.gray) plt.axis('off'); Explanation: Decoder The decoder is also convolutional but instead of downsampling the spatial dimensions from (28, 28) to 2 latent dimensions, it starts from the latent space to upsample a (28, 28) dimensions using strided Conv2DTranspose layers. Here again BatchNormalization layers are inserted after the convolution to make optimization converge faster. End of explanation input_shape = (img_rows, img_cols, img_chns) vae = make_vae(input_shape, conv_encoder, conv_decoder, sampling_layer) vae.summary() vae.fit(x_train_conv, epochs=15, batch_size=100, validation_data=(x_test_conv, None)) # vae.save_weights("convolutional_weights.h5") vae.load_weights("convolutional_weights.h5") generated = conv_decoder.predict(np.random.normal(size=(1, latent_dim))) plt.imshow(generated.reshape(28, 28), cmap=plt.cm.gray) plt.axis('off'); Explanation: This new decoder encodes some a priori knowledge on the local dependencies between pixel values in the "deconv" architectures. Depending on the randomly initialized weights, the generated images can show some local spatial structure. Try to re-execute the above two cells several times to try to see the kind of local structure that stem from the "deconv" architecture it-self for different random initializations of the weights. Again, let's now plug everything to together to get convolutional version of a full VAE model: End of explanation x_test_encoded, _ = conv_encoder(x_test_conv) plt.figure(figsize=(7, 6)) plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test, cmap=plt.cm.tab10) cb = plt.colorbar() cb.set_ticks(list(id_to_labels.keys())) cb.set_ticklabels(list(id_to_labels.values())) cb.update_ticks() plt.show() Explanation: 2D plot of the image classes in the latent space We find again a similar organization of the latent space. Compared to the fully-connected VAE space, the different class labels seem slightly better separated. This could be a consequence of the slightly better fit we obtain from the convolutional models. 
End of explanation n = 15 # figure with 15x15 panels digit_size = 28 figure = np.zeros((digit_size * n, digit_size * n)) grid_x = norm.ppf(np.linspace(0.05, 0.95, n)) grid_y = norm.ppf(np.linspace(0.05, 0.95, n)) for i, yi in enumerate(grid_x): for j, xi in enumerate(grid_y): z_sample = np.array([[xi, yi]]) x_decoded = conv_decoder.predict(z_sample) digit = x_decoded[0].reshape(digit_size, digit_size) figure[i * digit_size: (i + 1) * digit_size, j * digit_size: (j + 1) * digit_size] = digit plt.figure(figsize=(10, 10)) plt.imshow(figure, cmap='Greys_r') plt.show() Explanation: 2D panel view of samples from the VAE manifold The following linearly spaced coordinates on the unit square were transformed through the inverse CDF (ppf) of the Gaussian to produce values of the latent variables z. This makes it possible to use a square arangement of panels that spans the gaussian prior of the latent space. End of explanation rng = np.random.RandomState(42) small_x_train = [] small_y_train = [] num_per_class = 50 for c in range(10): class_mask = np.where(y_train==c)[0] idx = rng.choice(class_mask, size=num_per_class, replace=False) small_x_train += [x_train_conv[idx]] small_y_train += [c] * num_per_class small_x_train = np.vstack(small_x_train) small_y_train = np.array(small_y_train) # reshuffle our small dataset perm = rng.permutation(range(small_y_train.shape[0])) small_x_train = small_x_train[perm] small_y_train = small_y_train[perm] small_x_train.shape, small_y_train.shape Explanation: Semi-supervised learning Let's reuse our encoder trained on many unlabeled samples to design a supervised model that can only use supervision from a small subset of samples with labels. To keep things simple we will just build a small supervised model on top of the latent representation defined by our encoder. We assume that we only have access to a small labeled subset with 50 examples per class (instead of 5000 examples per class in the full Fashion MNIST training set): End of explanation # TODO: implement me! # define `small_x_train_encoded` for in the input training data # define a model named `mdl` with its layers and its loss function. # %load solutions/small_classifier.py print(mdl.summary()) mdl.fit(small_x_train_encoded, small_y_train, epochs=30, validation_data=[x_test_encoded, y_test]) from sklearn.metrics import confusion_matrix y_pred = mdl.predict(x_test_encoded).argmax(axis=-1) cnf_matrix = confusion_matrix(y_test, y_pred) print(cnf_matrix) import itertools def plot_confusion_matrix(cm, classes, title='Confusion matrix', cmap=plt.cm.Blues): plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], 'd'), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') class_names = [name for id, name in sorted(id_to_labels.items())] plot_confusion_matrix(cnf_matrix, classes=class_names) Explanation: Exercise: Use conv_encoder to project small_x_train into the latent space; Define a small supervised 10-class classification network and use small_y_train to train it; What test accuracy can you reach? What is the chance level? Suggest what could be changed to improve the quality of our classification on this small labeled dataset. End of explanation
12,892
Given the following text description, write Python code to implement the functionality described below step by step Description: Build simple models to predict pulsar candidates In this notebook we will look at building machine learning models to predict Pulsar Candidate. The data comes from Rob Lyon at Manchester. This data is publically available. For more information check out https Step1: Load dataset Data is a csv file with each column as features and rows as samples of positive and negative candidates Class label is the last column where "1" correspondes to true pulsar candidate and "0" a false candidate Step2: Lets print the feature names Step3: Do a scatter plot Step4: Get the features and labels Step5: Split data to training and validation sets Step6: Lets do the training on different algorithms We will be using the following algorithms k-Nearest Neighbours (KNN) [ https Step7: Scikit Naive Bayes http Step8: Scikit MLP https Step9: Fancy function to print results for model evaluation Step10: Now Lets test each classifier and disply their accuracy
Python Code: # For numerical stuff import pandas as pd # Plotting import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D %matplotlib inline plt.rcParams['figure.figsize'] = (7.0, 7.0) # Some preprocessing utilities from sklearn.cross_validation import train_test_split # Data splitting from sklearn.utils import shuffle # The different classifiers from sklearn.neighbors import KNeighborsClassifier # Nearest Neighbor - Analogizer from sklearn.naive_bayes import GaussianNB # Bayesian Classifier - Bayesian from sklearn.neural_network import MLPClassifier # Neural Network - Connectionist # Model result function from sklearn.metrics import classification_report,accuracy_score Explanation: Build simple models to predict pulsar candidates In this notebook we will look at building machine learning models to predict Pulsar Candidate. The data comes from Rob Lyon at Manchester. This data is publically available. For more information check out https://figshare.com/articles/HTRU2/3080389/1 Lets start with the basic imports End of explanation data = pd.read_csv('Data/pulsar.csv') # Show some information print ('Dataset has %d rows and %d columns including features and labels'%(data.shape[0],data.shape[1])) Explanation: Load dataset Data is a csv file with each column as features and rows as samples of positive and negative candidates Class label is the last column where "1" correspondes to true pulsar candidate and "0" a false candidate End of explanation print (data.columns.values[0:-1]) Explanation: Lets print the feature names End of explanation ax = plt.figure().gca(projection='3d') ax.scatter3D(data['std_pf'], data['mean_dm'], data['mean_int_pf'],c=data['class'],alpha=.25) ax.set_xlabel('std_pf') ax.set_ylabel('mean_dm') ax.set_zlabel('mean_int_pf') Explanation: Do a scatter plot End of explanation # Lets shuffle the rows of the data 10 times for i in range(10): data = shuffle(data) # Now split the dataset into seperate variabels for features and labels features = data.ix[:,data.columns != 'class'].values # All columns except class labels = data['class'].values # Class labels Explanation: Get the features and labels End of explanation # Do a 70 - 30 split of the whole data for training and testing # The last argument specifies the fraction of samples for testing train_data,test_data,train_labels,test_labels = train_test_split(features,labels,test_size=.3) #Print some info print ('Number of training data points : %d'%(train_data.shape[0])) print ('Number of testing data points : %d'%(test_data.shape[0])) Explanation: Split data to training and validation sets End of explanation # K nearest neighbor knn = KNeighborsClassifier() knn.fit(train_data,train_labels) Explanation: Lets do the training on different algorithms We will be using the following algorithms k-Nearest Neighbours (KNN) [ https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm ] Naive Bayes Classifier [ https://en.wikipedia.org/wiki/Naive_Bayes_classifier ] Multilayer Neural Network [ https://en.wikipedia.org/wiki/Multilayer_perceptron ] Lets start with default model parameters for each classifier. 
Check the link above each block for function definition Scikit KNN http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html End of explanation # Naive Bayes nb = GaussianNB() nb.fit(train_data,train_labels) Explanation: Scikit Naive Bayes http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html End of explanation # MLP mlp = MLPClassifier(solver='sgd',hidden_layer_sizes=(5, 1)) mlp.fit(train_data,train_labels) Explanation: Scikit MLP https://en.wikipedia.org/wiki/Multilayer_perceptron End of explanation # Pretty function to test a model and print accuracy score def evaluate(model,modelname,test_data,test_labels): predictions = model.predict(test_data) # Do the actual prediction print('====================================================') print('Classification Report for %s'%modelname) print('===================================================') print(classification_report(test_labels,predictions,target_names=['Non Pulsar','Pulsar'])) print('\n The model is %f accurate' %(accuracy_score(test_labels,predictions)*100)) print('====================================================\n\n') # Making some stuff easy models =[knn,nb,mlp] model_names =['KNN','Naive Bayes','Neural Network'] Explanation: Fancy function to print results for model evaluation End of explanation for i in range(0,3): evaluate(models[i],model_names[i],test_data,test_labels) Explanation: Now Lets test each classifier and disply their accuracy End of explanation
12,893
Given the following text description, write Python code to implement the functionality described below step by step Description: 201 - Engineering Text Features Using mmlspark Modules and Spark SQL Again, try to predict Amazon book ratings greater than 3 out of 5, this time using the TextFeaturizer module which is a composition of several text analytics APIs that are native to Spark. Step1: Use TextFeaturizer to generate our features column. We remove stop words, and use TF-IDF to generate 2²⁰ sparse features. Step2: Change the label so that we can predict whether the rating is greater than 3 using a binary classifier. Step3: Train several Logistic Regression models with different regularizations. Step4: Find the model with the best AUC on the test set. Step5: Use the optimized ComputeModelStatistics API to find the model accuracy.
Python Code: import pandas as pd import mmlspark from pyspark.sql.types import IntegerType, StringType, StructType, StructField dataFile = "BookReviewsFromAmazon10K.tsv" textSchema = StructType([StructField("rating", IntegerType(), False), StructField("text", StringType(), False)]) import os, urllib if not os.path.isfile(dataFile): urllib.request.urlretrieve("https://mmlspark.azureedge.net/datasets/"+dataFile, dataFile) data = spark.createDataFrame(pd.read_csv(dataFile, sep="\t", header=None), textSchema) data.limit(10).toPandas() Explanation: 201 - Engineering Text Features Using mmlspark Modules and Spark SQL Again, try to predict Amazon book ratings greater than 3 out of 5, this time using the TextFeaturizer module which is a composition of several text analytics APIs that are native to Spark. End of explanation from mmlspark.TextFeaturizer import TextFeaturizer textFeaturizer = TextFeaturizer() \ .setInputCol("text").setOutputCol("features") \ .setUseStopWordsRemover(True).setUseIDF(True).setMinDocFreq(5).setNumFeatures(1 << 16).fit(data) processedData = textFeaturizer.transform(data) processedData.limit(5).toPandas() Explanation: Use TextFeaturizer to generate our features column. We remove stop words, and use TF-IDF to generate 2²⁰ sparse features. End of explanation processedData = processedData.withColumn("label", processedData["rating"] > 3) \ .select(["features", "label"]) processedData.limit(5).toPandas() Explanation: Change the label so that we can predict whether the rating is greater than 3 using a binary classifier. End of explanation train, test, validation = processedData.randomSplit([0.60, 0.20, 0.20]) from pyspark.ml.classification import LogisticRegression lrHyperParams = [0.05, 0.1, 0.2, 0.4] logisticRegressions = [LogisticRegression(regParam = hyperParam) for hyperParam in lrHyperParams] from mmlspark.TrainClassifier import TrainClassifier lrmodels = [TrainClassifier(model=lrm, labelCol="label").fit(train) for lrm in logisticRegressions] Explanation: Train several Logistic Regression models with different regularizations. End of explanation from mmlspark import FindBestModel, BestModel bestModel = FindBestModel(evaluationMetric="AUC", models=lrmodels).fit(test) bestModel.write().overwrite().save("model.mml") loadedBestModel = BestModel.load("model.mml") Explanation: Find the model with the best AUC on the test set. End of explanation from mmlspark.ComputeModelStatistics import ComputeModelStatistics predictions = loadedBestModel.transform(validation) metrics = ComputeModelStatistics().transform(predictions) print("Best model's accuracy on validation set = " + "{0:.2f}%".format(metrics.first()["accuracy"] * 100)) Explanation: Use the optimized ComputeModelStatistics API to find the model accuracy. End of explanation
12,894
Given the following text description, write Python code to implement the functionality described below step by step Description: version 1.0.2 + Introduction to Machine Learning with Apache Spark Predicting Movie Ratings One of the most common uses of big data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. This lab will demonstrate how we can use Apache Spark to recommend movies to a user. We will start with some basic techniques, and then use the Spark MLlib library's Alternating Least Squares method to make more sophisticated predictions. For this lab, we will use a subset dataset of 500,000 ratings we have included for you into your VM (and on Databricks) from the movielens 10M stable benchmark rating dataset. However, the same code you write will work for the full dataset, or their latest dataset of 21 million ratings. In this lab Step3: Part 0 Step4: In this lab we will be examining subsets of the tuples we create (e.g., the top rated movies by users). Whenever we examine only a subset of a large dataset, there is the potential that the result will depend on the order we perform operations, such as joins, or how the data is partitioned across the workers. What we want to guarantee is that we always see the same results for a subset, independent of how we manipulate or store the data. We can do that by sorting before we examine a subset. You might think that the most obvious choice when dealing with an RDD of tuples would be to use the sortByKey() method. However this choice is problematic, as we can still end up with different results if the key is not unique. Note Step6: Even though the two lists contain identical tuples, the difference in ordering sometimes yields a different ordering for the sorted RDD (try running the cell repeatedly and see if the results change or the assertion fails). If we only examined the first two elements of the RDD (e.g., using take(2)), then we would observe different answers - that is a really bad outcome as we want identical input data to always yield identical output. A better technique is to sort the RDD by both the key and value, which we can do by combining the key and value into a single string and then sorting on that string. Since the key is an integer and the value is a unicode string, we can use a function to combine them into a single unicode string (e.g., unicode('%.3f' % key) + ' ' + value) before sorting the RDD using sortBy(). Step7: If we just want to look at the first few elements of the RDD in sorted order, we can use the takeOrdered method with the sortFunction we defined. Step9: Part 1 Step10: (1b) Movies with Highest Average Ratings Now that we have a way to calculate the average ratings, we will use the getCountsAndAverages() helper function with Spark to determine movies with highest average ratings. The steps you should perform are Step11: (1c) Movies with Highest Average Ratings and more than 500 reviews Now that we have an RDD of the movies with highest averge ratings, we can use Spark to determine the 20 movies with highest average ratings and more than 500 reviews. Apply a single RDD transformation to movieNameWithAvgRatingsRDD to limit the results to movies with ratings from more than 500 people. We then use the sortFunction() helper function to sort by the average rating to get the movies in order of their rating (highest rating first). 
You will end up with an RDD of the form Step12: Using a threshold on the number of reviews is one way to improve the recommendations, but there are many other good ways to improve quality. For example, you could weight ratings by the number of ratings. Part 2 Step14: After splitting the dataset, your training set has about 293,000 entries and the validation and test sets each have about 97,000 entries (the exact number of entries in each dataset varies slightly due to the random nature of the randomSplit() transformation. (2b) Root Mean Square Error (RMSE) In the next part, you will generate a few different models, and will need a way to decide which model is best. We will use the Root Mean Square Error (RMSE) or Root Mean Square Deviation (RMSD) to compute the error of each model. RMSE is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSE serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSE is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent. The RMSE is the square root of the average value of the square of (actual rating - predicted rating) for all users and movies for which we have the actual rating. Versions of Spark MLlib beginning with Spark 1.4 include a RegressionMetrics modiule that can be used to compute the RMSE. However, since we are using Spark 1.3.1, we will write our own function. Write a function to compute the sum of squared error given predictedRDD and actualRDD RDDs. Both RDDs consist of tuples of the form (UserID, MovieID, Rating) Given two ratings RDDs, x and y of size n, we define RSME as follows Step15: (2c) Using ALS.train() In this part, we will use the MLlib implementation of Alternating Least Squares, ALS.train(). ALS takes a training dataset (RDD) and several parameters that control the model creation process. To determine the best values for the parameters, we will use ALS to train several models, and then we will select the best model and use the parameters from that model in the rest of this lab exercise. The process we will use for determining the best model is as follows Step16: (2d) Testing Your Model So far, we used the trainingRDD and validationRDD datasets to select the best model. Since we used these two datasets to determine what model is best, we cannot use them to test how good the model is - otherwise we would be very vulnerable to overfitting. To decide how good our model is, we need to use the testRDD dataset. We will use the bestRank you determined in part (2c) to create a model for predicting the ratings for the test dataset and then we will compute the RMSE. The steps you should perform are Step17: (2e) Comparing Your Model Looking at the RMSE for the results predicted by the model versus the values in the test set is one way to evalute the quality of our model. Another way to evaluate the model is to evaluate the error from a test set where every rating is the average rating for the training set. 
The steps you should perform are Step18: You now have code to predict how users will rate movies! Part 3 Step19: The user ID 0 is unassigned, so we will use it for your ratings. We set the variable myUserID to 0 for you. Next, create a new RDD myRatingsRDD with your ratings for at least 10 movie ratings. Each entry should be formatted as (myUserID, movieID, rating) (i.e., each entry should be formatted in the same way as trainingRDD). As in the original dataset, ratings should be between 1 and 5 (inclusive). If you have not seen at least 10 of these movies, you can increase the parameter passed to take() in the above cell until there are 10 movies that you have seen (or you can also guess what your rating would be for movies you have not seen). Step20: (3b) Add Your Movies to Training Dataset Now that you have ratings for yourself, you need to add your ratings to the training dataset so that the model you train will incorporate your preferences. Spark's union() transformation combines two RDDs; use union() to create a new training dataset that includes your ratings and the data in the original training dataset. Step21: (3c) Train a Model with Your Ratings Now, train a model with your ratings added and the parameters you used in in part (2c) Step22: (3d) Check RMSE for the New Model with Your Ratings Compute the RMSE for this new model on the test set. For the prediction step, we reuse testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extracted from testRDD. The RDD has the form Step23: (3e) Predict Your Ratings So far, we have only used the predictAll method to compute the error of the model. Here, use the predictAll to predict what ratings you would give to the movies that you did not already provide ratings for. The steps you should perform are Step24: (3f) Predict Your Ratings We have our predicted ratings. Now we can print out the 25 movies with the highest predicted ratings. The steps you should perform are
Python Code: import sys import os from test_helper import Test baseDir = os.path.join('data') inputPath = os.path.join('cs100', 'lab4', 'small') ratingsFilename = os.path.join(baseDir, inputPath, 'ratings.dat.gz') moviesFilename = os.path.join(baseDir, inputPath, 'movies.dat') Explanation: version 1.0.2 + Introduction to Machine Learning with Apache Spark Predicting Movie Ratings One of the most common uses of big data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. This lab will demonstrate how we can use Apache Spark to recommend movies to a user. We will start with some basic techniques, and then use the Spark MLlib library's Alternating Least Squares method to make more sophisticated predictions. For this lab, we will use a subset dataset of 500,000 ratings we have included for you into your VM (and on Databricks) from the movielens 10M stable benchmark rating dataset. However, the same code you write will work for the full dataset, or their latest dataset of 21 million ratings. In this lab: Part 0: Preliminaries Part 1: Basic Recommendations Part 2: Collaborative Filtering Part 3: Predictions for Yourself As mentioned during the first Learning Spark lab, think carefully before calling collect() on any datasets. When you are using a small dataset, calling collect() and then using Python to get a sense for the data locally (in the driver program) will work fine, but this will not work when you are using a large dataset that doesn't fit in memory on one machine. Solutions that call collect() and do local analysis that could have been done with Spark will likely fail in the autograder and not receive full credit. Code This assignment can be completed using basic Python and pySpark Transformations and Actions. Libraries other than math are not necessary. With the exception of the ML functions that we introduce in this assignment, you should be able to complete all parts of this homework using only the Spark functions you have used in prior lab exercises (although you are welcome to use more features of Spark if you like!). End of explanation numPartitions = 2 rawRatings = sc.textFile(ratingsFilename).repartition(numPartitions) rawMovies = sc.textFile(moviesFilename) def get_ratings_tuple(entry): Parse a line in the ratings dataset Args: entry (str): a line in the ratings dataset in the form of UserID::MovieID::Rating::Timestamp Returns: tuple: (UserID, MovieID, Rating) items = entry.split('::') return int(items[0]), int(items[1]), float(items[2]) def get_movie_tuple(entry): Parse a line in the movies dataset Args: entry (str): a line in the movies dataset in the form of MovieID::Title::Genres Returns: tuple: (MovieID, Title) items = entry.split('::') return int(items[0]), items[1] ratingsRDD = rawRatings.map(get_ratings_tuple).cache() moviesRDD = rawMovies.map(get_movie_tuple).cache() ratingsCount = ratingsRDD.count() moviesCount = moviesRDD.count() print 'There are %s ratings and %s movies in the datasets' % (ratingsCount, moviesCount) print 'Ratings: %s' % ratingsRDD.take(3) print 'Movies: %s' % moviesRDD.take(3) assert ratingsCount == 487650 assert moviesCount == 3883 assert moviesRDD.filter(lambda (id, title): title == 'Toy Story (1995)').count() == 1 assert (ratingsRDD.takeOrdered(1, key=lambda (user, movie, rating): movie) == [(1, 1, 5.0)]) Explanation: Part 0: Preliminaries We read in each of the files and create an RDD consisting of parsed lines. 
Each line in the ratings dataset (ratings.dat.gz) is formatted as: UserID::MovieID::Rating::Timestamp Each line in the movies (movies.dat) dataset is formatted as: MovieID::Title::Genres The Genres field has the format Genres1|Genres2|Genres3|... The format of these files is uniform and simple, so we can use Python split() to parse their lines. Parsing the two files yields two RDDS For each line in the ratings dataset, we create a tuple of (UserID, MovieID, Rating). We drop the timestamp because we do not need it for this exercise. For each line in the movies dataset, we create a tuple of (MovieID, Title). We drop the Genres because we do not need them for this exercise. End of explanation tmp1 = [(1, u'alpha'), (2, u'alpha'), (2, u'beta'), (3, u'alpha'), (1, u'epsilon'), (1, u'delta')] tmp2 = [(1, u'delta'), (2, u'alpha'), (2, u'beta'), (3, u'alpha'), (1, u'epsilon'), (1, u'alpha')] oneRDD = sc.parallelize(tmp1) twoRDD = sc.parallelize(tmp2) oneSorted = oneRDD.sortByKey(True).collect() twoSorted = twoRDD.sortByKey(True).collect() print oneSorted print twoSorted assert set(oneSorted) == set(twoSorted) # Note that both lists have the same elements assert twoSorted[0][0] < twoSorted.pop()[0] # Check that it is sorted by the keys assert oneSorted[0:2] != twoSorted[0:2] # Note that the subset consisting of the first two elements does not match Explanation: In this lab we will be examining subsets of the tuples we create (e.g., the top rated movies by users). Whenever we examine only a subset of a large dataset, there is the potential that the result will depend on the order we perform operations, such as joins, or how the data is partitioned across the workers. What we want to guarantee is that we always see the same results for a subset, independent of how we manipulate or store the data. We can do that by sorting before we examine a subset. You might think that the most obvious choice when dealing with an RDD of tuples would be to use the sortByKey() method. However this choice is problematic, as we can still end up with different results if the key is not unique. Note: It is important to use the unicode type instead of the string type as the titles are in unicode characters. Consider the following example, and note that while the sets are equal, the printed lists are usually in different order by value, although they may randomly match up from time to time. You can try running this multiple times. If the last assertion fails, don't worry about it: that was just the luck of the draw. And note that in some environments the results may be more deterministic. End of explanation def sortFunction(tuple): Construct the sort string (does not perform actual sorting) Args: tuple: (rating, MovieName) Returns: sortString: the value to sort with, 'rating MovieName' key = unicode('%.3f' % tuple[0]) value = tuple[1] return (key + ' ' + value) print oneRDD.sortBy(sortFunction, True).collect() print twoRDD.sortBy(sortFunction, True).collect() Explanation: Even though the two lists contain identical tuples, the difference in ordering sometimes yields a different ordering for the sorted RDD (try running the cell repeatedly and see if the results change or the assertion fails). If we only examined the first two elements of the RDD (e.g., using take(2)), then we would observe different answers - that is a really bad outcome as we want identical input data to always yield identical output. 
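To see the same pitfall without Spark, here is a small illustration using the tmp1 and tmp2 lists defined above and Python's built-in sorted() (which is stable): sorting by the key alone leaves tied keys in whatever order they happened to arrive, so the first two elements differ even though both lists contain exactly the same tuples.
tmp1_by_key = sorted(tmp1, key=lambda pair: pair[0])   # ties keep their arrival order
tmp2_by_key = sorted(tmp2, key=lambda pair: pair[0])
print tmp1_by_key[:2]   # [(1, u'alpha'), (1, u'epsilon')]
print tmp2_by_key[:2]   # [(1, u'delta'), (1, u'epsilon')]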
A better technique is to sort the RDD by both the key and value, which we can do by combining the key and value into a single string and then sorting on that string. Since the key is an integer and the value is a unicode string, we can use a function to combine them into a single unicode string (e.g., unicode('%.3f' % key) + ' ' + value) before sorting the RDD using sortBy(). End of explanation oneSorted1 = oneRDD.takeOrdered(oneRDD.count(),key=sortFunction) twoSorted1 = twoRDD.takeOrdered(twoRDD.count(),key=sortFunction) print 'one is %s' % oneSorted1 print 'two is %s' % twoSorted1 assert oneSorted1 == twoSorted1 Explanation: If we just want to look at the first few elements of the RDD in sorted order, we can use the takeOrdered method with the sortFunction we defined. End of explanation # TODO: Replace <FILL IN> with appropriate code # First, implement a helper function `getCountsAndAverages` using only Python def getCountsAndAverages(IDandRatingsTuple): Calculate average rating Args: IDandRatingsTuple: a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...)) Returns: tuple: a tuple of (MovieID, (number of ratings, averageRating)) aggr_result = (IDandRatingsTuple[0], (len(IDandRatingsTuple[1]), float(sum(IDandRatingsTuple[1])) / len(IDandRatingsTuple[1]))) return aggr_result # TEST Number of Ratings and Average Ratings for a Movie (1a) Test.assertEquals(getCountsAndAverages((1, (1, 2, 3, 4))), (1, (4, 2.5)), 'incorrect getCountsAndAverages() with integer list') Test.assertEquals(getCountsAndAverages((100, (10.0, 20.0, 30.0))), (100, (3, 20.0)), 'incorrect getCountsAndAverages() with float list') Test.assertEquals(getCountsAndAverages((110, xrange(20))), (110, (20, 9.5)), 'incorrect getCountsAndAverages() with xrange') Explanation: Part 1: Basic Recommendations One way to recommend movies is to always recommend the movies with the highest average rating. In this part, we will use Spark to find the name, number of ratings, and the average rating of the 20 movies with the highest average rating and more than 500 reviews. We want to filter our movies with high ratings but fewer than or equal to 500 reviews because movies with few reviews may not have broad appeal to everyone. (1a) Number of Ratings and Average Ratings for a Movie Using only Python, implement a helper function getCountsAndAverages() that takes a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...)) and returns a tuple of (MovieID, (number of ratings, averageRating)). 
For example, given the tuple (100, (10.0, 20.0, 30.0)), your function should return (100, (3, 20.0)) End of explanation # TODO: Replace <FILL IN> with appropriate code # From ratingsRDD with tuples of (UserID, MovieID, Rating) create an RDD with tuples of # the (MovieID, iterable of Ratings for that MovieID) movieIDsWithRatingsRDD = (ratingsRDD .map(lambda x:(x[1], x[2])) .groupByKey()) print 'movieIDsWithRatingsRDD: %s\n' % movieIDsWithRatingsRDD.take(3) # Using `movieIDsWithRatingsRDD`, compute the number of ratings and average rating for each movie to # yield tuples of the form (MovieID, (number of ratings, average rating)) movieIDsWithAvgRatingsRDD = movieIDsWithRatingsRDD.map(getCountsAndAverages) print 'movieIDsWithAvgRatingsRDD: %s\n' % movieIDsWithAvgRatingsRDD.take(3) # To `movieIDsWithAvgRatingsRDD`, apply RDD transformations that use `moviesRDD` to get the movie # names for `movieIDsWithAvgRatingsRDD`, yielding tuples of the form # (average rating, movie name, number of ratings) movieNameWithAvgRatingsRDD = (moviesRDD .join(movieIDsWithAvgRatingsRDD).map(lambda x:(x[1][1][1], x[1][0], x[1][1][0]))) print 'movieNameWithAvgRatingsRDD: %s\n' % movieNameWithAvgRatingsRDD.take(3) # TEST Movies with Highest Average Ratings (1b) Test.assertEquals(movieIDsWithRatingsRDD.count(), 3615, 'incorrect movieIDsWithRatingsRDD.count() (expected 3615)') movieIDsWithRatingsTakeOrdered = movieIDsWithRatingsRDD.takeOrdered(3) Test.assertTrue(movieIDsWithRatingsTakeOrdered[0][0] == 1 and len(list(movieIDsWithRatingsTakeOrdered[0][1])) == 993, 'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[0] (expected 993)') Test.assertTrue(movieIDsWithRatingsTakeOrdered[1][0] == 2 and len(list(movieIDsWithRatingsTakeOrdered[1][1])) == 332, 'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[1] (expected 332)') Test.assertTrue(movieIDsWithRatingsTakeOrdered[2][0] == 3 and len(list(movieIDsWithRatingsTakeOrdered[2][1])) == 299, 'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[2] (expected 299)') Test.assertEquals(movieIDsWithAvgRatingsRDD.count(), 3615, 'incorrect movieIDsWithAvgRatingsRDD.count() (expected 3615)') Test.assertEquals(movieIDsWithAvgRatingsRDD.takeOrdered(3), [(1, (993, 4.145015105740181)), (2, (332, 3.174698795180723)), (3, (299, 3.0468227424749164))], 'incorrect movieIDsWithAvgRatingsRDD.takeOrdered(3)') Test.assertEquals(movieNameWithAvgRatingsRDD.count(), 3615, 'incorrect movieNameWithAvgRatingsRDD.count() (expected 3615)') Test.assertEquals(movieNameWithAvgRatingsRDD.takeOrdered(3), [(1.0, u'Autopsy (Macchie Solari) (1975)', 1), (1.0, u'Better Living (1998)', 1), (1.0, u'Big Squeeze, The (1996)', 3)], 'incorrect movieNameWithAvgRatingsRDD.takeOrdered(3)') Explanation: (1b) Movies with Highest Average Ratings Now that we have a way to calculate the average ratings, we will use the getCountsAndAverages() helper function with Spark to determine movies with highest average ratings. The steps you should perform are: Recall that the ratingsRDD contains tuples of the form (UserID, MovieID, Rating). From ratingsRDD create an RDD with tuples of the form (MovieID, Python iterable of Ratings for that MovieID). This transformation will yield an RDD of the form: [(1, &lt;pyspark.resultiterable.ResultIterable object at 0x7f16d50e7c90&gt;), (2, &lt;pyspark.resultiterable.ResultIterable object at 0x7f16d50e79d0&gt;), (3, &lt;pyspark.resultiterable.ResultIterable object at 0x7f16d50e7610&gt;)]. 
Note that you will only need to perform two Spark transformations to do this step. Using movieIDsWithRatingsRDD and your getCountsAndAverages() helper function, compute the number of ratings and average rating for each movie to yield tuples of the form (MovieID, (number of ratings, average rating)). This transformation will yield an RDD of the form: [(1, (993, 4.145015105740181)), (2, (332, 3.174698795180723)), (3, (299, 3.0468227424749164))]. You can do this step with one Spark transformation We want to see movie names, instead of movie IDs. To moviesRDD, apply RDD transformations that use movieIDsWithAvgRatingsRDD to get the movie names for movieIDsWithAvgRatingsRDD, yielding tuples of the form (average rating, movie name, number of ratings). This set of transformations will yield an RDD of the form: [(1.0, u'Autopsy (Macchie Solari) (1975)', 1), (1.0, u'Better Living (1998)', 1), (1.0, u'Big Squeeze, The (1996)', 3)]. You will need to do two Spark transformations to complete this step: first use the moviesRDD with movieIDsWithAvgRatingsRDD to create a new RDD with Movie names matched to Movie IDs, then convert that RDD into the form of (average rating, movie name, number of ratings). These transformations will yield an RDD that looks like: [(3.6818181818181817, u'Happiest Millionaire, The (1967)', 22), (3.0468227424749164, u'Grumpier Old Men (1995)', 299), (2.882978723404255, u'Hocus Pocus (1993)', 94)] End of explanation # TODO: Replace <FILL IN> with appropriate code # Apply an RDD transformation to `movieNameWithAvgRatingsRDD` to limit the results to movies with # ratings from more than 500 people. We then use the `sortFunction()` helper function to sort by the # average rating to get the movies in order of their rating (highest rating first) movieLimitedAndSortedByRatingRDD = (movieNameWithAvgRatingsRDD .filter(lambda x: (x[2] > 500)) .sortBy(sortFunction, False)) print 'Movies with highest ratings: %s' % movieLimitedAndSortedByRatingRDD.take(20) # TEST Movies with Highest Average Ratings and more than 500 Reviews (1c) Test.assertEquals(movieLimitedAndSortedByRatingRDD.count(), 194, 'incorrect movieLimitedAndSortedByRatingRDD.count()') Test.assertEquals(movieLimitedAndSortedByRatingRDD.take(20), [(4.5349264705882355, u'Shawshank Redemption, The (1994)', 1088), (4.515798462852263, u"Schindler's List (1993)", 1171), (4.512893982808023, u'Godfather, The (1972)', 1047), (4.510460251046025, u'Raiders of the Lost Ark (1981)', 1195), (4.505415162454874, u'Usual Suspects, The (1995)', 831), (4.457256461232604, u'Rear Window (1954)', 503), (4.45468509984639, u'Dr. 
Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1963)', 651), (4.43953006219765, u'Star Wars: Episode IV - A New Hope (1977)', 1447), (4.4, u'Sixth Sense, The (1999)', 1110), (4.394285714285714, u'North by Northwest (1959)', 700), (4.379506641366224, u'Citizen Kane (1941)', 527), (4.375, u'Casablanca (1942)', 776), (4.363975155279503, u'Godfather: Part II, The (1974)', 805), (4.358816276202219, u"One Flew Over the Cuckoo's Nest (1975)", 811), (4.358173076923077, u'Silence of the Lambs, The (1991)', 1248), (4.335826477187734, u'Saving Private Ryan (1998)', 1337), (4.326241134751773, u'Chinatown (1974)', 564), (4.325383304940375, u'Life Is Beautiful (La Vita \ufffd bella) (1997)', 587), (4.324110671936759, u'Monty Python and the Holy Grail (1974)', 759), (4.3096, u'Matrix, The (1999)', 1250)], 'incorrect sortedByRatingRDD.take(20)') Explanation: (1c) Movies with Highest Average Ratings and more than 500 reviews Now that we have an RDD of the movies with highest averge ratings, we can use Spark to determine the 20 movies with highest average ratings and more than 500 reviews. Apply a single RDD transformation to movieNameWithAvgRatingsRDD to limit the results to movies with ratings from more than 500 people. We then use the sortFunction() helper function to sort by the average rating to get the movies in order of their rating (highest rating first). You will end up with an RDD of the form: [(4.5349264705882355, u'Shawshank Redemption, The (1994)', 1088), (4.515798462852263, u"Schindler's List (1993)", 1171), (4.512893982808023, u'Godfather, The (1972)', 1047)] End of explanation trainingRDD, validationRDD, testRDD = ratingsRDD.randomSplit([6, 2, 2], seed=0L) print 'Training: %s, validation: %s, test: %s\n' % (trainingRDD.count(), validationRDD.count(), testRDD.count()) print trainingRDD.take(3) print validationRDD.take(3) print testRDD.take(3) assert trainingRDD.count() == 292716 assert validationRDD.count() == 96902 assert testRDD.count() == 98032 assert trainingRDD.filter(lambda t: t == (1, 914, 3.0)).count() == 1 assert trainingRDD.filter(lambda t: t == (1, 2355, 5.0)).count() == 1 assert trainingRDD.filter(lambda t: t == (1, 595, 5.0)).count() == 1 assert validationRDD.filter(lambda t: t == (1, 1287, 5.0)).count() == 1 assert validationRDD.filter(lambda t: t == (1, 594, 4.0)).count() == 1 assert validationRDD.filter(lambda t: t == (1, 1270, 5.0)).count() == 1 assert testRDD.filter(lambda t: t == (1, 1193, 5.0)).count() == 1 assert testRDD.filter(lambda t: t == (1, 2398, 4.0)).count() == 1 assert testRDD.filter(lambda t: t == (1, 1035, 5.0)).count() == 1 Explanation: Using a threshold on the number of reviews is one way to improve the recommendations, but there are many other good ways to improve quality. For example, you could weight ratings by the number of ratings. Part 2: Collaborative Filtering In this course, you have learned about many of the basic transformations and actions that Spark allows us to apply to distributed datasets. Spark also exposes some higher level functionality; in particular, Machine Learning using a component of Spark called MLlib. In this part, you will learn how to use MLlib to make personalized movie recommendations using the movie data we have been analyzing. We are going to use a technique called collaborative filtering. Collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). 
The underlying assumption of the collaborative filtering approach is that if a person A has the same opinion as a person B on an issue, A is more likely to have B's opinion on a different issue x than to have the opinion on x of a person chosen randomly. You can read more about collaborative filtering here. The image below (from Wikipedia) shows an example of predicting of the user's rating using collaborative filtering. At first, people rate different items (like videos, images, games). After that, the system is making predictions about a user's rating for an item, which the user has not rated yet. These predictions are built upon the existing ratings of other users, who have similar ratings with the active user. For instance, in the image below the system has made a prediction, that the active user will not like the video. For movie recommendations, we start with a matrix whose entries are movie ratings by users (shown in red in the diagram below). Each column represents a user (shown in green) and each row represents a particular movie (shown in blue). Since not all users have rated all movies, we do not know all of the entries in this matrix, which is precisely why we need collaborative filtering. For each user, we have ratings for only a subset of the movies. With collaborative filtering, the idea is to approximate the ratings matrix by factorizing it as the product of two matrices: one that describes properties of each user (shown in green), and one that describes properties of each movie (shown in blue). We want to select these two matrices such that the error for the users/movie pairs where we know the correct ratings is minimized. The Alternating Least Squares algorithm does this by first randomly filling the users matrix with values and then optimizing the value of the movies such that the error is minimized. Then, it holds the movies matrix constrant and optimizes the value of the user's matrix. This alternation between which matrix to optimize is the reason for the "alternating" in the name. This optimization is what's being shown on the right in the image above. Given a fixed set of user factors (i.e., values in the users matrix), we use the known ratings to find the best values for the movie factors using the optimization written at the bottom of the figure. Then we "alternate" and pick the best user factors given fixed movie factors. For a simple example of what the users and movies matrices might look like, check out the videos from Lecture 8 or the slides from Lecture 8 (2a) Creating a Training Set Before we jump into using machine learning, we need to break up the ratingsRDD dataset into three pieces: A training set (RDD), which we will use to train models A validation set (RDD), which we will use to choose the best model A test set (RDD), which we will use for our experiments To randomly split the dataset into the multiple groups, we can use the pySpark randomSplit() transformation. randomSplit() takes a set of splits and and seed and returns multiple RDDs. 
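Before moving on, here is a tiny NumPy sketch, not part of the lab and with made-up numbers, that makes the factorization picture described above concrete: the ratings matrix is approximated by the product of a small users matrix and a small movies matrix, so every predicted rating is just the dot product of one user's factor row with one movie's factor row. ALS alternately holds one of the two matrices fixed and solves a least-squares problem for the other.
import numpy as np
# Illustrative only: 3 users, 4 movies, 2 latent factors (all values are invented)
user_factors = np.array([[0.9, 0.1],
                         [0.2, 0.8],
                         [0.5, 0.5]])          # one row per user
movie_factors = np.array([[4.0, 1.0],
                          [1.0, 5.0],
                          [3.0, 3.0],
                          [2.0, 4.0]])         # one row per movie
predicted_ratings = user_factors.dot(movie_factors.T)   # shape (3, 4): one entry per (user, movie)
print predicted_ratings[0, 1]   # rating of user 0 for movie 1: 0.9*1.0 + 0.1*5.0 = 1.4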
End of explanation # TODO: Replace <FILL IN> with appropriate code import math def computeError(predictedRDD, actualRDD): Compute the root mean squared error between predicted and actual Args: predictedRDD: predicted ratings for each movie and each user where each entry is in the form (UserID, MovieID, Rating) actualRDD: actual ratings where each entry is in the form (UserID, MovieID, Rating) Returns: RSME (float): computed RSME value # Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating) predictedReformattedRDD = predictedRDD.map(lambda x: ((x[0], x[1]), x[2])) # Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating) actualReformattedRDD = actualRDD.map(lambda x: ((x[0], x[1]), x[2])) # Compute the squared error for each matching entry (i.e., the same (User ID, Movie ID) in each # RDD) in the reformatted RDDs using RDD transformtions - do not use collect() squaredErrorsRDD = (predictedReformattedRDD .join(actualReformattedRDD) .map(lambda x: (x, (x[1][0] - x[1][1])**2))) # Compute the total squared error - do not use collect() totalError = squaredErrorsRDD.values().sum() # Count the number of entries for which you computed the total squared error numRatings = squaredErrorsRDD.count() # Using the total squared error and the number of entries, compute the RSME return math.sqrt(float(totalError)/numRatings) # sc.parallelize turns a Python list into a Spark RDD. testPredicted = sc.parallelize([ (1, 1, 5), (1, 2, 3), (1, 3, 4), (2, 1, 3), (2, 2, 2), (2, 3, 4)]) testActual = sc.parallelize([ (1, 2, 3), (1, 3, 5), (2, 1, 5), (2, 2, 1)]) testPredicted2 = sc.parallelize([ (2, 2, 5), (1, 2, 5)]) testError = computeError(testPredicted, testActual) print 'Error for test dataset (should be 1.22474487139): %s' % testError testError2 = computeError(testPredicted2, testActual) print 'Error for test dataset2 (should be 3.16227766017): %s' % testError2 testError3 = computeError(testActual, testActual) print 'Error for testActual dataset (should be 0.0): %s' % testError3 # TEST Root Mean Square Error (2b) Test.assertTrue(abs(testError - 1.22474487139) < 0.00000001, 'incorrect testError (expected 1.22474487139)') Test.assertTrue(abs(testError2 - 3.16227766017) < 0.00000001, 'incorrect testError2 result (expected 3.16227766017)') Test.assertTrue(abs(testError3 - 0.0) < 0.00000001, 'incorrect testActual result (expected 0.0)') Explanation: After splitting the dataset, your training set has about 293,000 entries and the validation and test sets each have about 97,000 entries (the exact number of entries in each dataset varies slightly due to the random nature of the randomSplit() transformation. (2b) Root Mean Square Error (RMSE) In the next part, you will generate a few different models, and will need a way to decide which model is best. We will use the Root Mean Square Error (RMSE) or Root Mean Square Deviation (RMSD) to compute the error of each model. RMSE is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSE serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. 
RMSE is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent. The RMSE is the square root of the average value of the square of (actual rating - predicted rating) for all users and movies for which we have the actual rating. Versions of Spark MLlib beginning with Spark 1.4 include a RegressionMetrics modiule that can be used to compute the RMSE. However, since we are using Spark 1.3.1, we will write our own function. Write a function to compute the sum of squared error given predictedRDD and actualRDD RDDs. Both RDDs consist of tuples of the form (UserID, MovieID, Rating) Given two ratings RDDs, x and y of size n, we define RSME as follows: $ RMSE = \sqrt{\frac{\sum_{i = 1}^{n} (x_i - y_i)^2}{n}}$ To calculate RSME, the steps you should perform are: Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating). For example, tuples like [((1, 1), 5), ((1, 2), 3), ((1, 3), 4), ((2, 1), 3), ((2, 2), 2), ((2, 3), 4)]. You can perform this step with a single Spark transformation. Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating). For example, tuples like [((1, 2), 3), ((1, 3), 5), ((2, 1), 5), ((2, 2), 1)]. You can perform this step with a single Spark transformation. Using only RDD transformations (you only need to perform two transformations), compute the squared error for each matching entry (i.e., the same (UserID, MovieID) in each RDD) in the reformatted RDDs - do not use collect() to perform this step. Note that not every (UserID, MovieID) pair will appear in both RDDs - if a pair does not appear in both RDDs, then it does not contribute to the RMSE. You will end up with an RDD with entries of the form $ (x_i - y_i)^2$ You might want to check out Python's math module to see how to compute these values Using an RDD action (but not collect()), compute the total squared error: $ SE = \sum_{i = 1}^{n} (x_i - y_i)^2 $ Compute n by using an RDD action (but not collect()), to count the number of pairs for which you computed the total squared error Using the total squared error and the number of pairs, compute the RSME. Make sure you compute this value as a float. Note: Your solution must only use transformations and actions on RDDs. Do not call collect() on either RDD. 
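As a plain-Python sanity check of the formula (done outside Spark, so the no-collect() rule does not apply), the expected value 1.22474487139 for the small test datasets above can be reproduced by hand: only the four (UserID, MovieID) pairs that occur in both testPredicted and testActual contribute.
import math
# Overlapping entries of testPredicted and testActual, keyed by (UserID, MovieID)
predicted = {(1, 2): 3, (1, 3): 4, (2, 1): 3, (2, 2): 2}
actual    = {(1, 2): 3, (1, 3): 5, (2, 1): 5, (2, 2): 1}
squared_errors = [(predicted[key] - actual[key]) ** 2 for key in predicted]   # the four squared differences sum to 6
print math.sqrt(float(sum(squared_errors)) / len(squared_errors))             # sqrt(6/4) = 1.22474487139...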
End of explanation # TODO: Replace <FILL IN> with appropriate code from pyspark.mllib.recommendation import ALS validationForPredictRDD = validationRDD.map(lambda x: (x[0], x[1])) seed = 5L iterations = 5 regularizationParameter = 0.1 ranks = [4, 8, 12] errors = [0, 0, 0] err = 0 tolerance = 0.03 minError = float('inf') bestRank = -1 bestIteration = -1 for rank in ranks: model = ALS.train(trainingRDD, rank, seed=seed, iterations=iterations, lambda_=regularizationParameter) predictedRatingsRDD = model.predictAll(validationForPredictRDD) error = computeError(predictedRatingsRDD, validationRDD) errors[err] = error err += 1 print 'For rank %s the RMSE is %s' % (rank, error) if error < minError: minError = error bestRank = rank print 'The best model was trained with rank %s' % bestRank # TEST Using ALS.train (2c) Test.assertEquals(trainingRDD.getNumPartitions(), 2, 'incorrect number of partitions for trainingRDD (expected 2)') Test.assertEquals(validationForPredictRDD.count(), 96902, 'incorrect size for validationForPredictRDD (expected 96902)') Test.assertEquals(validationForPredictRDD.filter(lambda t: t == (1, 1907)).count(), 1, 'incorrect content for validationForPredictRDD') Test.assertTrue(abs(errors[0] - 0.883710109497) < tolerance, 'incorrect errors[0]') Test.assertTrue(abs(errors[1] - 0.878486305621) < tolerance, 'incorrect errors[1]') Test.assertTrue(abs(errors[2] - 0.876832795659) < tolerance, 'incorrect errors[2]') Explanation: (2c) Using ALS.train() In this part, we will use the MLlib implementation of Alternating Least Squares, ALS.train(). ALS takes a training dataset (RDD) and several parameters that control the model creation process. To determine the best values for the parameters, we will use ALS to train several models, and then we will select the best model and use the parameters from that model in the rest of this lab exercise. The process we will use for determining the best model is as follows: Pick a set of model parameters. The most important parameter to ALS.train() is the rank, which is the number of rows in the Users matrix (green in the diagram above) or the number of columns in the Movies matrix (blue in the diagram above). (In general, a lower rank will mean higher error on the training dataset, but a high rank may lead to overfitting.) We will train models with ranks of 4, 8, and 12 using the trainingRDD dataset. Create a model using ALS.train(trainingRDD, rank, seed=seed, iterations=iterations, lambda_=regularizationParameter) with three parameters: an RDD consisting of tuples of the form (UserID, MovieID, rating) used to train the model, an integer rank (4, 8, or 12), a number of iterations to execute (we will use 5 for the iterations parameter), and a regularization coefficient (we will use 0.1 for the regularizationParameter). For the prediction step, create an input RDD, validationForPredictRDD, consisting of (UserID, MovieID) pairs that you extract from validationRDD. You will end up with an RDD of the form: [(1, 1287), (1, 594), (1, 1270)] Using the model and validationForPredictRDD, we can predict rating values by calling model.predictAll() with the validationForPredictRDD dataset, where model is the model we generated with ALS.train(). predictAll accepts an RDD with each entry in the format (userID, movieID) and outputs an RDD with each entry in the format (userID, movieID, rating). 
Evaluate the quality of the model by using the computeError() function you wrote in part (2b) to compute the error between the predicted ratings and the actual ratings in validationRDD. Which rank produces the best model, based on the RMSE with the validationRDD dataset? Note: It is likely that this operation will take a noticeable amount of time (around a minute in our VM); you can observe its progress on the Spark Web UI. Probably most of the time will be spent running your computeError() function, since, unlike the Spark ALS implementation (and the Spark 1.4 RegressionMetrics module), this does not use a fast linear algebra library and needs to run some Python code for all 100k entries. End of explanation # TODO: Replace <FILL IN> with appropriate code myModel = ALS.train(trainingRDD, bestRank, seed=seed, iterations=iterations, lambda_=regularizationParameter) testForPredictingRDD = testRDD.map(lambda x: (x[0], x[1])) predictedTestRDD = myModel.predictAll(testForPredictingRDD) testRMSE = computeError(testRDD, predictedTestRDD) print 'The model had a RMSE on the test set of %s' % testRMSE # TEST Testing Your Model (2d) Test.assertTrue(abs(testRMSE - 0.87809838344) < tolerance, 'incorrect testRMSE') Explanation: (2d) Testing Your Model So far, we used the trainingRDD and validationRDD datasets to select the best model. Since we used these two datasets to determine what model is best, we cannot use them to test how good the model is - otherwise we would be very vulnerable to overfitting. To decide how good our model is, we need to use the testRDD dataset. We will use the bestRank you determined in part (2c) to create a model for predicting the ratings for the test dataset and then we will compute the RMSE. The steps you should perform are: Train a model, using the trainingRDD, bestRank from part (2c), and the parameters you used in in part (2c): seed=seed, iterations=iterations, and lambda_=regularizationParameter - make sure you include all of the parameters. For the prediction step, create an input RDD, testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extract from testRDD. You will end up with an RDD of the form: [(1, 1287), (1, 594), (1, 1270)] Use myModel.predictAll() to predict rating values for the test dataset. For validation, use the testRDDand your computeError function to compute the RMSE between testRDD and the predictedTestRDD from the model. Evaluate the quality of the model by using the computeError() function you wrote in part (2b) to compute the error between the predicted ratings and the actual ratings in testRDD. End of explanation # TODO: Replace <FILL IN> with appropriate code trainingAvgRating = float(trainingRDD.map(lambda x: x[2]).sum()) / trainingRDD.count() print 'The average rating for movies in the training set is %s' % trainingAvgRating testForAvgRDD = testRDD.map(lambda x: (x[0], x[1], trainingAvgRating)) testAvgRMSE = computeError(testRDD, testForAvgRDD) print 'The RMSE on the average set is %s' % testAvgRMSE # TEST Comparing Your Model (2e) Test.assertTrue(abs(trainingAvgRating - 3.57409571052) < 0.000001, 'incorrect trainingAvgRating (expected 3.57409571052)') Test.assertTrue(abs(testAvgRMSE - 1.12036693569) < 0.000001, 'incorrect testAvgRMSE (expected 1.12036693569)') Explanation: (2e) Comparing Your Model Looking at the RMSE for the results predicted by the model versus the values in the test set is one way to evalute the quality of our model. 
Another way to evaluate the model is to evaluate the error from a test set where every rating is the average rating for the training set. The steps you should perform are: Use the trainingRDD to compute the average rating across all movies in that training dataset. Use the average rating that you just determined and the testRDD to create an RDD with entries of the form (userID, movieID, average rating). Use your computeError function to compute the RMSE between the testRDD validation RDD that you just created and the testForAvgRDD. End of explanation print 'Most rated movies:' print '(average rating, movie name, number of reviews)' for ratingsTuple in movieLimitedAndSortedByRatingRDD.take(50): print ratingsTuple #a = moviesRDD.join(movieIDsWithAvgRatingsRDD).map(lambda x: (x[0], x[1][0], x[1][1][1], x[1][1][0])).filter(lambda x: (x[3] > 1000 )).filter(lambda x: (x[2] < 4 )).take(50) #for i in a: # print i Explanation: You now have code to predict how users will rate movies! Part 3: Predictions for Yourself The ultimate goal of this lab exercise is to predict what movies to recommend to yourself. In order to do that, you will first need to add ratings for yourself to the ratingsRDD dataset. (3a) Your Movie Ratings To help you provide ratings for yourself, we have included the following code to list the names and movie IDs of the 50 highest-rated movies from movieLimitedAndSortedByRatingRDD which we created in part 1 the lab. End of explanation # TODO: Replace <FILL IN> with appropriate code myUserID = 0 # Note that the movie IDs are the *last* number on each line. A common error was to use the number of ratings as the movie ID. myRatedMovies = [ # The format of each line is (myUserID, movie ID, your rating) # For example, to give the movie "Star Wars: Episode IV - A New Hope (1977)" a five rating, you would add the following line: # (myUserID, 260, 5), (myUserID, 2115, 4.5), # Indiana Jones and the Temple of Doom (myUserID, 480, 4), # Jurassic Park (myUserID, 1377, 3.8), # Batman Returns (myUserID, 648, 4), # Mission Impossible (myUserID, 2571, 4.8), # Matrix (myUserID, 1198, 5), # Raiders of the Lost Ark (myUserID, 1580, 3.6), # Men In Black (myUserID, 1219, 4.5), # Psycho (myUserID, 589, 3.2), # Terminator 2 (myUserID, 1097, 4) # ET ] myRatingsRDD = sc.parallelize(myRatedMovies) print 'My movie ratings: %s' % myRatingsRDD.take(10) Explanation: The user ID 0 is unassigned, so we will use it for your ratings. We set the variable myUserID to 0 for you. Next, create a new RDD myRatingsRDD with your ratings for at least 10 movie ratings. Each entry should be formatted as (myUserID, movieID, rating) (i.e., each entry should be formatted in the same way as trainingRDD). As in the original dataset, ratings should be between 1 and 5 (inclusive). If you have not seen at least 10 of these movies, you can increase the parameter passed to take() in the above cell until there are 10 movies that you have seen (or you can also guess what your rating would be for movies you have not seen). 
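An optional plain-Python sanity check, added here for convenience and not part of the lab, that the ratings entered above are well formed before they are merged into the training data:
# Every entry should be (myUserID, movieID, rating) with a rating between 1 and 5
assert len(myRatedMovies) >= 10
for (userID, movieID, rating) in myRatedMovies:
    assert userID == myUserID and 1 <= rating <= 5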
End of explanation # TODO: Replace <FILL IN> with appropriate code trainingWithMyRatingsRDD = trainingRDD.union(myRatingsRDD) print ('The training dataset now has %s more entries than the original training dataset' % (trainingWithMyRatingsRDD.count() - trainingRDD.count())) assert (trainingWithMyRatingsRDD.count() - trainingRDD.count()) == myRatingsRDD.count() Explanation: (3b) Add Your Movies to Training Dataset Now that you have ratings for yourself, you need to add your ratings to the training dataset so that the model you train will incorporate your preferences. Spark's union() transformation combines two RDDs; use union() to create a new training dataset that includes your ratings and the data in the original training dataset. End of explanation # TODO: Replace <FILL IN> with appropriate code myRatingsModel = ALS.train(trainingWithMyRatingsRDD, bestRank, seed=seed, iterations=iterations, lambda_=regularizationParameter) Explanation: (3c) Train a Model with Your Ratings Now, train a model with your ratings added and the parameters you used in in part (2c): bestRank, seed=seed, iterations=iterations, and lambda_=regularizationParameter - make sure you include all of the parameters. End of explanation # TODO: Replace <FILL IN> with appropriate code predictedTestMyRatingsRDD = myRatingsModel.predictAll(testForPredictingRDD) testRMSEMyRatings = computeError(testRDD, predictedTestMyRatingsRDD) print 'The model had a RMSE on the test set of %s' % testRMSEMyRatings Explanation: (3d) Check RMSE for the New Model with Your Ratings Compute the RMSE for this new model on the test set. For the prediction step, we reuse testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extracted from testRDD. The RDD has the form: [(1, 1287), (1, 594), (1, 1270)] Use myRatingsModel.predictAll() to predict rating values for the testForPredictingRDD test dataset, set this as predictedTestMyRatingsRDD For validation, use the testRDDand your computeError function to compute the RMSE between testRDD and the predictedTestMyRatingsRDD from the model. End of explanation # TODO: Replace <FILL IN> with appropriate code # Use the Python list myRatedMovies to transform the moviesRDD into an RDD with entries that are pairs of the form (myUserID, Movie ID) and that does not contain any movies that you have rated. myUnratedMoviesRDD = (moviesRDD .filter(lambda x: x[0] not in [x[1] for x in myRatedMovies]) .map(lambda x: (myUserID, x[0]))) # Use the input RDD, myUnratedMoviesRDD, with myRatingsModel.predictAll() to predict your ratings for the movies predictedRatingsRDD = myRatingsModel.predictAll(myUnratedMoviesRDD) Explanation: (3e) Predict Your Ratings So far, we have only used the predictAll method to compute the error of the model. Here, use the predictAll to predict what ratings you would give to the movies that you did not already provide ratings for. The steps you should perform are: Use the Python list myRatedMovies to transform the moviesRDD into an RDD with entries that are pairs of the form (myUserID, Movie ID) and that does not contain any movies that you have rated. This transformation will yield an RDD of the form: [(0, 1), (0, 2), (0, 3), (0, 4)]. Note that you can do this step with one RDD transformation. For the prediction step, use the input RDD, myUnratedMoviesRDD, with myRatingsModel.predictAll() to predict your ratings for the movies. 
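Note that the filter above reuses the name x both for the moviesRDD entry and for the entries of myRatedMovies inside the list comprehension, which works but is easy to misread. An equivalent, more explicit formulation (myRatedMovieIDs is a helper name introduced only for this sketch) is:
# Equivalent to the cell above, with the rated movie IDs pulled out first
myRatedMovieIDs = set(movieID for (uid, movieID, rating) in myRatedMovies)
myUnratedMoviesRDD = (moviesRDD
                      .filter(lambda (movieID, title): movieID not in myRatedMovieIDs)
                      .map(lambda (movieID, title): (myUserID, movieID)))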
End of explanation # TODO: Replace <FILL IN> with appropriate code # Transform movieIDsWithAvgRatingsRDD from part (1b), which has the form (MovieID, (number of ratings, average rating)), into and RDD of the form (MovieID, number of ratings) movieCountsRDD = movieIDsWithAvgRatingsRDD.map(lambda x: (x[0], x[1][0])) # Transform predictedRatingsRDD into an RDD with entries that are pairs of the form (Movie ID, Predicted Rating) predictedRDD = predictedRatingsRDD.map(lambda x: (x[1], x[2])) # Use RDD transformations with predictedRDD and movieCountsRDD to yield an RDD with tuples of the form (Movie ID, (Predicted Rating, number of ratings)) predictedWithCountsRDD = (predictedRDD .join(movieCountsRDD)) # Use RDD transformations with PredictedWithCountsRDD and moviesRDD to yield an RDD with tuples of the form (Predicted Rating, Movie Name, number of ratings), for movies with more than 75 ratings ratingsWithNamesRDD = (predictedWithCountsRDD .filter(lambda x: x[1][1] > 75) .join(moviesRDD) .map(lambda x: (x[1][0][0], x[1][1], x[1][0][1]))) predictedHighestRatedMovies = ratingsWithNamesRDD.takeOrdered(20, key=lambda x: -x[0]) print ('My highest rated movies as predicted (for movies with more than 75 reviews):\n%s' % '\n'.join(map(str, predictedHighestRatedMovies))) Explanation: (3f) Predict Your Ratings We have our predicted ratings. Now we can print out the 25 movies with the highest predicted ratings. The steps you should perform are: From Parts (1b) and (1c), we know that we should look at movies with a reasonable number of reviews (e.g., more than 75 reviews). You can experiment with a lower threshold, but fewer ratings for a movie may yield higher prediction errors. Transform movieIDsWithAvgRatingsRDD from Part (1b), which has the form (MovieID, (number of ratings, average rating)), into an RDD of the form (MovieID, number of ratings): [(2, 332), (4, 71), (6, 442)] We want to see movie names, instead of movie IDs. Transform predictedRatingsRDD into an RDD with entries that are pairs of the form (Movie ID, Predicted Rating): [(3456, -0.5501005376936687), (1080, 1.5885892024487962), (320, -3.7952255522487865)] Use RDD transformations with predictedRDD and movieCountsRDD to yield an RDD with tuples of the form (Movie ID, (Predicted Rating, number of ratings)): [(2050, (0.6694097486155939, 44)), (10, (5.29762541533513, 418)), (2060, (0.5055259373841172, 97))] Use RDD transformations with predictedWithCountsRDD and moviesRDD to yield an RDD with tuples of the form (Predicted Rating, Movie Name, number of ratings), for movies with more than 75 ratings. For example: [(7.983121900375243, u'Under Siege (1992)'), (7.9769201864261285, u'Fifth Element, The (1997)')] End of explanation
12,895
Given the following text description, write Python code to implement the functionality described below step by step Description: Logbook Blocking An implementation of the network-based blocking mechanism Step1: Graph Structured Pairwise Comparisons By implementing a graph where person entity nodes are a tuple of (name, email) pairs (an immutable data structure that is hashable), we get structure right off the bat by direct comparison. The number of pairwise comparisons is computed as Step2: Edge structured comparisons only yield nodes so long as the intersection of the two nodes' neighborhoods is empty (that is, two entities can't have an action to the same detail). Step3: Other structural blocking can then be applied. Fuzziness With some blocking in the data structure, we can now begin to do pairwise comparisons. Here, I'll use the fuzzywuzzy tool to produce comparisons for the annotator such that the mean of the fuzzy score for both email and name meets a certain threshold. Step4: Domain Counts
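A rough sketch of the fuzzy-matching idea described above: average the fuzzywuzzy similarity scores for name and email and compare the mean against a threshold. The real predicate used in the code below is fuzzblock from logbook.compare, whose implementation is not shown here; the function name mean_fuzz_score, the example strings, and the threshold of 70 are illustrative assumptions only.
from fuzzywuzzy import fuzz

def mean_fuzz_score(person_a, person_b):
    # person_a and person_b are (name, email) tuples; fuzz.ratio returns an int in 0-100
    name_score = fuzz.ratio(person_a[0], person_b[0])
    email_score = fuzz.ratio(person_a[1], person_b[1])
    return (name_score + email_score) / 2.0

# Keep a candidate pair only if the mean score clears the (illustrative) threshold of 70
print mean_fuzz_score(('Jane Doe', 'jane.doe@example.com'),
                      ('Jane A. Doe', 'jdoe@example.com')) >= 70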
Python Code: %matplotlib inline import os import sys import random import networkx as nx ## Paths from the file PROJECT = os.path.join(os.getcwd(), "..") FIXTURES = os.path.join(PROJECT, "fixtures") DATASET = os.path.join(FIXTURES, 'activity.csv') ## Append the path for the logbook utilities sys.path.append(PROJECT) from logbook.reader import LogReader from logbook.graph import * from logbook.compare import * # Actions to exclude from our graph. # exclude = None exclude=['Subscribed to DDL blog', 'Signed up for new course notifications'] # Load dataset and generate graph dataset = LogReader(DATASET, exclude=exclude) G = graph_from_triples(dataset) print info(G) draw_activity_graph(G, connected=True, iterations=100) Explanation: Logbook Blocking An implementation of the network-based blocking mechanism End of explanation print "Pairwise Comparisons: {}\n\n".format(pairwise_comparisons(G, True)) combos = list(pairwise_comparisons(G, entity='person')) random.shuffle(combos) for idx, pair in enumerate(combos): print "Pair {}:".format(idx + 1) print " {}\n -- vs --\n {}".format(*pair) print if idx >= 4: break Explanation: Graph Structured Pairwise Comparisons By implementing a graph where person entity nodes are a tuple of (name, email) pairs (an immutable data structure that is hashable), we get structure right off the bat by direct comparison. The number of pariwise comparisons is computed as: $$c = \frac{n(n-1)}{2}$$ Where n is the number of nodes in the graph. The graph can be further filtered on entity type as well. Here are a random sample of 5 pairwise node to node comparisons: End of explanation print "Edge Blocked Pairwise Comparisons: {}\n\n".format(edge_blocked_comparisons(G, True)) combos = list(edge_blocked_comparisons(G, entity='person')) random.shuffle(combos) for idx, pair in enumerate(combos): print "Pair {}:".format(idx + 1) print " {}".format(pair[0]) for detail in G.neighbors(pair[0]): print " {}".format(detail) print " -- vs --" print " {}".format(pair[1]) for detail in G.neighbors(pair[1]): print " {}".format(detail) print if idx >= 4: break Explanation: Edge structured comparisons only yield nodes so long as the itersection of the node's neighborhoods is empty (that is, two entities can't have an action to the same detail). End of explanation combos = list(edge_blocked_comparisons(G, entity='person')) combos = filter(lambda pair: fuzzblock(*pair), combos) print "Fuzz/Edge Blocked Pairwise Comparisons: {}\n\n".format(len(combos)) random.shuffle(combos) for idx, pair in enumerate(combos): print "Pair {}:".format(idx + 1) print " {}".format(pair[0]) for detail in G.neighbors(pair[0]): print " {}".format(detail) print " -- vs --" print " {}".format(pair[1]) for detail in G.neighbors(pair[1]): print " {}".format(detail) print if idx >= 100: break Explanation: Other structural blocking can then be applied. Fuzziness With some blocking in the data structure, we can now begin to do pairwise comparisons. Here, I'll use the fuzzywuzzy tool to produce comparisons for the annotator such that the mean of the fuzzy score for both email and name meets a certain threshold. End of explanation from collections import Counter def count_email_domains(): counter = Counter() for triple in dataset: email = triple.entity.email domain = email.split("@")[-1] counter[domain] += 1 return counter domains = count_email_domains() for domain in domains.most_common(): print "{}: {}".format(*domain) Explanation: Domain Counts End of explanation
12,896
Given the following text description, write Python code to implement the functionality described below step by step Description: Example for using the Pvlib model The Pvlib model can be used to determine the feed-in of a photovoltaic module using the pvlib. The pvlib is a python library for simulating the performance of photovoltaic energy systems. For more information check out the documentation of the pvlib. The following example shows you how to use the Pvlib model. Set up Photovoltaic object Get weather data Calculate feed-in Set up Photovoltaic object <a class="anchor" id="photovoltaic_object"></a> To calculate the feed-in using the Pvlib model you have to set up a Photovoltaic object. You can import it as follows Step1: To set up a Photovoltaic system you have to provide all PV system parameters required by the PVlib model. The required parameters can be looked up in the model's documentation. For the Pvlib model these are the azimuth and tilt of the module as well as the albedo or surface type. Furthermore, the name of the module and inverter are needed to obtain technical parameters from the provided module and inverter databases. For an overview of the provided modules and inverters you can use the function get_power_plant_data(). Step2: Now you can set up a PV system to calculate feed-in for, using for example the first module and converter in the databases Step3: Optional power plant parameters Besides the required PV system parameters you can provide optional parameters such as the number of modules per string, etc. Optional PV system parameters are specific to the used model and how to find out about the possible optional parameters is documented in the model's feedin method under power_plant_parameters. In case of the Pvlib model see here. Step4: Get weather data <a class="anchor" id="weather_data"></a> Besides setting up your PV system you have to provide weather data the feed-in is calculated with. This example uses open_FRED weather data. For more information on the data and download see the load_open_fred_weather_data Notebook. Step5: Calculate feed-in <a class="anchor" id="feedin"></a> The feed-in can be calculated by calling the Photovoltaic's feedin method with the weather data. For the Pvlib model you also have to provide the location of the PV system. Step6: Scaled feed-in The PV feed-in can also be automatically scaled by the PV system's area or peak power. The following example shows how to scale feed-in by area. Step7: To scale by the peak power use scaling=peak_power. The PV system area and peak power can be retrieved as follows Step8: Feed-in for PV system with optional parameters In the following example the feed-in is calculated for the PV system with optional system parameters (with 2 modules per string, instead of 1, which is the default). It was chosen to demonstrate the importantance of choosing a suitable converter. Step9: As the above plot shows the feed-in is cut off at 250 W. That is because it is limited by the inverter. So while the area is as expected two times greater as for the PV system without optional parameters, the peak power is only around 1.2 times higher. Step10: If you are only interested in the modules power output without the inverter losses you can have the Pvlib model return the DC feed-in. This is done as follows Step11: Feed-in with optional model parameters In order to change the default calculation configurations of the Pvlib model to e.g. 
choose a different model to calculate losses or the solar position, you can pass further parameters to the feedin method. An overview of the further parameters that may be provided is documented under the feedin method's kwargs.
Python Code: from feedinlib import Photovoltaic # suppress warnings import warnings warnings.filterwarnings("ignore") Explanation: Example for using the Pvlib model The Pvlib model can be used to determine the feed-in of a photovoltaic module using the pvlib. The pvlib is a python library for simulating the performance of photovoltaic energy systems. For more information check out the documentation of the pvlib. The following example shows you how to use the Pvlib model. Set up Photovoltaic object Get weather data Calculate feed-in Set up Photovoltaic object <a class="anchor" id="photovoltaic_object"></a> To calculate the feed-in using the Pvlib model you have to set up a Photovoltaic object. You can import it as follows: End of explanation from feedinlib import get_power_plant_data # get modules module_df = get_power_plant_data(dataset='sandiamod') # print the first four modules module_df.iloc[:, 1:5] # get inverter data inverter_df = get_power_plant_data(dataset='cecinverter') # print the first four inverters inverter_df.iloc[:, 1:5] Explanation: To set up a Photovoltaic system you have to provide all PV system parameters required by the PVlib model. The required parameters can be looked up in the model's documentation. For the Pvlib model these are the azimuth and tilt of the module as well as the albedo or surface type. Furthermore, the name of the module and inverter are needed to obtain technical parameters from the provided module and inverter databases. For an overview of the provided modules and inverters you can use the function get_power_plant_data(). End of explanation system_data = { 'module_name': 'Advent_Solar_Ventura_210___2008_', # module name as in database 'inverter_name': 'ABB__MICRO_0_25_I_OUTD_US_208__208V_', # inverter name as in database 'azimuth': 180, 'tilt': 30, 'albedo': 0.2} pv_system = Photovoltaic(**system_data) Explanation: Now you can set up a PV system to calculate feed-in for, using for example the first module and converter in the databases: End of explanation system_data['modules_per_string'] = 2 pv_system_with_optional_parameters = Photovoltaic(**system_data) Explanation: Optional power plant parameters Besides the required PV system parameters you can provide optional parameters such as the number of modules per string, etc. Optional PV system parameters are specific to the used model and how to find out about the possible optional parameters is documented in the model's feedin method under power_plant_parameters. In case of the Pvlib model see here. End of explanation from feedinlib.open_FRED import Weather from feedinlib.open_FRED import defaultdb from shapely.geometry import Point # specify latitude and longitude of PV system location lat = 52.4 lon = 13.5 location = Point(lon, lat) # download weather data for June 2017 open_FRED_weather_data = Weather( start='2017-06-01', stop='2017-07-01', locations=[location], variables="pvlib", **defaultdb()) # get weather data in pvlib format weather_df = open_FRED_weather_data.df(location=location, lib="pvlib") # plot irradiance import matplotlib.pyplot as plt %matplotlib inline weather_df.loc[:, ['dhi', 'ghi']].plot(title='Irradiance') plt.xlabel('Time') plt.ylabel('Irradiance in $W/m^2$'); Explanation: Get weather data <a class="anchor" id="weather_data"></a> Besides setting up your PV system you have to provide weather data the feed-in is calculated with. This example uses open_FRED weather data. For more information on the data and download see the load_open_fred_weather_data Notebook. 
End of explanation feedin = pv_system.feedin( weather=weather_df, location=(lat, lon)) # plot calculated feed-in import matplotlib.pyplot as plt %matplotlib inline feedin.plot(title='PV feed-in') plt.xlabel('Time') plt.ylabel('Power in W'); Explanation: Calculate feed-in <a class="anchor" id="feedin"></a> The feed-in can be calculated by calling the Photovoltaic's feedin method with the weather data. For the Pvlib model you also have to provide the location of the PV system. End of explanation feedin_scaled = pv_system.feedin( weather=weather_df, location=(lat, lon), scaling='area') Explanation: Scaled feed-in The PV feed-in can also be automatically scaled by the PV system's area or peak power. The following example shows how to scale feed-in by area. End of explanation pv_system.area pv_system.peak_power # plot calculated feed-in import matplotlib.pyplot as plt %matplotlib inline feedin_scaled.plot(title='Scaled PV feed-in') plt.xlabel('Time') plt.ylabel('Power in W'); Explanation: To scale by the peak power use scaling=peak_power. The PV system area and peak power can be retrieved as follows: End of explanation feedin_ac = pv_system_with_optional_parameters.feedin( weather=weather_df, location=(lat, lon)) # plot calculated feed-in import matplotlib.pyplot as plt %matplotlib inline feedin_ac.plot(title='PV feed-in') plt.xlabel('Time') plt.ylabel('Power in W'); Explanation: Feed-in for PV system with optional parameters In the following example the feed-in is calculated for the PV system with optional system parameters (with 2 modules per string, instead of 1, which is the default). It was chosen to demonstrate the importantance of choosing a suitable converter. End of explanation pv_system_with_optional_parameters.peak_power / pv_system.peak_power pv_system_with_optional_parameters.area / pv_system.area Explanation: As the above plot shows the feed-in is cut off at 250 W. That is because it is limited by the inverter. So while the area is as expected two times greater as for the PV system without optional parameters, the peak power is only around 1.2 times higher. End of explanation feedin_dc = pv_system_with_optional_parameters.feedin( weather=weather_df, location=(lat, lon), mode='dc') # plot calculated feed-in import matplotlib.pyplot as plt %matplotlib inline feedin_dc.plot(label='DC', title='AC and DC PV feed-in', legend=True) feedin_ac.plot(label='AC', legend=True) plt.xlabel('Time') plt.ylabel('Power in W'); Explanation: If you are only interested in the modules power output without the inverter losses you can have the Pvlib model return the DC feed-in. This is done as follows: End of explanation feedin_no_loss = pv_system.feedin( weather=weather_df, location=(lat, lon), aoi_model='no_loss') # plot calculated feed-in import matplotlib.pyplot as plt %matplotlib inline feedin_no_loss.iloc[0:96].plot(label='aoi_model = no_loss', legend=True) feedin.iloc[0:96].plot(label='aoi_model = sapm_aoi_loss', legend=True) plt.xlabel('Time') plt.ylabel('Power in W'); Explanation: Feed-in with optional model parameters In order to change the default calculation configurations of the Pvlib model to e.g. choose a different model to calculate losses or the solar position you can pass further parameters to the feedin method. An overview of which further parameters may be provided is documented under the feedin method's kwargs. End of explanation
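Because the feed-in series and the plant attributes are plain pandas/Python objects, derived quantities can be computed directly from them. The following sketch reuses only the feedin series and pv_system.peak_power defined above together with the usual energy / (peak power × duration) definition of a capacity factor; the time step is read from the index rather than hard-coded, and the variable names are new:

```python
# Post-processing sketch using only objects computed above
# (assumes a regular time index on the feed-in series, power in W).
step_h = (feedin.index[1] - feedin.index[0]).total_seconds() / 3600.0
normalized_feedin = feedin / pv_system.peak_power            # feed-in per W installed
energy_wh = (feedin * step_h).sum()                          # energy over the period in Wh
capacity_factor = energy_wh / (pv_system.peak_power * step_h * len(feedin))
print(capacity_factor)
```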
12,897
Given the following text description, write Python code to implement the functionality described below step by step Description: Binomial test for a proportion Step1: Shaken, not stirred James Bond says he prefers his martini shaken, not stirred. Let us run a blind test Step2: One-sided alternative hypothesis $H_1$ Step3: Two-sided alternative hypothesis $H_1$
Python Code: import numpy as np from scipy import stats %pylab inline Explanation: Binomial test for a proportion End of explanation n = 16 F_H0 = stats.binom(n, 0.5) x = np.linspace(0,16,17) pylab.bar(x, F_H0.pmf(x), align = 'center') xlim(-0.5, 16.5) pylab.show() Explanation: Shaken, not stirred James Bond says he prefers his martini shaken, not stirred. Let us run a blind test: $n$ times we offer him a pair of drinks and find out which of the two he prefers. We get: * sample: a binary vector of length $n$, where 1 means James Bond preferred the shaken drink and 0 the stirred one; * hypothesis $H_0$: James Bond cannot tell the two drinks apart and chooses at random; * statistic $T$: the number of ones in the sample. If the null hypothesis is true and James Bond really chooses at random, then each of the $2^n$ binary vectors of length $n$ is equally likely. We could enumerate all such vectors, compute the value of the statistic $T$ on each of them, and obtain its null distribution that way. In this case, however, that step can be skipped: we are dealing with a sample consisting of 0s and 1s, that is, with the Bernoulli distribution $Ber(p)$. The null hypothesis of choosing at random corresponds to the value $p=\frac1{2}$, that is, in every experiment the probability of choosing the shaken martini equals $\frac1{2}$. The sum of $n$ identically distributed Bernoulli random variables with parameter $p$ has the binomial distribution $Bin(n, p)$. Consequently, the null distribution of the statistic $T$ is $Bin\left(n, \frac1{2}\right)$. Let $n=16.$ End of explanation pylab.bar(x, F_H0.pmf(x), align = 'center') pylab.bar(np.linspace(12,16,5), F_H0.pmf(np.linspace(12,16,5)), align = 'center', color='red') xlim(-0.5, 16.5) pylab.show() stats.binom_test(12, 16, 0.5, alternative = 'greater') pylab.bar(x, F_H0.pmf(x), align = 'center') pylab.bar(np.linspace(11,16,6), F_H0.pmf(np.linspace(11,16,6)), align = 'center', color='red') xlim(-0.5, 16.5) pylab.show() stats.binom_test(11, 16, 0.5, alternative = 'greater') Explanation: One-sided alternative hypothesis $H_1$: James Bond prefers shaken martini. Under this alternative, large values of the statistic are more likely; when computing the achieved significance level we sum the heights of the bars in the right tail of the distribution. End of explanation pylab.bar(x, F_H0.pmf(x), align = 'center') pylab.bar(np.linspace(12,16,5), F_H0.pmf(np.linspace(12,16,5)), align = 'center', color='red') pylab.bar(np.linspace(0,4,5), F_H0.pmf(np.linspace(0,4,5)), align = 'center', color='red') xlim(-0.5, 16.5) pylab.show() stats.binom_test(12, 16, 0.5, alternative = 'two-sided') pylab.bar(x, F_H0.pmf(x), align = 'center') pylab.bar(np.linspace(13,16,4), F_H0.pmf(np.linspace(13,16,4)), align = 'center', color='red') pylab.bar(np.linspace(0,3,4), F_H0.pmf(np.linspace(0,3,4)), align = 'center', color='red') xlim(-0.5, 16.5) pylab.show() stats.binom_test(13, 16, 0.5, alternative = 'two-sided') Explanation: Two-sided alternative hypothesis $H_1$: James Bond prefers some particular kind of martini. Under this alternative, very large and very small values of the statistic are more likely; when computing the achieved significance level we sum the heights of the bars in both the right and the left tails of the distribution. End of explanation
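The p-values above can be cross-checked by summing the binomial probability mass directly, which is exactly what the shaded bars illustrate. A small verification sketch — it reuses n, F_H0 and np from the cells above, and only the new variable names are introduced here:

```python
# One-sided p-value for T = 12 out of 16 under H0 ~ Bin(16, 0.5):
# P(T >= 12) as an explicit right-tail sum, and via the survival function.
p_manual = F_H0.pmf(np.arange(12, n + 1)).sum()
p_sf = F_H0.sf(11)            # P(T > 11) = P(T >= 12) for a discrete distribution
# For the symmetric p = 0.5 case the two-sided p-value is just twice the
# one-sided one, which coincides with stats.binom_test(12, 16, 0.5, 'two-sided').
p_two_sided = 2 * p_manual
print(p_manual, p_sf, p_two_sided)
```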
12,898
Given the following text description, write Python code to implement the functionality described below step by step Description: Mit Datenanalysen Probleme in der Entwicklung aufzeigen <small>Java User Group Hessen, Kassel, 25.04.2019</small> <b>Markus Harrer</b>, Software Development Analyst Twitter Step1: Was haben wir hier eigentlich? Step2: <b>1</b> DataFrame (~ programmierbares Excel-Arbeitsblatt), <b>4</b> Series (= Spalten), <b>5665947</b> Rows (= Einträge) III. Bereinigen Daten sind oft nicht so, wie man sie braucht Datentypen passen teilweise noch nicht Wir wandeln die Zeitstempel um Step3: Wir berechnen uns das Alter jeder Codezeilenänderung Step4: IV. Anreichern Vorhandenen Daten noch zusätzlich mit anderen Datenquellen verschneiden Aber auch Step5: <br/> <small><i>String-Operationen...die dauern. Gibt aber diverse Optimierungsmöglichkeiten!</i></small> V. Aggregieren Vorhandene Daten sind oft zu viel für manuelle Sichtung Neue Einsichten über Problem aber oft auf hoher Flugbahn möglich Wir fassen nach Komponenten zusammen und arbeiten mit der jeweils jüngsten Zeilenänderung weiter Step6: IV. Visualisieren Grafische Darstellung geben Analysen den letzten Schliff Probleme können Außenstehenden visuell dargestellt besser kommuniziert werden Wir bauen ein Diagramm mit Min-Alter pro Komponente
Python Code: import pandas as pd log = pd.read_csv("../dataset/linux_blame_log.csv.gz") log.head() Explanation: Mit Datenanalysen Probleme in der Entwicklung aufzeigen <small>Java User Group Hessen, Kassel, 25.04.2019</small> <b>Markus Harrer</b>, Software Development Analyst Twitter: @feststelltaste Blog: feststelltaste.de <img src="../resources/innoq_logo.jpg" width=20% height="20%" align="right"/> Das Problem mit den Problemen in der Softwareentwicklung Der typische Software-Problemverlauf <img src="../resources/schuld1.png" width=85% align="center"/> Der typische Software-Problemverlauf <img src="../resources/schuld2.png" width=85% align="center"/> Der typische Software-Problemverlauf <img src="../resources/schuld3.png" width=85% align="center"/> Der typische Software-Problemverlauf <img src="../resources/schuld4.png" width=85% align="center"/> Das eigentliche Problem <img src="../resources/kombar0.png" width=95% align="center"/> <img src="../resources/kombar1.png" width=95% align="center"/> <img src="../resources/kombar2.png" width=95% align="center"/> <img src="../resources/kombar3.png" width=95% align="center"/> Wie Daten analysieren? <div align="center"> <h3>The <span class="yellow">ultimate</span>, <span class="green">super</span> <span class="red">awesome</span><br/> Quality Management Dashboard</h3> <img src="../resources/Sonarqube-nemo-dashboard_small.png"> </div> Häufigkeit von Fragen vs. deren Risiken <img src="../resources/risk1.png" width=95% align="center"/> Häufigkeit von Fragen vs. deren Risiken <img src="../resources/risk2.png" width=95% align="center"/> Häufigkeit von Fragen vs. deren Risiken <img src="../resources/risk3.png" width=95% align="center"/> Häufigkeit von Fragen vs. deren Risiken <img src="../resources/risk4.png" width=95% align="center"/> Es braucht zusätzliche, situations-spezifische Datenanalysen! Wie machen es andere Disziplinen? Data Science! Was ist Data Science? "<b><span class="green">Datenanalysen</span></b> auf nem Mac." <br/> <br/> <div align="right"><small>Frei nach https://twitter.com/cdixon/status/428914681911070720</small></div> <div align="center"> <img src ="../resources/trollface.jpg" align="center"/> </div> Meine Definition von Data Science Was bedeutet "data"? "Without data you‘re just another person with an opinion." <br/> <div align="right"><small>W. Edwards Deming</small></div> <b>=> Belastbare Erkenntnisse mittels <span class="green">Fakten</span> liefern</b> Was bedeutet "science"? "The aim of science is to seek the simplest explanations of complex facts." <br/> <div align="right"><small>Albert Einstein</small></div> <b>=> Neue Erkenntnisse <span class="green">verständlich</span> herausarbeiten</b> Was ist ein Data Scientist? "Jemand, der mehr Ahnung von Statistik<br/> &nbsp;&nbsp;hat als ein <b><span class="green">Softwareentwickler</span></b><br/> &nbsp;&nbsp;und mehr Ahnung von <b><span class="green">Softwareentwicklung</span></b><br/> &nbsp;&nbsp;als ein Statistiker." <br/> <br/> <div align="right"><small>Nach zu https://twitter.com/cdixon/status/428914681911070720</small></div> <b>Data Science:</b> Perfect <b><span class="green">match</span></b>! Was an Daten analysieren? Softwaredaten! 
Alles was aus der Entwicklung und dem Betrieb der Softwaresysteme so anfällt: * Statische Daten * Laufzeitdaten * Chronologische Daten * Daten aus der Software-Community Zwischenfazit Data Science <span class="red">&#10084;</span> Software Data <span style="color:white">= <b>Software Analytics</b></span> Zwischenfazit Data Science <span class="red">&#10084;</span> Software Data = <b>Software Analytics</b> Definition Software Analytics "Software Analytics is analytics on software data for managers and <b class="green">software engineers</b> with the aim of empowering software development individuals and teams to gain and share insight from their data to <b>make better decisions</b>." <br/> <div align="right"><small>Tim Menzies and Thomas Zimmermann</small></div> Wie Software Analytics umsetzen? Der Leitgedanke [(Daten + Code + Ergebnis) * gedanklichen Schritt] + komplette Automatisierung Schlüsselelement: Computational notebooks Der Notebook-Ansatz <br/> <div align="center"><img src="../resources/notebook_approach.jpg"></div> Technologie (1/2) Klassischer Data-Science-Werkzeugkasten * Jupyter * Python 3 * pandas * matplotlib Technologie (2/2) Jupyter funktioniert und integriert sich auch mit * Cypher / Neo4j / jQAssistant * JVM-Sprachen über beakerx / Tablesaw * bash * ... Beispiele für gezielte Datenanalysen Performance-Bottlenecks Verborgene Teamkommunikation Architektur-/Design-/Code-Smells <b>No-Go-Areas in Altanwendungen</b> ... Praktischer Teil Erstes Hands-On No-Go-Areas in Altanwendungen Der Patient Linux Betriebsystem-Kernel Hat verschiedene Treiberkomponenten Fast ausschließlich in C geschrieben Entwickelt von über 800.000 Entwicklern I. Idee (1/2) <b>Fragestellung</b> * Gibt es besonders alte Komponenten, wo sich niemand mehr auskennt (No-Go-Areas)? <b>Heuristik</b> * Wann waren die letzten Änderungen innerhalb einer Komponente? I. Idee (2/2) Umsetzung Werkzeuge: Jupyter, Python, pandas, matplotlib Datenquelle: Git Blame Log Meta-Ziel: Grundfunktionen anhand eines einfachen Show-Cases sehen. Ausgangsdaten: <b>Git Blame Log</b> <div align="center"> <img src ="../resources/linux_1.gif" align="center"/> </div> Ausgangsdaten: <b>Git Blame Log</b> <div align="center"> <img src ="../resources/linux_2.gif" align="center"/> </div> Ausgangsdaten: <b>Git Blame Log</b> <div align="center"> <img src ="../resources/linux_3.gif" align="center"/> </div> II. Datenbeschaffung Wir laden Git Blame Daten aus einer CSV-Datei End of explanation log.info() Explanation: Was haben wir hier eigentlich? End of explanation log['timestamp'] = pd.to_datetime(log['timestamp']) log.head() Explanation: <b>1</b> DataFrame (~ programmierbares Excel-Arbeitsblatt), <b>4</b> Series (= Spalten), <b>5665947</b> Rows (= Einträge) III. Bereinigen Daten sind oft nicht so, wie man sie braucht Datentypen passen teilweise noch nicht Wir wandeln die Zeitstempel um End of explanation log['age'] = pd.Timestamp('today') - log['timestamp'] log.head() Explanation: Wir berechnen uns das Alter jeder Codezeilenänderung End of explanation log['component'] = log['path'].str.split("/").str[:2].str.join(":") log.head() Explanation: IV. 
Anreichern Vorhandenen Daten noch zusätzlich mit anderen Datenquellen verschneiden Aber auch: Teile aus vorhanden Daten extrahieren => Dadurch werden mehrere <b>Perspektiven</b> auf ein Problem möglich Wir ordnen jeder Zeilenänderung einer Komponente zu End of explanation age_per_component = log.groupby("component")['age'].min().sort_values() age_per_component.head() Explanation: <br/> <small><i>String-Operationen...die dauern. Gibt aber diverse Optimierungsmöglichkeiten!</i></small> V. Aggregieren Vorhandene Daten sind oft zu viel für manuelle Sichtung Neue Einsichten über Problem aber oft auf hoher Flugbahn möglich Wir fassen nach Komponenten zusammen und arbeiten mit der jeweils jüngsten Zeilenänderung weiter End of explanation age_per_component.plot.bar(figsize=[15,5]); Explanation: IV. Visualisieren Grafische Darstellung geben Analysen den letzten Schliff Probleme können Außenstehenden visuell dargestellt besser kommuniziert werden Wir bauen ein Diagramm mit Min-Alter pro Komponente End of explanation
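One practical detail about the bar chart above: age_per_component is a Timedelta series, and depending on the pandas and matplotlib versions such a series may be rendered with a nanosecond y-axis. A small, optional tweak — it only reuses the age_per_component series defined above — converts the minimum age to days first so the axis stays readable:

```python
# Convert the Timedelta minimum age to days before plotting
# (assumes age_per_component as computed above).
age_in_days = age_per_component.dt.days
age_in_days.plot.bar(figsize=[15, 5], title="Minimum age per component in days");
```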
12,899
Given the following text description, write Python code to implement the functionality described below step by step Description: OIQ Exam Question 2 Question from OIQ Technical Exam, obviously meant to be solved using moment distribution, but here we see how easy it is using slope deflection instead. This version users a newer version of 'sdutil' that computes end-shears as well as moments. Step1: Solution 1 Actually, solution 1 does not use any special library modules at all (except for 'sympy'). Step2: Solution 2 Here it is again, using the SD utilities Step3: End moments and shears Step4: Reactions Step5: Equilibrium check
Python Code: from IPython import display display.SVG('oiq-frame-1.svg') Explanation: OIQ Exam Question 2 Question from OIQ Technical Exam, obviously meant to be solved using moment distribution, but here we see how easy it is using slope deflection instead. This version users a newer version of 'sdutil' that computes end-shears as well as moments. End of explanation from sympy import * var('EI ta tb tc td') Mab = (EI/4)*(4*ta + 2*tb) - 24*4**2/12 Mba = (EI/4)*(2*ta + 4*tb) + 24*4**2/12 Mbc = (EI/2/4)*(4*tb + 2*tc) Mcb = (EI/2/4)*(2*tb + 4*tc) Mcd = (EI/4)*(4*tc + 2*td) - 72*4/8 Mdc = (EI/4)*(2*tc + 4*td) + 72*4/8 eqns = [ta, Mba+Mbc, Mcb+Mcd, Mdc] soln = solve( eqns, [ta, tb, tc, td] ) soln [m.subs(soln).n(4) for m in [Mab,Mba,Mbc,Mcb,Mcd,Mdc]] Rd = (Mcd + 72*2)/4 Ra = (-(Mab + Mba) + 24*4*2)/4 Rc = 24*4 + 72 - Ra - Rd Hd = (Mbc + 72*2 - Rd*4)/4 [r.subs(soln).n(4) for r in [Ra,Rc,Rd,Hd]] Explanation: Solution 1 Actually, solution 1 does not use any special library modules at all (except for 'sympy'). End of explanation from sympy import * init_printing(use_latex='mathjax') from sdutil2 import SD,FEF var('EI theta_a theta_b theta_c theta_d') M_ab, M_ba, V_ab, V_ba = SD(4,EI,theta_a,theta_b) + FEF.udl(4,24) M_bc, M_cb, V_bc, V_cb = SD(4,EI/2,theta_b,theta_c) M_cd, M_dc, V_cd, V_dc = SD(4,EI,theta_c,theta_d) + FEF.p(4,72,2) eqns = [theta_a, M_ba+M_bc, M_cb+M_cd, M_dc] soln = solve( eqns, [theta_a, theta_b, theta_c, theta_d] ) soln Explanation: Solution 2 Here it is again, using the SD utilities: End of explanation [m.subs(soln).n(4) for m in [M_ab,M_ba,M_bc,M_cb,M_cd,M_dc]] [m.subs(soln).n(4) for m in [V_ab,V_ba,V_bc,V_cb,V_cd,V_dc]] Explanation: End moments and shears: End of explanation R_d = -V_dc R_c = V_cd - V_ba R_a = V_ab H_d = -V_cb H_a = V_cb M_a = M_ab [r.subs(soln).n(4) for r in [R_a,R_c,R_d,H_d,H_a,M_a]] Explanation: Reactions: End of explanation ## sum vertical forces (R_a + R_c + R_d - 24*4 - 72).subs(soln) ## sum horizontal forces (H_a+H_d).subs(soln) ## sum moments about a (M_a + 24*4*2 - R_c*4 + 72*6 - R_d*8 - H_d*4).subs(soln) Explanation: Equilibrium check End of explanation
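The SD and FEF helpers imported from sdutil2 are used above without being shown. Judging from the hand-written expressions in Solution 1, they correspond to the classical slope-deflection end-moment equations plus standard fixed-end moments; the sketch below restates those formulas under that assumption. It is not the sdutil2 source — in particular the sign conventions and the end-shear computation of the real module may differ:

```python
# Slope-deflection end moments for a prismatic member of length L and stiffness
# EI with end rotations theta_n, theta_f and no chord rotation, plus fixed-end
# moments for a UDL and a point load. These reproduce the expressions written
# out explicitly in Solution 1 (e.g. Mab = (EI/4)*(4*ta + 2*tb) - 24*4**2/12).
def sd_moments(L, EI, theta_n, theta_f):
    M_nf = (2*EI/L)*(2*theta_n + theta_f)   # near-end moment
    M_fn = (2*EI/L)*(theta_n + 2*theta_f)   # far-end moment
    return M_nf, M_fn

def fef_udl(L, w):
    # fixed-end moments for a uniformly distributed load w: -wL^2/12, +wL^2/12
    return -w*L**2/12, w*L**2/12

def fef_point(L, P, a):
    # fixed-end moments for a point load P at distance a from the near end
    b = L - a
    return -P*a*b**2/L**2, P*a**2*b/L**2
```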