markdown stringlengths 0–1.02M | code stringlengths 0–832k | output stringlengths 0–1.02M | license stringlengths 3–36 | path stringlengths 6–265 | repo_name stringlengths 6–127
---|---|---|---|---|---
2.2 Bernoulli MLP as decoder In this case, let $p_\theta(x|z)$ be a multivariate Bernoulli whose probabilities are computed from $z$ with a feedforward neural network with a single hidden layer:\begin{align}\log p(x|z) &= \sum_{i=1}^D x_i\log y_i + (1-x_i)\log (1-y_i) \\\textit{ where } y &= f_\sigma(W_5\tanh (W_4z+b_4)+b_5)\end{align}where $f_\sigma(\cdot)$ is the elementwise sigmoid activation function and $\{W_4,W_5,b_4,b_5\}$ are the weights and biases of the decoder MLP. A Bernoulli likelihood is suitable for this type of binary-valued data, but you can easily extend it to other likelihood types by passing the argument `likelihood` to the `VAE` class; see section 4 for details. | # define fully connected and tanh activation layers for the decoder
decoder_z = mx.sym.FullyConnected(data=z, name="decoder_z",num_hidden=400)
act_z = mx.sym.Activation(data=decoder_z, act_type="tanh",name="activation_z")
# define the output layer with sigmoid activation function, where the dimension is equal to the input dimension
decoder_x = mx.sym.FullyConnected(data=act_z, name="decoder_x",num_hidden=features)
y = mx.sym.Activation(data=decoder_x, act_type="sigmoid",name='activation_x') | _____no_output_____ | Apache-2.0 | example/vae/VAE_example.ipynb | dkuspawono/incubator-mxnet |
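To make the Bernoulli likelihood concrete, here is the same log-probability evaluated for one tiny example in plain NumPy (an illustrative sketch; the values of `x` and `y` below are made up, with `y` standing in for the decoder output above):

```python
import numpy as np

x = np.array([1.0, 0.0, 1.0])   # a 3-pixel binary "image"
y = np.array([0.9, 0.2, 0.7])   # decoder probabilities for each pixel

# log p(x|z) under the multivariate Bernoulli
log_p = np.sum(x * np.log(y) + (1 - x) * np.log(1 - y))
print(log_p)
```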
2.3 Joint Loss Function for the Encoder and the Decoder The variational lower bound, also called the evidence lower bound (ELBO), can be estimated as:\begin{align}\mathcal{L}(\theta,\phi;x^{(i)}) \approx \frac{1}{2}\sum_{j=1}^{J}\left(1+\log ((\sigma_j^{(i)})^2)-(\mu_j^{(i)})^2-(\sigma_j^{(i)})^2\right) + \log p_\theta(x^{(i)}|z^{(i)})\end{align}where $J$ is the dimensionality of $z$. The first term is the negative KL divergence of the approximate posterior from the prior, and the second term is an expected negative reconstruction error. We would like to maximize this lower bound, so we can define the loss to be $-\mathcal{L}$ (minus the ELBO) for MXNet to minimize. | # define the objective loss function that needs to be minimized
KL = 0.5 * mx.symbol.sum(1 + logvar - pow(mu, 2) - mx.symbol.exp(logvar), axis=1)
loss = -mx.symbol.sum(mx.symbol.broadcast_mul(loss_label, mx.symbol.log(y))
                      + mx.symbol.broadcast_mul(1 - loss_label, mx.symbol.log(1 - y)), axis=1) - KL
output = mx.symbol.MakeLoss(sum(loss),name='loss') | _____no_output_____ | Apache-2.0 | example/vae/VAE_example.ipynb | dkuspawono/incubator-mxnet |
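As a quick sanity check on the closed-form KL term above, one can compare it against a Monte Carlo estimate in plain NumPy (a minimal sketch with made-up posterior parameters, separate from the MXNet graph):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, logvar = 0.5, np.log(0.8)            # illustrative q(z|x) parameters
sigma = np.exp(0.5 * logvar)

# Closed form: -KL(q || p) for a Gaussian q and standard-normal prior p
neg_kl_closed = 0.5 * (1 + logvar - mu**2 - np.exp(logvar))

# Monte Carlo estimate of E_q[log p(z) - log q(z)]
z = mu + sigma * rng.standard_normal(200_000)
log_q = -0.5 * (np.log(2 * np.pi) + logvar + (z - mu) ** 2 / sigma**2)
log_p = -0.5 * (np.log(2 * np.pi) + z**2)
neg_kl_mc = np.mean(log_p - log_q)

print(neg_kl_closed, neg_kl_mc)  # the two values should agree closely
```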
3. Training the model Now, we can define the model and train it. First we will initialize the weights and the biases to be Gaussian(0, 0.01), and then use stochastic gradient descent for optimization. To warm-start the training, one may also initialize with pre-trained parameters `arg_params` using `init=mx.initializer.Load(arg_params)`. To save intermediate results, we can optionally use `epoch_end_callback = mx.callback.do_checkpoint(model_prefix, 1)`, which saves the parameters to the path given by `model_prefix` with a period of 1 epoch. To assess the performance, we output $-\mathcal{L}$ (minus the ELBO) after each epoch, with the command `eval_metric = 'Loss'` defined above. We will also plot the training loss for mini-batches by accessing the log, saving it to a list, and then passing it to the argument `batch_end_callback`. | # set up the log
nd_iter.reset()
logging.getLogger().setLevel(logging.DEBUG)
# define a callback function to trace the training loss
def log_to_list(period, lst):
def _callback(param):
"""The checkpoint function."""
if param.nbatch % period == 0:
name, value = param.eval_metric.get()
lst.append(value)
return _callback
# define the model
model = mx.mod.Module(
symbol = output ,
data_names=['data'],
label_names = ['loss_label'])
# training the model, save training loss as a list.
training_loss=list()
# initilize the parameters for training using Normal.
init = mx.init.Normal(0.01)
model.fit(nd_iter, # train data
initializer=init,
# if eval_data is supplied, test loss will also be reported
# eval_data = nd_iter_test,
optimizer='sgd', # use SGD to train
optimizer_params={'learning_rate':1e-3,'wd':1e-2},
# save parameters for each epoch if model_prefix is supplied
          epoch_end_callback = None if model_prefix is None else mx.callback.do_checkpoint(model_prefix, 1),
          batch_end_callback = log_to_list(N // batch_size, training_loss),
num_epoch=100,
eval_metric = 'Loss')
ELBO = [-loss for loss in training_loss]
plt.plot(ELBO)
plt.ylabel('ELBO');plt.xlabel('epoch');plt.title("training curve for mini batches")
plt.show() | _____no_output_____ | Apache-2.0 | example/vae/VAE_example.ipynb | dkuspawono/incubator-mxnet |
As expected, the ELBO is monotonically increasing over epochs, and we reproduced the results given in the paper [Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114/). Now we can extract/load the parameters and then feed the network forward to calculate $y$, which is the reconstructed image; we can also calculate the ELBO for the test set. | arg_params = model.get_params()[0]
# if the parameters were saved, they can be loaded with the `load_checkpoint` method, e.g. at the 100th epoch
# sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, 100)
# assert sym.tojson() == output.tojson()
e = y.bind(mx.cpu(), {'data': nd_iter_test.data[0][1],
'encoder_h_weight': arg_params['encoder_h_weight'],
'encoder_h_bias': arg_params['encoder_h_bias'],
'mu_weight': arg_params['mu_weight'],
'mu_bias': arg_params['mu_bias'],
'logvar_weight':arg_params['logvar_weight'],
'logvar_bias':arg_params['logvar_bias'],
'decoder_z_weight':arg_params['decoder_z_weight'],
'decoder_z_bias':arg_params['decoder_z_bias'],
'decoder_x_weight':arg_params['decoder_x_weight'],
'decoder_x_bias':arg_params['decoder_x_bias'],
'loss_label':label})
x_fit = e.forward()
x_construction = x_fit[0].asnumpy()
# visualize reconstructed images on the test set
f, ((ax1, ax2, ax3, ax4)) = plt.subplots(1,4, sharex='col', sharey='row',figsize=(12,3))
ax1.imshow(np.reshape(image_test[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax1.set_title('True image')
ax2.imshow(np.reshape(x_construction[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax2.set_title('Learned image')
ax3.imshow(np.reshape(x_construction[999,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax3.set_title('Learned image')
ax4.imshow(np.reshape(x_construction[9999,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax4.set_title('Learned image')
plt.show()
# calculate the ELBO (minus the loss) on the test set
metric = mx.metric.Loss()
model.score(nd_iter_test, metric) | _____no_output_____ | Apache-2.0 | example/vae/VAE_example.ipynb | dkuspawono/incubator-mxnet |
4. All together: MXNet-based class VAE | from VAE import VAE | _____no_output_____ | Apache-2.0 | example/vae/VAE_example.ipynb | dkuspawono/incubator-mxnet |
One can directly call the class `VAE` to do the training:```VAE(n_latent=5,num_hidden_ecoder=400,num_hidden_decoder=400,x_train=None,x_valid=None,batch_size=100,learning_rate=0.001,weight_decay=0.01,num_epoch=100,optimizer='sgd',model_prefix=None,initializer = mx.init.Normal(0.01),likelihood=Bernoulli)```The outputs are the learned model and the training loss. | # can initialize weights and biases with the learned parameters as follows:
# init = mx.initializer.Load(params)
# call the VAE; the output contains the learned model and the training loss
out = VAE(n_latent=2, x_train=image, x_valid=None, num_epoch=200)
# encode test images to obtain mu and logvar which are used for sampling
[mu,logvar] = VAE.encoder(out,image_test)
# sample in the latent space
z = VAE.sampler(mu,logvar)
# decode from the latent space to obtain reconstructed images
x_construction = VAE.decoder(out,z)
f, ((ax1, ax2, ax3, ax4)) = plt.subplots(1,4, sharex='col', sharey='row',figsize=(12,3))
ax1.imshow(np.reshape(image_test[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax1.set_title('True image')
ax2.imshow(np.reshape(x_construction[0,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax2.set_title('Learned image')
ax3.imshow(np.reshape(x_construction[999,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax3.set_title('Learned image')
ax4.imshow(np.reshape(x_construction[9999,:],(28,28)), interpolation='nearest', cmap=cm.Greys)
ax4.set_title('Learned image')
plt.show()
z1 = z[:,0]
z2 = z[:,1]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(z1,z2,'ko')
plt.title("latent space")
#np.where((z1>3) & (z2<2) & (z2>0))
#select the points from the latent space
a_vec = [2,5,7,789,25,9993]
for i in range(len(a_vec)):
ax.plot(z1[a_vec[i]],z2[a_vec[i]],'ro')
ax.annotate('z%d' %i, xy=(z1[a_vec[i]],z2[a_vec[i]]),
xytext=(z1[a_vec[i]],z2[a_vec[i]]),color = 'r',fontsize=15)
f, axes = plt.subplots(1, 6, sharex='col', sharey='row', figsize=(12, 2.5))
for i in range(len(a_vec)):
    axes[i].imshow(np.reshape(x_construction[a_vec[i], :], (28, 28)), interpolation='nearest', cmap=cm.Greys)
    axes[i].set_title('z%d' % i)
plt.show() | _____no_output_____ | Apache-2.0 | example/vae/VAE_example.ipynb | dkuspawono/incubator-mxnet |
Quick Exercises 1 1. Verify that |+⟩ and |−⟩ are in fact eigenstates of the X-gate. First we need to define the |+⟩ and |−⟩ states in 2 different qubits. I will initialize the first qubit to 1 and the second to 0. | qCirc1 = QuantumCircuit(2, 2)
oneInit = [0, 1]
qCirc1.initialize(oneInit, 0)
zeroInit = [1, 0]
qCirc1.initialize(zeroInit, 1)
qCirc1.draw('mpl')
qCirc1.h(0)
qCirc1.h(1)
qCirc1.draw('mpl') | _____no_output_____ | MIT | Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb | kj3moraes/MyQiskitProgramming |
Now the first qubit is in the |−⟩ state and the second qubit is in the |+⟩ state. We can now apply the X gates. If the |+⟩ and |−⟩ states really are eigenstates, then the X-gate changes them by at most a global phase, so reapplying the Hadamard gates and measuring should give back |1⟩ for the first qubit and |0⟩ for the second. | qCirc1.x([0,1])
qCirc1.h([0,1])
qCirc1.draw('mpl')
qCirc1.measure(0, 0)
qCirc1.measure(1, 1)
qCirc1.draw('mpl') | _____no_output_____ | MIT | Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb | kj3moraes/MyQiskitProgramming |
Simulating this circuit on QASM | nativeSim = Aer.get_backend('qasm_simulator')
result = execute(qCirc1, backend = nativeSim).result()
plot_histogram(result.get_counts(qCirc1)) | _____no_output_____ | MIT | Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb | kj3moraes/MyQiskitProgramming |
This proves the property we were looking for 4. Find the eigenstates of the Y-gate, and their co-ordinates on the Bloch sphere. The eigenstates of the Y gate will simply be the y-basis vectors because they are unaffected by a rotation of $\pi$ about the y-axis. These are $$ \frac{1}{\sqrt{2}}(|0⟩ + i|1⟩) \text{ and } \frac{1}{\sqrt{2}}(|0⟩ - i|1⟩) $$ which sit at the poles of the y-axis of the Bloch sphere, with coordinates (0, 1, 0) and (0, −1, 0) respectively. Experiment with Pauli Y Gate The following experiment tests how the z-basis vectors |0⟩ and |1⟩ behave under a Y-gate | qCircTest = QuantumCircuit(2, 2);
qCircTest.initialize(zeroInit, 0);
qCircTest.initialize(oneInit, 1)
qCircTest.draw('mpl')
qCircTest.y([0,1])
qCircTest.measure([0, 1], [0, 1])
qCircTest.draw('mpl')
result = execute(qCircTest, backend = nativeSim, shots = 1024).result()
plot_histogram(result.get_counts(qCircTest)) | _____no_output_____ | MIT | Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb | kj3moraes/MyQiskitProgramming |
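Independently of the circuit, a short NumPy check (a sketch) confirms that the two y-basis states really are eigenvectors of Y with eigenvalues ±1:

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
plus_i = np.array([1, 1j]) / np.sqrt(2)    # (|0> + i|1>)/sqrt(2)
minus_i = np.array([1, -1j]) / np.sqrt(2)  # (|0> - i|1>)/sqrt(2)

print(np.allclose(Y @ plus_i, plus_i))     # True: eigenvalue +1
print(np.allclose(Y @ minus_i, -minus_i))  # True: eigenvalue -1
```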
Quick Exercises 2 2. Show that applying the sequence of gates HZH to any qubit state is equivalent to applying an X-gate. Here we will define another circuit with 1 qubit and show that HZH transforms |0⟩ to |1⟩ and |1⟩ to |0⟩. | qCirc2 = QuantumCircuit(1, 1)
qCirc2.initialize(zeroInit, 0)
qCirc2.draw('mpl')
qCirc2.h(0)
qCirc2.z(0)
qCirc2.h(0)
qCirc2.measure(0, 0)
qCirc2.draw('mpl') | _____no_output_____ | MIT | Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb | kj3moraes/MyQiskitProgramming |
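Before simulating, note that the identity can also be verified directly by matrix multiplication (a quick worked check):

$$ HZH = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 0 & 2 \\ 2 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = X $$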
Simulating this circuit on QASM | result2 = execute(qCirc2, backend = nativeSim, shots = 1024).result()
plot_histogram(result2.get_counts(qCirc2))
qCirc2 = QuantumCircuit(1, 1)
qCirc2.initialize(oneInit, 0)
qCirc2.draw('mpl')
qCirc2.h(0)
qCirc2.z(0)
qCirc2.h(0)
qCirc2.measure(0, 0)
qCirc2.draw('mpl')
result2 = execute(qCirc2, backend = nativeSim, shots = 1024).result()
plot_histogram(result2.get_counts(qCirc2)) | _____no_output_____ | MIT | Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb | kj3moraes/MyQiskitProgramming |
3. Find a combination of X, Z and H-gates that is equivalent to a Y-gate (ignoring global phase). Up to a global phase, the Y-gate acts on the |0⟩ and |1⟩ basis vectors just like an X-gate, as we saw before. Since we can ignore the global phase, we can take $-i$ outside. This yields the matrix$$ \sigma_Y = -i \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} $$By sheer trial and error, the remaining matrix turns out to be ZX. | qCirc23 = QuantumCircuit(1, 1)
qCirc23.initialize([1/sqrt(2), 1j/sqrt(2)], 0)
qCirc23.draw('mpl')
svSim = Aer.get_backend('statevector_simulator')
svResult = execute(qCirc23, backend = svSim, shots = 1024).result()
stateVectorBefore = svResult.get_statevector()
plot_bloch_multivector(stateVectorBefore)
qCirc23.z(0)
qCirc23.x(0)
qCirc23.draw('mpl')
svResult = execute(qCirc23, backend = svSim, shots = 1024).result()
stateVectorAfter = svResult.get_statevector()
plot_bloch_multivector(stateVectorAfter)
qCirc23.measure(0, 0)
result = execute(qCirc23, backend = nativeSim, shots = 1024).result()
plot_histogram(result.get_counts(qCirc23)) | _____no_output_____ | MIT | Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb | kj3moraes/MyQiskitProgramming |
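To see why this works, multiply the matrices directly (a quick worked check):

$$ ZX = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = iY $$

so ZX equals Y up to the global phase $i$. (The circuit above applies Z first and then X, i.e. the unitary $XZ = -iY$, which is likewise Y up to a global phase.)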
Quick Exercises 3 1. If we initialise our qubit in the state |+⟩, what is the probability of measuring it in state |−⟩? The probability would be 0, since the amplitude is $\langle -|+\rangle = \frac{1}{2}(1-1) = 0$. 2. Use Qiskit to display the probability of measuring a |0⟩ qubit in the states |+⟩ and |−⟩ | qCirc32 = QuantumCircuit(1, 1)
qCirc32.initialize([1/sqrt(2), 1/sqrt(2)], 0)
qCirc32.measure(0, 0)
qCirc32.draw('mpl')
result = execute(qCirc32, backend = nativeSim, shots = 1024).result()
plot_histogram(result.get_counts(qCirc32)) | _____no_output_____ | MIT | Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb | kj3moraes/MyQiskitProgramming |
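The roughly 50/50 histogram above agrees with the hand calculation (a short worked check):

$$ P(+) = |\langle +|0\rangle|^2 = \left|\tfrac{1}{\sqrt{2}}\right|^2 = \tfrac{1}{2}, \qquad P(-) = |\langle -|0\rangle|^2 = \tfrac{1}{2} $$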
3. Try to create a function that measures in the Y-basis. To do this we would need to 1. Convert the Y-basis into the Z-basis (rotate about the x-axis by $\pi/2$, implemented here with $S^\dagger$ followed by $H$)2. Use Qiskit measurement to measure in the Z-basis3. Convert back into the Y-basis (rotate about the x-axis by $-\pi/2$, i.e. $H$ followed by $S$) | def measure_y(qc, qubit, cbit):
# STEP 1
qc.sdg(qubit)
qc.h(qubit)
# STEP 2
qc.measure(qubit, cbit)
# STEP 3
qc.h(qubit)
qc.s(qubit)
circuit = QuantumCircuit(1, 1)
outwardInit = [1/sqrt(2), -1j/sqrt(2)]
circuit.initialize(outwardInit, 0)
measure_y(circuit, 0 ,0)
circuit.draw('mpl')
result = execute(circuit, backend = nativeSim, shots = 1024).result()
plot_histogram(result.get_counts(circuit))
svResult = execute(circuit, backend = svSim, shots = 1024).result()
plot_bloch_multivector(svResult.get_statevector()) | _____no_output_____ | MIT | Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb | kj3moraes/MyQiskitProgramming |
TensorFlow Tutorial Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: - Initialize variables- Start your own session- Train algorithms - Implement a Neural Network. Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. Updates If you were working on the notebook before this update...* The current notebook is version "v3b".* You can find your original work saved in the notebook with the previous version name (it may be either "TensorFlow Tutorial version 3" or "TensorFlow Tutorial version 3a"). * To view the file directory, click on the "Coursera" icon in the top left of this notebook. List of updates* forward_propagation instruction now says 'A1' instead of 'a1' in the formula for Z2; and is updated to say 'A2' instead of 'Z2' in the formula for Z3.* create_placeholders instructions refer to the data type "tf.float32" instead of float.* in the model function, the x axis of the plot now says "iterations (per fives)" instead of "iterations (per tens)"* In the linear_function, comments remind students to create the variables in the order suggested by the starter code. The comments are updated to reflect this order.* The test of the cost function now creates the logits without passing them through a sigmoid function (since the cost function will include the sigmoid in the built-in tensorflow function).* In the 'model' function, the minibatch_cost is now divided by minibatch_size (instead of num_minibatches).* Updated print statements and 'expected output' sections that are used to check functions, for easier visual comparison. 1 - Exploring the Tensorflow Library To start, you will import the library: | import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1) | _____no_output_____ | MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$ | y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss | 9
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
Writing and running programs in TensorFlow has the following steps:1. Create Tensors (variables) that are not yet executed/evaluated. 2. Write operations between those Tensors.3. Initialize your Tensors. 4. Create a Session. 5. Run the Session. This will run the operations you'd written above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.Now let us look at an easy example. Run the cell below: | a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c) | Tensor("Mul:0", shape=(), dtype=int32)
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it. | sess = tf.Session()
print(sess.run(c)) | 20
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session. | # Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close() | 6
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. 1.1 - Linear function Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):```pythonX = tf.constant(np.random.randn(3,1), name = "X")```You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication- tf.add(..., ...) to do an addition- np.random.randn(...) to initialize randomly | # GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes X to be a random tensor of shape (3,1)
Initializes W to be a random tensor of shape (4,3)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
"""
Note, to ensure that the "random" numbers generated match the expected results,
please create the variables in the order given in the starting code below.
(Do not re-arrange the order).
"""
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.constant(np.random.randn(4,3), name = "W")
b = tf.constant(np.random.randn(4,1), name = "b")
Y = tf.add(tf.matmul(W,X),b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = \n" + str(linear_function())) | result =
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
*** Expected Output ***: ```result = [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]]``` 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. **Exercise**: Implement the sigmoid function below. You should use the following: - `tf.placeholder(tf.float32, name = "...")`- `tf.sigmoid(...)`- `sess.run(..., feed_dict = {x: z})`Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:**```pythonsess = tf.Session() Run the variables initialization (if needed), run the operationsresult = sess.run(..., feed_dict = {...})sess.close() Close the session```**Method 2:**```pythonwith tf.Session() as sess: run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) This takes care of closing the session for you :)``` | # GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name = "x")
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
# Run session and call the output "result"
with tf.Session() as sess:
result = sess.run(sigmoid, feed_dict = {x:z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12))) | sigmoid(0) = 0.5
sigmoid(12) = 0.999994
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
*** Expected Output ***: **sigmoid(0)**: 0.5 **sigmoid(12)**: 0.999994 **To summarize, you now know how to**:1. Create placeholders2. Specify the computation graph corresponding to operations you want to compute3. Create the session4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. 1.3 - Computing the Cost You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$you can do it in one line of code in tensorflow!**Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small$$ | # GRADED FUNCTION: cost
def cost(logits, labels):
"""
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
"""
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32,name="z")
y = tf.placeholder(tf.float32,name="y")
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z,labels=y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost,feed_dict = {z:logits, y:labels})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = np.array([0.2,0.4,0.7,0.9])
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost)) | cost = [ 0.79813886 0.91301525 0.40318605 0.34115386]
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
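As a quick numerical cross-check, the same values can be reproduced by applying the cross-entropy formula directly in plain numpy (an illustrative sketch, separate from the graded function):

```python
import numpy as np

def sigmoid_cross_entropy(z, y):
    # -[y*log(sigmoid(z)) + (1-y)*log(1-sigmoid(z))], computed per element
    a = 1 / (1 + np.exp(-z))
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

print(sigmoid_cross_entropy(np.array([0.2, 0.4, 0.7, 0.9]),
                            np.array([0, 0, 1, 1])))
# ~[0.798 0.913 0.403 0.341], matching the TensorFlow result above
```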
**Expected Output**: ```cost = [ 0.79813886 0.91301525 0.40318605 0.34115386]``` 1.4 - Using One Hot encodings Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows: This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this. | # GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C,name='C')
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels,C,axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = \n" + str(one_hot)) | one_hot =
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
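For comparison, those "few lines of code" in plain numpy might look like this (an illustrative sketch, not part of the graded exercise):

```python
import numpy as np

def one_hot_numpy(labels, C):
    # Build a (C, m) matrix with a 1 at row labels[j] for each column j
    one_hot = np.zeros((C, len(labels)))
    one_hot[labels, np.arange(len(labels))] = 1
    return one_hot

print(one_hot_numpy(np.array([1, 2, 3, 0, 2, 1]), 4))
```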
**Expected Output**: ```one_hot = [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]]``` 1.5 - Initialize with zeros and ones Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. **Exercise:** Implement the function below to take in a shape and to return an array of ones with that shape. - tf.ones(shape) | # GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3]))) | ones = [ 1. 1. 1.]
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
**Expected Output:** **ones**: [ 1. 1. 1.] 2 - Building your first neural network in tensorflow In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:- Create the computation graph- Run the graph Let's delve into the problem you'd like to solve! 2.0 - Problem statement: SIGNS Dataset One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs. Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels. **Figure 1**: SIGNS dataset Run the following code to load the dataset. | # Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() | _____no_output_____ | MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
Change the index below and run the cell to visualize some examples in the dataset. | # Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) | y = 5
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so. | # Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape)) | number of training examples = 1080
number of test examples = 120
X_train shape: (12288, 1080)
Y_train shape: (6, 1080)
X_test shape: (12288, 120)
Y_test shape: (6, 120)
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. 2.1 - Create placeholdersYour first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow. | # GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px * 3 = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "tf.float32"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "tf.float32"
Tips:
    - You will use None because it lets us be flexible on the number of examples you will use for the placeholders.
In fact, the number of examples during test/train is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(tf.float32, [n_x,None], name= "X")
Y = tf.placeholder(tf.float32,[n_y, None], name = "Y")
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y)) | X = Tensor("X_2:0", shape=(12288, ?), dtype=float32)
Y = Tensor("Y:0", shape=(6, ?), dtype=float32)
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
**Expected Output**: **X** Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) **Y** Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2) 2.2 - Initializing the parametersYour second task is to initialize the parameters in tensorflow.**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```pythonW1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())```Please use `seed = 1` to make sure your results match ours. | # GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"])) | W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>
b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref>
W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref>
b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
**Expected Output**: **W1** **b1** **W2** **b2** As expected, the parameters haven't been evaluated yet. 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition- `tf.matmul(...,...)` to do a matrix multiplication- `tf.nn.relu(...)` to apply the ReLU activation**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`! | # GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1,X),b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2,A1),b2) # Z2 = np.dot(W2, A1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3,A2),b3) # Z3 = np.dot(W3, A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3)) | Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
**Expected Output**: **Z3**: Tensor("Add_2:0", shape=(6, ?), dtype=float32) You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation. 2.4 Compute cost As seen before, it is very easy to compute the cost using:```pythontf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))```**Question**: Implement the cost function below. - It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you. - Besides, `tf.reduce_mean` computes the sum over the examples and divides by their number, i.e. it averages the per-example losses. | # GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits,labels=labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost)) | cost = Tensor("Mean:0", shape=(), dtype=float32)
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
**Expected Output**: **cost**: Tensor("Mean:0", shape=(), dtype=float32) 2.5 - Backward propagation & parameter updates This is where you become grateful to programming frameworks. All of the backpropagation and the parameter updates are taken care of in 1 line of code. It is very easy to incorporate this line in the model. After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate. For instance, for gradient descent the optimizer would be:```pythonoptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)```To perform the optimization step you would run:```python_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})```This computes the backpropagation by passing through the tensorflow graph in reverse order, from the cost to the inputs. **Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). 2.6 - Building the model Now, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented. | def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / minibatch_size
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters | _____no_output_____ | MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.048222. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes! | parameters = model(X_train, Y_train, X_test, Y_test) | Cost after epoch 0: 1.913693
Cost after epoch 100: 1.048222
Cost after epoch 200: 0.756012
Cost after epoch 300: 0.590844
Cost after epoch 400: 0.483423
Cost after epoch 500: 0.392928
Cost after epoch 600: 0.323629
Cost after epoch 700: 0.262100
Cost after epoch 800: 0.210199
Cost after epoch 900: 0.171622
Cost after epoch 1000: 0.145907
Cost after epoch 1100: 0.110942
Cost after epoch 1200: 0.088966
Cost after epoch 1300: 0.061226
Cost after epoch 1400: 0.053860
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
**Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. 2.7 - Test with your own image (optional / ungraded exercise)Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right! | import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
image = image/255.
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction))) | Your algorithm predicts: y = 3
| MIT | 2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb | thesauravkarmakar/deeplearning.ai |
!pip install pyupbit
import pyupbit
# load the most recent 200 hours of ETH price data (hourly candles)
df = pyupbit.get_ohlcv("KRW-ETH", interval="minute60")
df
# keep only the timestamp (ds) and the closing price (y)
df = df.reset_index()
df['ds'] = df['index']
df['y'] = df['close']
data = df[['ds','y']]
data
# import Prophet
from fbprophet import Prophet
# fit the model
model = Prophet()
model.fit(data)
# forecast 24 hours into the future
future = model.make_future_dataframe(periods=24, freq='H')
forecast = model.predict(future)
# plot 1: the forecast
fig1 = model.plot(forecast)
# plot 2: trend and seasonality components
fig2 = model.plot_components(forecast)
# the current price at the time of (potential) purchase
nowValue = pyupbit.get_current_price("KRW-ETH")
nowValue
# get the predicted closing price
# case 1: the current time is before midnight
closeDf = forecast[forecast['ds'] == forecast.iloc[-1]['ds'].replace(hour=9)]
# case 2: the current time is after midnight
if len(closeDf) == 0:
    closeDf = forecast[forecast['ds'] == data.iloc[-1]['ds'].replace(hour=9)]
# either way, this is today's predicted closing price
closeValue = closeDf['yhat'].values[0]
closeValue
# print the concrete prices
print("Current price: ", nowValue)
print("Predicted closing price: ", closeValue)
forecast
| _____no_output_____ | MIT | AI.ipynb | wonjongchurl/github_test |
How to watch changes to an object==================In this notebook, we learn how kubernetes API resource Watch endpoint is used to observe resource changes. It can be used to get information about changes to any kubernetes object. | from kubernetes import client, config, watch | _____no_output_____ | Apache-2.0 | examples/notebooks/watch_notebook.ipynb | dix000p/kubernetes-client-python |
Load config from default location. | config.load_kube_config() | _____no_output_____ | Apache-2.0 | examples/notebooks/watch_notebook.ipynb | dix000p/kubernetes-client-python |
Create API instance | api_instance = client.CoreV1Api() | _____no_output_____ | Apache-2.0 | examples/notebooks/watch_notebook.ipynb | dix000p/kubernetes-client-python |
Run a Watch on the Pods endpoint. Watch would be executed and produce output about changes to any Pod. After running the cell below, You can test this by running the Pod notebook [create_pod.ipynb](create_pod.ipynb) and observing the additional output here. You can stop the cell from running by restarting the kernel. | w = watch.Watch()
for event in w.stream(api_instance.list_pod_for_all_namespaces):
print("Event: %s %s %s" % (event['type'],event['object'].kind, event['object'].metadata.name)) | _____no_output_____ | Apache-2.0 | examples/notebooks/watch_notebook.ipynb | dix000p/kubernetes-client-python |
TensorFlow Transfer Learning This notebook shows how to use pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub). Sometimes, there is not enough data, computational resources, or time to train a model from scratch to solve a particular problem. Here we'll use a pre-trained model to classify flowers, with better accuracy than a model trained from scratch, for use in a mobile application. Learning Objectives 1. Know how to apply image augmentation 2. Know how to download and use a TensorFlow Hub module as a layer in Keras. | import os
import pathlib
import IPython.display as display
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (
Conv2D,
Dense,
Dropout,
Flatten,
MaxPooling2D,
Softmax,
) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
Exploring the dataAs usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories: 'daisy', 'roses', 'dandelion', 'sunflowers', and 'tulips'.The below [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) command downloads a dataset to the local Keras cache. To see the files through a terminal, copy the output of the cell below. | data_dir = tf.keras.utils.get_file(
"flower_photos",
"https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz",
untar=True,
)
# Print data path
print("cd", data_dir) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
We can use Python's built-in [pathlib](https://docs.python.org/3/library/pathlib.html) module to get a sense of this unstructured data. | data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob("*/*.jpg")))
print("There are", image_count, "images.")
CLASS_NAMES = np.array(
[item.name for item in data_dir.glob("*") if item.name != "LICENSE.txt"]
)
print("These are the available classes:", CLASS_NAMES) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
Let's display the images so we can see what our model will be trying to learn. | roses = list(data_dir.glob("roses/*"))
for image_path in roses[:3]:
display.display(Image.open(str(image_path))) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
Building the dataset Keras has some convenient methods to read in image data. For instance [tf.keras.preprocessing.image.ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) is great for small local datasets. A tutorial on how to use it can be found [here](https://www.tensorflow.org/tutorials/load_data/images), but what if we have so many images that they don't fit on a local machine? We can use [tf.data.datasets](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) to build a generator based on files in a Google Cloud Storage Bucket. We have already prepared these images to be stored on the cloud in `gs://cloud-ml-data/img/flower_photos/`. The images are randomly split into a training set with 90% of the data and an evaluation set with 10%, listed in CSV files: Training set: [train_set.csv](https://storage.cloud.google.com/cloud-ml-data/img/flower_photos/train_set.csv) Evaluation set: [eval_set.csv](https://storage.cloud.google.com/cloud-ml-data/img/flower_photos/eval_set.csv) Explore the format and contents of the train.csv by running: | !gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | head -5 > /tmp/input.csv
!cat /tmp/input.csv
!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt
!cat /tmp/labels.txt | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
Let's figure out how to read one of these images from the cloud. TensorFlow's [tf.io.read_file](https://www.tensorflow.org/api_docs/python/tf/io/read_file) can help us read the file contents, but the result will be a [Base64 image string](https://en.wikipedia.org/wiki/Base64). Hmm... not very readable for humans or TensorFlow. Thankfully, TensorFlow's [tf.image.decode_jpeg](https://www.tensorflow.org/api_docs/python/tf/io/decode_jpeg) function can decode this string into an integer array, and [tf.image.convert_image_dtype](https://www.tensorflow.org/api_docs/python/tf/image/convert_image_dtype) can cast it into a 0 - 1 range float. Finally, we'll use [tf.image.resize](https://www.tensorflow.org/api_docs/python/tf/image/resize) to force image dimensions to be consistent for our neural network. We'll wrap these into a function since we'll be calling them repeatedly. While we're at it, let's also define the constants for our neural network. | IMG_HEIGHT = 224
IMG_WIDTH = 224
IMG_CHANNELS = 3
BATCH_SIZE = 32
# 10 is a magic number tuned for local training of this dataset.
SHUFFLE_BUFFER = 10 * BATCH_SIZE
AUTOTUNE = tf.data.experimental.AUTOTUNE
VALIDATION_IMAGES = 370
VALIDATION_STEPS = VALIDATION_IMAGES // BATCH_SIZE
def decode_img(img, reshape_dims):
# Convert the compressed string to a 3D uint8 tensor.
img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)
# Use `convert_image_dtype` to convert to floats in the [0,1] range.
img = tf.image.convert_image_dtype(img, tf.float32)
# Resize the image to the desired size.
return tf.image.resize(img, reshape_dims) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
Is it working? Let's see! **TODO 1.a:** Run the `decode_img` function and plot the result to see a happy-looking daisy. | img = tf.io.read_file(
"gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg"
)
# Uncomment to see the image string.
# print(img)
# TODO: decode image and plot it | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
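# A possible completion of TODO 1.a (a sketch, not the official lab solution;
# img_tensor is a hypothetical name):
img_tensor = decode_img(img, [IMG_WIDTH, IMG_HEIGHT])
plt.imshow(img_tensor.numpy())
plt.show()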
One flower down, 3669 more to go. Rather than load all the photos directly, we'll use the file paths given in the CSV and load the images when we batch. [tf.io.decode_csv](https://www.tensorflow.org/api_docs/python/tf/io/decode_csv) reads in CSV rows (one per line of the file), while [tf.math.equal](https://www.tensorflow.org/api_docs/python/tf/math/equal) will help us format our label as a boolean array with a truth value corresponding to the class in `CLASS_NAMES`, much like the labels for the MNIST lab. | def decode_csv(csv_row):
record_defaults = ["path", "flower"]
filename, label_string = tf.io.decode_csv(csv_row, record_defaults)
image_bytes = tf.io.read_file(filename=filename)
label = tf.math.equal(CLASS_NAMES, label_string)
return image_bytes, label | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
Next, we'll transform the images to give our network more variety to train on. There are a number of [image manipulation functions](https://www.tensorflow.org/api_docs/python/tf/image). We'll cover just a few: [tf.image.random_crop](https://www.tensorflow.org/api_docs/python/tf/image/random_crop) randomly deletes top/bottom rows and left/right columns down to the dimensions specified; [tf.image.random_flip_left_right](https://www.tensorflow.org/api_docs/python/tf/image/random_flip_left_right) randomly flips the image horizontally; [tf.image.random_brightness](https://www.tensorflow.org/api_docs/python/tf/image/random_brightness) randomly adjusts how dark or light the image is; [tf.image.random_contrast](https://www.tensorflow.org/api_docs/python/tf/image/random_contrast) randomly adjusts image contrast. **TODO 1.b:** Augment the image using the random functions. | MAX_DELTA = 63.0 / 255.0  # Change brightness by at most ~24.7%
CONTRAST_LOWER = 0.2
CONTRAST_UPPER = 1.8
def read_and_preprocess(image_bytes, label, random_augment=False):
if random_augment:
img = decode_img(image_bytes, [IMG_HEIGHT + 10, IMG_WIDTH + 10])
# TODO: augment the image.
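        # One possible augmentation chain for TODO 1.b, using the functions described above
        # (a sketch, not the official lab solution):
        img = tf.image.random_crop(img, [IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])
        img = tf.image.random_flip_left_right(img)
        img = tf.image.random_brightness(img, MAX_DELTA)
        img = tf.image.random_contrast(img, CONTRAST_LOWER, CONTRAST_UPPER)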
else:
img = decode_img(image_bytes, [IMG_WIDTH, IMG_HEIGHT])
return img, label
def read_and_preprocess_with_augment(image_bytes, label):
return read_and_preprocess(image_bytes, label, random_augment=True) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
Finally, we'll make a function to craft our full dataset using [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset). [tf.data.TextLineDataset](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) will feed each line of our train/eval CSV files to our `decode_csv` function. [.cache](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache) is key here: it stores the dataset in memory after the first pass, so later epochs skip re-reading and re-decoding the files. | def load_dataset(csv_of_filenames, batch_size, training=True):
dataset = (
tf.data.TextLineDataset(filenames=csv_of_filenames)
.map(decode_csv)
.cache()
)
if training:
dataset = (
dataset.map(read_and_preprocess_with_augment)
.shuffle(SHUFFLE_BUFFER)
.repeat(count=None)
        ) # Indefinitely.
else:
dataset = dataset.map(read_and_preprocess).repeat(
count=1
) # Each photo used once.
# Prefetch prepares the next set of batches while current batch is in use.
return dataset.batch(batch_size=batch_size).prefetch(buffer_size=AUTOTUNE) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
We'll test it out with our training set. A batch size of one will allow us to easily look at each augmented image. | train_path = "gs://cloud-ml-data/img/flower_photos/train_set.csv"
train_data = load_dataset(train_path, 1)
itr = iter(train_data) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
**TODO 1.c:** Run the below cell repeatedly to see the results of different batches. The images have been un-normalized for human eyes. Can you tell what type of flowers they are? Is it fair for the AI to learn on? | image_batch, label_batch = next(itr)
img = image_batch[0]
plt.imshow(img)
print(label_batch[0]) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
**Note:** It may take 4-5 minutes to see the results of different batches. MobileNetV2. These flower photos are much larger than the handwriting-recognition images in MNIST. They have about 10 times as many pixels per axis **and** three color channels, making the information here over 200 times larger! How do our current techniques stand up? Copy your best model architecture over from the MNIST models lab and see how well it does after training for 5 epochs of 5 steps. **TODO 2.a:** Copy over the most accurate model from 2_mnist_models.ipynb or build a new CNN Keras model. | eval_path = "gs://cloud-ml-data/img/flower_photos/eval_set.csv"
nclasses = len(CLASS_NAMES)
hidden_layer_1_neurons = 400
hidden_layer_2_neurons = 100
dropout_rate = 0.25
num_filters_1 = 64
kernel_size_1 = 3
pooling_size_1 = 2
num_filters_2 = 32
kernel_size_2 = 3
pooling_size_2 = 2
layers = [
# TODO: Add your image model.
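    # One hypothetical CNN adapted from the MNIST models lab (an assumption, not the official solution):
    Conv2D(num_filters_1, kernel_size=kernel_size_1, activation="relu",
           input_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS)),
    MaxPooling2D(pooling_size_1),
    Conv2D(num_filters_2, kernel_size=kernel_size_2, activation="relu"),
    MaxPooling2D(pooling_size_2),
    Flatten(),
    Dense(hidden_layer_1_neurons, activation="relu"),
    Dense(hidden_layer_2_neurons, activation="relu"),
    Dropout(dropout_rate),
    Dense(nclasses),
    Softmax(),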
]
old_model = Sequential(layers)
old_model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
train_ds = load_dataset(train_path, BATCH_SIZE)
eval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)
old_model.fit(
train_ds,
epochs=5,
steps_per_epoch=5,
validation_data=eval_ds,
validation_steps=VALIDATION_STEPS,
) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
If your model is like mine, it learns a little, slightly better than random, but *ugh*, it's too slow! With a batch size of 32, 5 epochs of 5 steps only gets through about a quarter of our images. Not to mention, this is a much larger problem than MNIST, so wouldn't we need a larger model? But how big do we need to make it? Enter transfer learning. Why not take advantage of someone else's hard work? We can take the layers of a model that's been trained on a similar problem to ours and splice them into our own model. [TensorFlow Hub](https://tfhub.dev/s?module-type=image-augmentation,image-classification,image-others,image-style-transfer,image-rnn-agent) is a database of models, many of which can be used for transfer learning. We'll use a model called [MobileNet](https://tfhub.dev/google/imagenet/mobilenet_v2_035_224/feature_vector/4), an architecture optimized for image classification on mobile devices, which can be deployed with [TensorFlow Lite](https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb). Let's compare how a model trained on [ImageNet](http://www.image-net.org/) data stacks up against one built from scratch. The `tensorflow_hub` Python package has a function to include a Hub model as a [layer in Keras](https://www.tensorflow.org/hub/api_docs/python/hub/KerasLayer). We'll set the weights of this model to be non-trainable. Even though this is a compressed version of full-scale image classification models, it still has over four hundred thousand parameters! Training all of these would not only add to our computation, but is also prone to over-fitting. We'll add some L2 regularization and Dropout to prevent that from happening to our trainable weights. **TODO 2.b**: Add a Hub Keras Layer at the top of the model using the handle provided. | module_selection = "mobilenet_v2_100_224"
module_handle = "https://tfhub.dev/google/imagenet/{}/feature_vector/4".format(
module_selection
)
transfer_model = tf.keras.Sequential(
[
# TODO
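        # A hedged sketch of TODO 2.b: wrap the Hub module as a frozen Keras layer
        # (hub.KerasLayer with the module_handle defined above; trainable=False freezes its weights):
        hub.KerasLayer(module_handle, trainable=False),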
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(
nclasses,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l2(0.0001),
),
]
)
transfer_model.build((None,) + (IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
transfer_model.summary() | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
Even though we're only adding one more `Dense` layer to get the probabilities for each of the 5 flower types, we end up with over six thousand parameters to train ourselves. Wow! Moment of truth: let's compile this new model and see how it compares to our MNIST architecture. | transfer_model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
train_ds = load_dataset(train_path, BATCH_SIZE)
eval_ds = load_dataset(eval_path, BATCH_SIZE, training=False)
transfer_model.fit(
train_ds,
epochs=5,
steps_per_epoch=5,
validation_data=eval_ds,
validation_steps=VALIDATION_STEPS,
) | _____no_output_____ | Apache-2.0 | notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb | henrypurbreadcom/asl-ml-immersion |
Saldías et al. Figure 02. Waves: SSH anomaly (canyon minus no-canyon), allowed and scattered waves | from brokenaxes import brokenaxes
import cmocean as cmo
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import matplotlib.gridspec as gspec
import matplotlib.patches as patches
from netCDF4 import Dataset
import numpy as np
import pandas as pd
import scipy as sc
import scipy.io as sio
import xarray as xr
import matplotlib.colors as mcolors
import matplotlib.lines as mlines
from matplotlib.lines import Line2D
%matplotlib inline
def get_fig_file(file_fig):
# Brink mode
file = sio.loadmat(file_fig)
z, xpl, xxx, zzz = file['z'][0,:], file['xpl'][0,:], file['xxx'][0,:], file['zzz'][0,:]
# (u is cross-shore and v is alongshore in Brink.)
p0, u0, v0, w0, r0 = file['p_profile'], file['u_profile'],file['v_profile'], file['w_profile'], file['r_profile']
scale=0.2
w = w0 * 0.01 * scale # cms-1 to ms-1 and normalization (?)
u = u0 * 0.01 * scale # cms-1 to ms-1 and normalization
v = v0 * 0.01 * scale # cms-1 to ms-1 and normalization
r = r0 * 1.0 * scale # mg/cm³ to kg/m³ and normalization
    p = p0 * 0.1 * scale # dyn/cm² to Pa (1 dyn/cm² = 0.1 Pa, i.e. kg m-1 s-2) and normalization
return(u,v,w,r,p,z,xpl, xxx, zzz)
def plot_Brink(ax,fld,z,xpl,xxx,zzz,minp,maxp,nlev=15):
landc='#8b7765'
levels=np.linspace(minp,maxp,nlev)
cnf = ax.contourf(xpl, z, fld, levels=levels, cmap=cmo.cm.delta, vmin=minp,
vmax=maxp, zorder=1)
ax.contour(xpl, z, fld, levels=levels, linewidths=1, linestyles='-', colors='0.4', zorder=2)
ax.contour(xpl, z, fld, levels=[0], linewidths=2, linestyles='-', colors='k', zorder=2)
ax.fill_between(xxx, zzz.min(), zzz, facecolor=landc, zorder=3)
return(cnf, ax)
runs = ['DS','IS','SS']
fig = plt.figure(figsize=(7.48,9))
plt.rcParams.update({'font.size': 8})
# Set up subplot grid
gs = GridSpec(4, 3, width_ratios=[1,1,1], height_ratios=[0.6,1.3,1.5,1.3],
wspace=0.1,hspace=0.3, figure=fig)
ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[0, 1])
ax3 = fig.add_subplot(gs[0, 2])
ax4 = fig.add_subplot(gs[1, 0])
ax5 = fig.add_subplot(gs[1, 1])
ax6 = fig.add_subplot(gs[1, 2])
ax7 = fig.add_subplot(gs[2, 0])
ax8 = fig.add_subplot(gs[2, 1:])
ax9 = fig.add_subplot(gs[3, 0])
ax10 = fig.add_subplot(gs[3, 1])
ax11 = fig.add_subplot(gs[3, 2])
for ax in [ax2,ax3,ax5,ax6,ax10,ax11]:
ax.set_yticks([])
for ax,run in zip([ax1,ax2,ax3],runs):
ax.set_xlabel('x (km)', labelpad=0)
ax.set_title(run)
for ax in [ax4,ax5,ax6,ax7]:
ax.set_xlabel('Days', labelpad=0)
for ax in [ax9,ax10,ax11]:
ax.set_xlabel('x (km)', labelpad=0)
ax1.set_ylabel('Depth (m)', labelpad=0)
ax4.set_ylabel('y (km)', labelpad=0)
ax7.set_ylabel('y (km)', labelpad=0)
ax9.set_ylabel('Depth (m)', labelpad=0)
ax8.set_xlabel(r'$k$ ($10^{-5}$ rad m$^{-1}$)', labelpad=0)
ax8.set_ylabel(r'$\omega$ ($10^{-5}$ rad s$^{-1}$)', labelpad=0.5)
ax8.yaxis.set_label_position("right")
ax8.yaxis.tick_right()
# Shelf profiles
for run, ax in zip(runs, [ax1,ax2,ax3]):
can_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_CR_'+run+'_7d.nc'
yshelf = 400
yaxis = int(579/2)
with Dataset(can_file, 'r') as nbl:
hshelf = -nbl.variables['h'][yshelf,:]
haxis = -nbl.variables['h'][yaxis,:]
x_rho = (nbl.variables['x_rho'][:]-400E3)/1000
y_rho = (nbl.variables['y_rho'][:]-400E3)/1000
ax.plot(x_rho[yshelf,:], hshelf,'k-', label='shelf')
ax.plot(x_rho[yaxis,:], haxis,'k:', label='canyon \n axis')
ax.set_xlim(-50,0)
ax.set_ylim(-500,0)
ax1.legend(labelspacing=0)
#SSH hovmöller plots (canyon-no canyon)
xind = 289
for run, ax in zip(runs,(ax4,ax5,ax6)):
nc_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_NCR_'+run+'_7d.nc'
can_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_CR_'+run+'_7d.nc'
with Dataset(can_file, 'r') as nbl:
y_rho = nbl.variables['y_rho'][:]
time = nbl.variables['ocean_time'][:]
zeta = nbl.variables['zeta'][:,:,xind]
with Dataset(nc_file, 'r') as nbl:
y_rho_nc = nbl.variables['y_rho'][:]
time_nc = nbl.variables['ocean_time'][:]
zeta_nc = nbl.variables['zeta'][:,:,xind]
pc2 = ax.pcolormesh((time_nc)/(3600*24),(y_rho_nc[:,xind]/1000)-400,
np.transpose((zeta[:,:]-zeta_nc[:,:]))*1000,
cmap=cmo.cm.balance, vmax=4.0, vmin=-4.0)
if run == 'IS':
rect = patches.Rectangle((5,-20),15,160,linewidth=2,edgecolor='k',facecolor='none')
ax.add_patch(rect)
ax.axhline(0.0, color='k', alpha=0.5)
ax.set_ylim(-400,400)
cbar_ax = fig.add_axes([0.92, 0.585, 0.025, 0.17])
cb = fig.colorbar(pc2, cax=cbar_ax, orientation='vertical', format='%1.0f')
cb.set_label(r'Surface elevation (10$^{-3}$ m)')
# Zoomed-in SSH hovmöller plot of IS (canyon-no canyon)
yind = 420
xlim = 100
xind = 289
y1 = 189
y2 = 389
y3 = 526
y4 = 540
y5 = 315
run = 'IS'
ax = ax7
nc_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_NCR_'+run+'_7d.nc'
can_file = '/Volumes/MOBY/ROMS-CTW/ocean_his_ctw_CR_'+run+'_7d.nc'
with Dataset(can_file, 'r') as nbl:
y_rho = nbl.variables['y_rho'][:]
time = nbl.variables['ocean_time'][:]
zeta = nbl.variables['zeta'][:,:,xind]
with Dataset(nc_file, 'r') as nbl:
y_rho_nc = nbl.variables['y_rho'][:]
time_nc = nbl.variables['ocean_time'][:]
zeta_nc = nbl.variables['zeta'][:,:,xind]
pc2 = ax.pcolormesh((time_nc)/(3600*24),(y_rho_nc[:,xind]/1000)-400,
np.transpose((zeta[:,:]-zeta_nc[:,:]))*1000,
cmap=cmo.cm.balance, vmax=4.0, vmin=-4.0)
t1_IS = (time_nc[47])/(3600*24)
y1_IS = (y_rho_nc[y2,xind]/1000)-400
t2_IS = (time_nc[65])/(3600*24)
y2_IS = (y_rho_nc[y4,xind]/1000)-400
ax.plot([t1_IS, t2_IS],[y1_IS, y2_IS], '.-', color='k')
t1_IS = (time_nc[47])/(3600*24)
y1_IS = (y_rho_nc[289,xind]/1000)-400
t2_IS = (time_nc[55])/(3600*24)
y2_IS = (y_rho_nc[y2,xind]/1000)-400
ax.plot([t1_IS, t2_IS],[y1_IS, y2_IS], '.-',color='k')
ax.axhline(0.0, color='k', alpha=0.5)
ax.axhline(-5.0, color='0.5', alpha=0.5)
ax.axhline(5.0, color='0.5', alpha=0.5)
ax.set_ylim(-20,140)
ax.set_xlim(5,20)
rect = patches.Rectangle((5.1,-19),14.85,158,linewidth=2,edgecolor='k',facecolor='none')
ax.add_patch(rect)
# Dispersion curves
g = 9.81 # gravitational accel. m/s^2
Hs = 100 # m shelf break depth
f = 1.028E-4 # inertial frequency
omega_fw = 1.039E-5 # fw = forcing wave
k_fw = 6.42E-6# rad/m
domain_length = 800E3 # m
canyon_width = 10E3 # m
col1 = '#254441' #'#23022e'
col2 = '#43AA8B' #'#573280'
col3 = '#B2B09B' #'#ada8b6'
col4 = '#FF6F59' #'#58A4B0'
files = ['../dispersion_curves/DS/dispc_DS_mode1_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode1_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode1_KRM.dat',
'../dispersion_curves/DS/dispc_DS_mode2_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode2_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode2_KRM.dat',
'../dispersion_curves/DS/dispc_DS_mode3_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode3_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode3_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode4_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode4_KRM.dat',
'../dispersion_curves/DS/dispc_DS_mode5_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode5_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode5_KRM.dat',
'../dispersion_curves/IS/dispc_IS_mode6_KRM.dat',
'../dispersion_curves/SS/dispc_SS_mode6_KRM.dat',
]
colors = [col1,
col2,
col3,
col1,
col2,
col3,
col1,
col2,
col3,
col2,
col3,
col1,
col2,
col3,
#col1,
col2,
col3,
]
linestyles = ['-','-','-','--','--','--',':',':',':','-.','-.','-','-','-','--','--']
labels = [ r'DS $\bar{c_1}$',r'IS $\bar{c_1}$',r'SS $\bar{c_1}$',
r'DS $\bar{c_2}$',r'IS $\bar{c_2}$',r'SS $\bar{c_2}$',
r'DS $\bar{c_3}$',r'IS $\bar{c_3}$',r'SS $\bar{c_3}$',
r'IS $\bar{c_4}$',r'SS $\bar{c_4}$',
r'DS $\bar{c_5}$',r'IS $\bar{c_5}$',r'SS $\bar{c_5}$',
r'IS $\bar{c_6}$',r'SS $\bar{c_6}$']
ax8.axhline(omega_fw*1E5, color='0.5', label='1/7 days')
ax8.axhline(f*1E5, color='gold', label='f')
ax8.axvline((1E5*(2*np.pi))/domain_length, linestyle='-', color=col4, alpha=1, label='domain length')
for file, col, lab, line in zip(files, colors, labels, linestyles):
data_mode = pd.read_csv(file, delim_whitespace=True, header=None, names=['wavenum', 'freq', 'perturbation'])
omega = data_mode['freq'][:]
k = data_mode['wavenum'][:]*100
ax8.plot(k*1E5, omega*1E5, linestyle=line,
color=col,linewidth=2,alpha=0.9,
label=lab+r'=%1.2f ms$^{-1}$' % (np.mean(omega/k)))
ax8.plot((omega_fw/1.59)*1E5, omega_fw*1E5, '^',color=col1,
markersize=9, label = 'incident DS %1.2f' %(1.59),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/1.39)*1E5, omega_fw*1E5, '^',color=col2,
markersize=9, label = 'incident IS %1.2f' %(1.39),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/1.29)*1E5, omega_fw*1E5, '^',color=col3,
markersize=9, label = 'incident SS %1.2f' %(1.29),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/0.32)*1E5, omega_fw*1E5, 'o',color=col1,
markersize=9, label = 'DS model c=%1.2f m/s' %(0.32),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/0.23)*1E5, omega_fw*1E5, 'o',color=col2,
markersize=9, label = 'IS model c=%1.2f m/s' %(0.23),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/1.04)*1E5, omega_fw*1E5, 'o',color=col3,
markersize=9, label = 'SS model c=%1.2f m/s' %(1.04),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/0.14)*1E5, omega_fw*1E5, 'd',color=col1,
markersize=11, label = 'DS model c=%1.2f m/s' %(0.14),
markeredgecolor='0.2',markeredgewidth=1)
ax8.plot((omega_fw/0.14)*1E5, omega_fw*1E5, 'd',color=col2,
markersize=9, label = 'IS model c=%1.2f m/s' %(0.14),
markeredgecolor='0.2',markeredgewidth=1)
ax8.set_ylim(0, 1.5)
ax8.set_xlim(0,8)
legend_elements=[]
legend_elements.append(Line2D([0], [0], marker='^',color='w', label='incident',
markerfacecolor='k', mec='k',markersize=6))
legend_elements.append(Line2D([0], [0], marker='o',color='w', label='1$^{st}$ scattered',
markerfacecolor='k', mec='k',markersize=6))
legend_elements.append(Line2D([0], [0], marker='d',color='w', label='2$^{nd}$ scattered',
markerfacecolor='k', mec='k',markersize=6))
for col, run in zip([col1,col2,col3], runs):
legend_elements.append(Line2D([0], [0], marker='s',color=col, linewidth=4,label=run,
markerfacecolor=col, mec=col, markersize=0))
ax8.legend(handles=legend_elements, bbox_to_anchor=(0.65,0.32),frameon=False, handlelength=0.7,
handletextpad=0.5, ncol=2,columnspacing=0.25, framealpha=0, edgecolor='w',labelspacing=0.2)
# Mode structure (modes 1, 3 and 5 of the IS run)
run='IS'
modes = ['mode1','mode3', 'mode5']
for mode, ax in zip(modes, [ax9,ax10,ax11]):
u,v,w,r,p,z,xpl,xxx,zzz = get_fig_file('../dispersion_curves/'+run+'/figures_'+run+'_'+mode+'_KRM.mat')
minp = -(1.66e-06)*1E6
maxp = (1.66e-06)*1E6
cntf, ax = plot_Brink(ax, p*1E6, z, xpl, xxx, zzz, minp, maxp)
ax.set_xlim(0,50)
cbar_ax = fig.add_axes([0.92, 0.125, 0.025, 0.17])
cb = fig.colorbar(cntf, cax=cbar_ax, orientation='vertical', format='%1.1f')
cb.set_label(r'Pressure (10$^{-6}$ Pa)')
ax9.text(0.5,0.9,'Incident wave',transform=ax9.transAxes, fontweight='bold')
ax10.text(0.5,0.9,'Mode 3 (IS)',transform=ax10.transAxes, fontweight='bold')
ax11.text(0.5,0.9,'Mode 5 (IS)',transform=ax11.transAxes, fontweight='bold')
ax8.text(0.09,0.75,'mode 1',transform=ax8.transAxes,rotation=70 )
ax8.text(0.27,0.75,'mode 2',transform=ax8.transAxes,rotation=51 )
ax8.text(0.43,0.75,'mode 3',transform=ax8.transAxes,rotation=41 )
ax8.text(0.65,0.75,'mode 4',transform=ax8.transAxes,rotation=30 )
ax8.text(0.87,0.72,'mode 5',transform=ax8.transAxes,rotation=25 )
ax8.text(0.87,0.47,'mode 6',transform=ax8.transAxes,rotation=18 )
ax1.text(0.95,0.05,'a',transform=ax1.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax2.text(0.95,0.05,'b',transform=ax2.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax3.text(0.95,0.05,'c',transform=ax3.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax4.text(0.95,0.03,'d',transform=ax4.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax5.text(0.95,0.03,'e',transform=ax5.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax6.text(0.96,0.03,'f',transform=ax6.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax7.text(0.01,0.94,'g',transform=ax7.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax8.text(0.01,0.03,'h',transform=ax8.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax9.text(0.97,0.03,'i',transform=ax9.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax10.text(0.97,0.03,'j',transform=ax10.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
ax11.text(0.95,0.03,'k',transform=ax11.transAxes, fontsize=8, fontweight='bold',
color='w', bbox={'facecolor': 'black', 'alpha': 1, 'pad': 1})
plt.savefig('Figure2.png',format='png',bbox_inches='tight', dpi=300)
plt.show() | _____no_output_____ | Apache-2.0 | figures/figure02.ipynb | UBC-MOAD/Saldias_et_al_2021 |
Rating and Review Analysis of Car Brands Project. Author: Sabriye Ela Esme. This notebook includes the code for analyzing the rating and review scores of different car brands according to the 'Edmunds-Consumer Car Ratings and Reviews' data retrieved from https://www.kaggle.com/ankkur13/edmundsconsumer-car-ratings-and-reviews. First, for every brand, the data is converted into a dataframe; from that dataframe, the ratings given to the brand's cars (between 0-5) are summed and divided by the number of reviews to get an average rating score. Let's start with Porsche. | import pandas as pd
import numpy as np
lst=[]
data = pd.read_csv('C:\\Users\\Ela\\Desktop\\Cars_review_project\\Scraped_Car_Review_porsche.csv', lineterminator='\n')
brand0= 'Porsche' | _____no_output_____ | MIT | Car Data Analysis.ipynb | elaesme/Car-Data-Analysis |
Here's a glimpse of the Porsche dataframe. | data.head()
score0=sum(data['Rating\r'])/data.shape[0]
lst.append([brand0, score0]) | _____no_output_____ | MIT | Car Data Analysis.ipynb | elaesme/Car-Data-Analysis |
Calculation of the score for other brands | #Calculation of the score for other brands
data1 = pd.read_csv('C://Users//Ela//Desktop//Cars_review_project//Scrapped_Car_Reviews_Audi.csv', lineterminator='\n')
brand1= 'Audi'
score1=sum(data1['Rating\r'])/data1.shape[0]
lst.append([brand1, score1])
data2 = pd.read_csv('C://Users//Ela//Desktop//Cars_review_project//Scrapped_Car_Reviews_BMW.csv', lineterminator='\n')
brand2= 'BMW'
score2=sum(data2['Rating\r'])/data2.shape[0]
lst.append([brand2, score2])
data3 = pd.read_csv('C:\\Users\\Ela\\Desktop\\Cars_review_project\\Scraped_Car_Review_mercedes-benz.csv', lineterminator='\n')
brand3=' Mercedes-Benz'
score3=sum(data3['Rating\r'])/data3.shape[0]
lst.append([brand3, score3])
data4 = pd.read_csv('C:\\Users\\Ela\\Desktop\\Cars_review_project\\Scraped_Car_Review_jaguar.csv', lineterminator='\n')
brand4='Jaguar'
score4=sum(data4['Rating\r'])/data4.shape[0]
lst.append([brand4, score4]) | _____no_output_____ | MIT | Car Data Analysis.ipynb | elaesme/Car-Data-Analysis |
Scores for Reviews. In this part of the code, there are two functions for finding the total review score of a brand. The first, review_score, takes every review as a single tokenized sentence and checks its words, scoring the sentence by how positive or negative those words are. The second function, total_score, finds the average review score for a single brand: it scores every review from the dataframes created earlier and averages the results. | def review_score(liste):
    words=["love", "amazing", "happy", "great", "best", "win", "powerful", "beautiful"]
negwords=["hate", "regret", "bad", "weak", "disappointed", "sad"]
scorepos=0
scoreneg=0
for i in liste:
if i in words:
scorepos+=1
elif i in negwords:
scoreneg+=1
tot= scorepos - scoreneg
return tot
def total_score(datframe):
rlist=[]
rscore=0
for k in range(len(datframe["Review"])):
bb= datframe["Review"][k].split()
aa = [strings.lower() for strings in bb]
res= review_score(aa)
rscore+=res
return rscore/datframe.shape[0] | _____no_output_____ | MIT | Car Data Analysis.ipynb | elaesme/Car-Data-Analysis |
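A quick sanity check of the scoring helpers on a made-up review (the sentence below is hypothetical):
sample = "i love this car but regret the weak engine".split()
print(review_score(sample))  # 1 positive ('love') - 2 negatives ('regret', 'weak') = -1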
The code below creates a new dataframe showing the average rating score, average review score, and total score (the sum of the two averages) for every brand, in descending order of total score. | totlist=[total_score(data), total_score(data1), total_score(data2), total_score(data3),total_score(data4)]
for i in range(len(lst)):
lst[i].append(totlist[i])
lst[i].append(totlist[i]+lst[i][1])
lst.sort(key=lambda x:x[3], reverse= 1)
df = pd.DataFrame(lst[0:],columns=['Brand', 'Average Rating Score', 'Average Review Score', 'Total Score'])
df | _____no_output_____ | MIT | Car Data Analysis.ipynb | elaesme/Car-Data-Analysis |
Get_Histogram_key | YY = QubitOperator('X0 X1 Y3', 0.25j)
Get_Histogram_key(YY) | _____no_output_____ | MIT | quchem_examples/Simulating Quantum Circuit.ipynb | AlexisRalli/VQE-code |
Simulate_Quantum_Circuit | num_shots = 1000
YY = QubitOperator('X0 X1 Y3', 0.25j)
histogram_string= Get_Histogram_key(YY)
Simulate_Quantum_Circuit(full_circuit, num_shots, histogram_string) | _____no_output_____ | MIT | quchem_examples/Simulating Quantum Circuit.ipynb | AlexisRalli/VQE-code |
Get_wavefunction | YY = QubitOperator('X0 X1 Y3', 0.25j)
cirq_NO_M = cirq.Circuit([*HF_circ, *UCCSD_circ.all_operations()])
histogram_string= Get_Histogram_key(YY)
Get_wavefunction(cirq_NO_M, sig_figs=3) | _____no_output_____ | MIT | quchem_examples/Simulating Quantum Circuit.ipynb | AlexisRalli/VQE-code |
Return_as_binary | num_shots = 1000
YY = QubitOperator('X0 X1 Y3', 0.25j)
histogram_string= Get_Histogram_key(YY)
c_result = Simulate_Quantum_Circuit(full_circuit, num_shots, histogram_string)
Return_as_binary(c_result, histogram_string) | _____no_output_____ | MIT | quchem_examples/Simulating Quantum Circuit.ipynb | AlexisRalli/VQE-code |
expectation_value_by_parity | num_shots = 1000
YY = QubitOperator('X0 X1 Y3', 0.25j)
histogram_string= Get_Histogram_key(YY)
c_result = Simulate_Quantum_Circuit(full_circuit, num_shots, histogram_string)
b_result=Return_as_binary(c_result, histogram_string)
expectation_value_by_parity(b_result)
from quchem.Hamiltonian_Generator_Functions import *
### Parameters
Molecule = 'H2'
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., 0.74))]
basis = 'sto-3g'
### Get Hamiltonian
Hamilt = Hamiltonian(Molecule,
run_scf=1, run_mp2=1, run_cisd=1, run_ccsd=1, run_fci=1,
basis=basis,
multiplicity=1,
geometry=geometry) # normally None!
Hamilt.Get_Molecular_Hamiltonian(Get_H_matrix=False)
QubitHam = Hamilt.Get_Qubit_Hamiltonian(transformation='JW')
ansatz_obj = Ansatz(Hamilt.molecule.n_electrons, Hamilt.molecule.n_qubits)
Sec_Quant_CC_ia_ops, Sec_Quant_CC_ijab_ops, theta_parameters_ia, theta_parameters_ijab = ansatz_obj.Get_ia_and_ijab_terms()
Qubit_Op_list_Second_Quant_CC_Ops_ia, Qubit_Op_list_Second_Quant_CC_Ops_ijab = ansatz_obj.UCCSD_single_trotter_step(Sec_Quant_CC_ia_ops, Sec_Quant_CC_ijab_ops,
transformation='JW')
full_ansatz_Q_Circ = Ansatz_Circuit(Qubit_Op_list_Second_Quant_CC_Ops_ia, Qubit_Op_list_Second_Quant_CC_Ops_ijab,
Hamilt.molecule.n_qubits, Hamilt.molecule.n_electrons)
ansatz_cirq_circuit = full_ansatz_Q_Circ.Get_Full_HF_UCCSD_QC(theta_parameters_ia, theta_parameters_ijab)
QubitHam | _____no_output_____ | MIT | quchem_examples/Simulating Quantum Circuit.ipynb | AlexisRalli/VQE-code |
$$\begin{aligned} H &=h_{0} I+h_{1} Z_{0}+h_{2} Z_{1}+h_{3} Z_{2}+h_{4} Z_{3} \\ &+h_{5} Z_{0} Z_{1}+h_{6} Z_{0} Z_{2}+h_{7} Z_{1} Z_{2}+h_{8} Z_{0} Z_{3}+h_{9} Z_{1} Z_{3} \\ &+h_{10} Z_{2} Z_{3}+h_{11} Y_{0} Y_{1} X_{2} X_{3}+h_{12} X_{0} Y_{1} Y_{2} X_{3} \\ &+h_{13} Y_{0} X_{1} X_{2} Y_{3}+h_{14} X_{0} X_{1} Y_{2} Y_{3} \end{aligned}$$ | n_shots=1000
def GIVE_ENERGY(theta_ia_theta_jab_list):
theta_ia = theta_ia_theta_jab_list[:len(theta_parameters_ia)]
theta_ijab = theta_ia_theta_jab_list[len(theta_parameters_ia):]
    ansatz_cirq_circuit = full_ansatz_Q_Circ.Get_Full_HF_UCCSD_QC(theta_ia, theta_ijab)
VQE_exp = VQE_Experiment(QubitHam, ansatz_cirq_circuit, n_shots)
return VQE_exp.Calc_Energy()
### optimizer
from quchem.Scipy_Optimizer import *
# THETA_params = [*theta_parameters_ia, *theta_parameters_ijab]
THETA_params=[0,0,0]
GG = Optimizer(GIVE_ENERGY, THETA_params, 'Nelder-Mead', store_values=True, display_iter_steps=True,
tol=1e-5,
display_convergence_message=True)
GG.get_env(50)
GG.plot_convergence()
plt.show() | 0: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.1188432276915568
1: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.11630628122307
2: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.1183902015364695
3: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.1133163085994964
4: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.1117760196722009
5: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.123735910166495
6: Input_to_Funct: [ 0.00023032 -0.00044907 0.00021644]: Output: -1.1175747544573131
7: Input_to_Funct: [ 0.00023032 -0.00044907 0.00021644]: Output: -1.1210177532359737
8: Input_to_Funct: [ 0.00023032 -0.00044907 0.00021644]: Output: -1.1144941766027223
9: Input_to_Funct: [ 0.00023032 -0.00044907 0.00021644]: Output: -1.1121384405962706
10: Input_to_Funct: [ 0.00023032 -0.00044907 0.00021644]: Output: -1.119205648615626
11: Input_to_Funct: [ 0.00023032 -0.00044907 0.00021644]: Output: -1.1167593073781568
12: Input_to_Funct: [ 0.00023032 -0.00044907 0.00021644]: Output: -1.1130444929064445
13: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1159438602990006
14: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1169405178401917
15: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1147659922957747
16: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1143129661406876
17: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1210177532359737
18: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.11630628122307
19: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1145847818337398
20: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1138599399856008
21: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1135881242925485
22: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1180277806124002
23: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.111504203979149
24: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1162156759920525
25: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.119386859077661
26: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1177559649193478
27: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1166687021471393
28: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1212895689290259
29: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.114403571371705
30: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1145847818337398
31: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1159438602990006
32: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1166687021471393
33: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1108699673620273
34: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1158532550679832
35: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1178465701503653
36: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1144941766027223
37: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1183902015364695
38: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1152190184508615
39: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1150378079888266
40: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1178465701503653
41: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.114856597526792
42: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1156720446059483
43: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1197492800017304
44: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1250949886317558
45: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1154908341439138
46: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1201117009258
47: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1155814393749308
48: Input_to_Funct: [ 0.00023327 -0.00046107 0.00021931]: Output: -1.1125914667513574
Warning: Maximum number of iterations has been exceeded.
Reason for termination is Maximum number of iterations has been exceeded.
| MIT | quchem_examples/Simulating Quantum Circuit.ipynb | AlexisRalli/VQE-code |
REDUCED H2 ansatz: | from quchem.Simulating_Quantum_Circuit import *
from quchem.Ansatz_Generator_Functions import *
from openfermion.ops import QubitOperator
def H2_ansatz(theta):
HF_circ = [cirq.X.on(cirq.LineQubit(0)), cirq.X.on(cirq.LineQubit(1))]
full_exp_circ_obj = full_exponentiated_PauliWord_circuit(QubitOperator('Y0 X1 X2 X3', -1j), theta)
UCCSD_circ = cirq.Circuit(cirq.decompose_once((full_exp_circ_obj(*cirq.LineQubit.range(full_exp_circ_obj.num_qubits())))))
full_circuit = cirq.Circuit([*HF_circ, *UCCSD_circ.all_operations()])
return full_circuit
H2_ansatz(np.pi)
n_shots=1000
def GIVE_ENERGY(THETA):
ansatz_cirq_circuit = H2_ansatz(THETA)
VQE_exp = VQE_Experiment(QubitHam, ansatz_cirq_circuit, n_shots)
return VQE_exp.Calc_Energy()
### full angle scan
import matplotlib.pyplot as plt
%matplotlib inline
theta_list = np.arange(0,2*np.pi, 0.1)
E_list = [GIVE_ENERGY(theta) for theta in theta_list]
plt.plot(E_list)
print(min(E_list))
## optimzer
from quchem.Scipy_Optimizer import *
THETA_params=[2]
GG = Optimizer(GIVE_ENERGY, THETA_params, 'Nelder-Mead', store_values=True, display_iter_steps=True,
tol=1e-5,
display_convergence_message=True)
GG.get_env(50)
GG.plot_convergence()
plt.show()
ansatz_cirq_circuit = H2_ansatz(3.22500077)
VQE_exp = VQE_Experiment(QubitHam, ansatz_cirq_circuit, 1000)
print('Energy = ', VQE_exp.Calc_Energy())
print('')
print('state:')
VQE_exp.Get_wavefunction_of_state(sig_figs=4) | Energy = -1.1349360253505665
state:
| MIT | quchem_examples/Simulating Quantum Circuit.ipynb | AlexisRalli/VQE-code |
FAERS AE Multilabel Outcomes ML pipeline - Dask Distributed + Joblib + Dask DataFrames. Methodology. Objective: **Use FAERS data on drug safety to identify possible risk factors associated with patient mortality and other serious adverse events associated with approved uses of a drug or drug class.** Data. **_Outcome table:_** 1. Start with the outcome_c table to define the unit of analysis (primaryid). 2. Reshape outcome_c to one row per primaryid. 3. Group outcomes into 3 categories: a. death, b. serious, c. other. 4. Multilabel model target format: each outcome group coded into a separate column. **_Demo table:_** 1. Drop fields not used as model input to reduce table size (preferably before import to the notebook). 2. Check that the demo table has one row per primaryid (if NOT, then it needs to be reshaped / cleaned - TBD). **_Model input and targets:_** 1. Merge the clean demo table with the reshaped multilabel outcome targets (rows: primaryid, cols: outcome groups). 2. Inspect the merged file to check for anomalies (outliers, bad data, ...). Model. **_Multilabel Classifier:_** 1. Since each primaryid has multiple outcomes coded in the outcome_c table, the ML model should predict the probability of each possible outcome. 2. In the scikit-learn library most/all classifiers can predict multilabel outcomes by coding target outputs into an array. Results: TBD. Insights: TBD. | # scale sklearn dask example setup - compare to multi thread below
from dask.distributed import Client, progress
client = Client(n_workers=4, threads_per_worker=1, memory_limit='2GB')
client
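Before the full Dask pipeline below, here is a minimal self-contained sketch of the multilabel target idea from the Methodology (the toy arrays, shapes, and outcome-group names here are assumptions for illustration, not the FAERS fields):
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
X_toy = np.random.rand(8, 3)                         # 8 reports x 3 features (toy data)
y_toy = np.random.randint(0, 2, size=(8, 3))         # one 0/1 column per outcome grp: death / serious / other
toy_clf = MultiOutputClassifier(RandomForestClassifier()).fit(X_toy, y_toy)
print(toy_clf.predict(X_toy[:2]))                    # one 0/1 label per outcome grp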
#import libraries
import numpy as np
print('The numpy version is {}.'.format(np.__version__))
import pandas as pd
print('The pandas version is {}.'.format(pd.__version__))
from pandas import read_csv, DataFrame
# dask dataframe
import dask
import dask.dataframe as dd
from dask.diagnostics import ProgressBar
print('The dask version is {}.'.format(dask.__version__))
#from random import random
import sklearn
#from sklearn.base import (BaseEstimator, TransformerMixin)
print('The scikit-learn version is {}.'.format(sklearn.__version__))
import joblib
from joblib import dump, load
print('The joblib version is {}.'.format(joblib.__version__))
# preprocess + model selection + pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler #, LabelBinarizer, MultiLabelBinarizer
from sklearn.impute import SimpleImputer, KNNImputer, MissingIndicator
from sklearn.model_selection import train_test_split, GridSearchCV #, cross_val_score,
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.multioutput import MultiOutputClassifier
# models supporting multilabel classification
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier, RadiusNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import RidgeClassifierCV
# libs for managing imbalanced classes
from sklearn.utils import resample
# metrics appropriate for multilabel classification
from sklearn.metrics import jaccard_score, hamming_loss, accuracy_score, roc_auc_score, average_precision_score
from sklearn.metrics import multilabel_confusion_matrix, ConfusionMatrixDisplay
# feature importance
from sklearn.inspection import permutation_importance, partial_dependence, plot_partial_dependence
# visualization
import matplotlib as mpl
print('The matplotlib version is {}.'.format(mpl.__version__))
from matplotlib import pyplot as plt
from matplotlib.ticker import PercentFormatter
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
print('The seaborn version is {}.'.format(sns.__version__))
sns.set()
# dtree viz libs
import pydotplus
from io import StringIO
from sklearn import tree
from sklearn.tree import export_graphviz
from IPython.display import Image
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# utilities for timing decorator
import time
from functools import wraps
def timefn(fn):
@wraps(fn)
def measure_time(*args, **kwargs):
t1 = time.time()
result = fn(*args, **kwargs)
t2 = time.time()
print(f'@timefn: {fn.__name__} took {t2 - t1} seconds')
return result
return measure_time
%%time
# read data into dask dataframe
# provide path to datafile
file_in = '../input/demo-outc_cod-multilabel-wt_lbs-age_yrs.csv'
# provide list of fields to import (alt-read in all cols and use df.drop(['cols2drop'],axis = 1)
cols_in = ['primaryid', 'i_f_code', 'rept_cod', 'sex', 'occp_cod', 'outc_cod__CA',
'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO', 'outc_cod__LT', 'outc_cod__OT',
'outc_cod__RI', 'n_outc', 'wt_lbs', 'age_yrs']
#@timefn
def data_in(infile, incols):
"""Used to time reading data from source"""
ddf = dd.read_csv(infile, usecols=incols)
print(ddf.columns, '\n')
print(ddf.head(),'\n')
print(f'Total number of rows: {len(ddf):,}\n')
ddf2 = ddf.primaryid.nunique().compute()
print(f'Unique number of primaryids: {ddf2}')
return ddf
if __name__=='__main__':
ddf = data_in(file_in, cols_in)
with ProgressBar():
display(ddf.head()) | _____no_output_____ | MIT | faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb | briangriner/OSTF-FAERS |
ML Pipeline - Preprocessing step | # dask: data prep + preprocessor + pipeline funcs
def df_prep(ddf):
"""df_prep func used in pipeline to support select features and prep multilabel targets for clf
assumes dask DataFrame is named 'ddf'
"""
# compute ddf
#ddf2 = ddf.compute()
# drop fields from df when defining model targets and features
y_drop = ['primaryid', 'i_f_code', 'rept_cod', 'sex', 'occp_cod', 'n_outc', 'wt_lbs', 'age_yrs']
X_drop = ['primaryid', 'outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO',
'outc_cod__LT', 'outc_cod__OT', 'outc_cod__RI']
# convert target to ndarray for sklearn
y = ddf.drop(y_drop, axis=1).compute()
y_arr = y.to_numpy() # look into dask array
X = ddf.drop(X_drop, axis=1).compute()
print('Step 0: Create ndarray for multilabel targets + select model features','\n')
print('y_arr\n', y_arr.shape, '\n', y_arr.dtype, '\n', y_arr[:2], y.columns, '\n')
print('X\n', X.shape, '\n', X.dtypes, '\n', X[:2],'\n', X.columns, '\n')
return X, y_arr
def preprocessor():
"make data preprocessor for pipeline"
# 2. group features by type for categorical vs numeric transformers
num_features = ['n_outc', 'wt_lbs', 'age_yrs']
cat_features = ['i_f_code', 'rept_cod', 'sex', 'occp_cod']
feature_labels = num_features + cat_features
print('Step 2: Group features by type for pipeline\n')
print('num_features\n', num_features)
print('cat_features\n', cat_features)
print('feature_labels\n', feature_labels,'\n')
# 3. create transformers for model input features by type
num_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
cat_transformer = Pipeline(steps=[
('1_hot', OneHotEncoder(handle_unknown='ignore'))])
print('Step 3: Column transformers by type for pipeline\n')
print('num_transformer\n', num_transformer)
print('cat_transformer\n', cat_transformer,'\n')
# 4. combine transformers into preprocessing step
preprocessor = ColumnTransformer(transformers=[
#('dfprep', dfprep_transformer, all_features),
('num', num_transformer, num_features),
('cat', cat_transformer, cat_features)], remainder='passthrough')
print('Step 4: Preprocessor for pipeline\n')
print('preprocessor\n', preprocessor,'\n')
return preprocessor
def model_fit(X_, y_):
# fit clfs using dask with joblib backend
for classifier in classifiers:
ml_pipe = Pipeline(steps=[('preprocessor', preprocessor()),
('classifier', MultiOutputClassifier(classifier))
]
)
# use context manager to run dask during training
with joblib.parallel_backend('dask'):
ml_clf = ml_pipe.fit(X_, y_)
# save/load fited clf obj with joblib
dump(ml_clf, 'ml_clf_obj.joblib')
ml_clf_obj = load('ml_clf_obj.joblib')
return ml_clf_obj
# 6. ml clf model pipe
def ml_clf_pipe(clf_lst):
"""Pipeline to evaluate multilabel classifiers prior to hyperparameter tuning."""
# 0. data prep
X, y_arr = df_prep(ddf)
# 1. train, test set split (can extend later to multiple train-test splits in pipeline)
X_train, X_test, y_train, y_test = train_test_split(X, y_arr, test_size = 0.3)
print('Step 1: Train-test set split\n')
print('X_train\n', X_train.shape, '\n', X_train[:2])
print('y_train\n', y_train.shape, '\n', y_train[:2])
print('X_test\n', X_test.shape, '\n', X_test[:2])
print('y_test\n', y_test.shape, '\n', y_test[:2],'\n')
# train, dump & load model
ml_clf_obj = model_fit(X_train, y_train)
y_pred = ml_clf_obj.predict(X_test)
print('y_pred\n', y_pred.shape,'\n',y_pred[3:],'\n')
print('Multilabel Classifier: Performance Metrics:\n')
# accuracy, hamming loss and jaccard score for mlabel
print('accuracy: ', accuracy_score(y_test, y_pred))
print('hamming loss: ', hamming_loss(y_test, y_pred))
print('jaccard score: ', jaccard_score(y_test, y_pred, average='micro'))
print('roc auc score: ', roc_auc_score(y_test, y_pred))
print('average precision score: ', average_precision_score(y_test, y_pred),'\n')
# generate ml cm
multilabel_cm = multilabel_confusion_matrix(y_test, y_pred)
    # per-class TP, TN, FP, FN counts; sklearn's multilabel_confusion_matrix
    # layout for each class is [[TN, FP], [FN, TP]]
    tn = multilabel_cm[:, 0, 0]
    tp = multilabel_cm[:, 1, 1]
    fp = multilabel_cm[:, 0, 1]
    fn = multilabel_cm[:, 1, 0]
outc_labels = ['outc_cod__CA','outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO', 'outc_cod__LT',
'outc_cod__OT', 'outc_cod__RI']
print('Recall, Specificity, Fall Out and Miss Rate for Multilabel Adverse Event Outcomes:\n', outc_labels)
# recall
print('recall (true pos rate):\n', tp / (tp + fn))
# specificity
print('specificity (true neg rate):\n', tn / (tn + fp))
# fall out
print('fall out (false pos rate):\n', fp / (fp + tn))
# miss rate
print('miss rate (false neg rate):\n', fn / (fn + tp), '\n')
# plot multilabel confusion matrix
def print_confusion_matrix(confusion_matrix, axes, class_label, class_names, fontsize=14):
df_cm = pd.DataFrame(confusion_matrix, index=class_names, columns=class_names)
try:
heatmap = sns.heatmap(df_cm, annot=True, fmt="d", cbar=False, ax=axes, cmap="YlGnBu") #cmap='RdBu_r',
except ValueError:
raise ValueError("Confusion matrix values must be integers.")
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right',
fontsize=fontsize)
        heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right',
                                     fontsize=fontsize)
        axes.set_xlabel('Predicted label')
        axes.set_ylabel('True label')
axes.set_title("Confusion matrix for class - " + class_label)
# plot grid of cm's - one per output - raw
fig, ax = plt.subplots(3, 3, figsize=(12, 7))
    for axes, cfs_matrix, label in zip(ax.flatten(), multilabel_cm, outc_labels):
print_confusion_matrix(cfs_matrix, axes, label, ["N", "Y"])
fig.tight_layout()
plt.show()
# corr matrix heatmap
y_df = pd.DataFrame(y_pred, columns=outc_labels)
ncol = y_df.shape[1]
fig, ax = plt.subplots(figsize=(ncol,ncol))
ax = sns.heatmap(y_df.corr(), fmt='.2f', annot=True, ax=ax, cmap='RdBu_r', vmin=-1, vmax=1)
imgname = str(classifiers[0]) + '-hmap_corr.png'
fig.savefig(imgname, dpi=300, bbox_inches='tight')
print('Correlation Matrix Heatmap of Predicted Multilabel Adverse Events\n')
plt.show()
| _____no_output_____ | MIT | faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb | briangriner/OSTF-FAERS |
Decision Tree Classifier | # multilabel clfs
classifiers = [
#RidgeClassifierCV(class_weight='balanced'),
DecisionTreeClassifier(class_weight='balanced'),
#ExtraTreesClassifier(class_weight='balanced'),
#RandomForestClassifier(class_weight='balanced'),
#MLPClassifier(solver='sdg', learning_rate='adaptive', early_stopping=True),
#KNeighborsClassifier(weights='distance'),
#RadiusNeighborsClassifier(weights='distance')
]
# fit and eval model
if __name__ == '__main__':
ml_clf_pipe(classifiers)
# use model object to create predictions on new data
# load fit clf
ml_clf_obj = load('ml_clf_obj.joblib')
dir(ml_clf_obj)
# predict multilabel outcomes
# prep new data
X, y_arr = df_prep(ddf)
# predict ml outcomes
y_arr_pred = ml_clf_obj.predict(X)
print('Predicted AEs: DecisionTree Classifier\n', y_arr_pred.shape, '\n', y_arr_pred[5:]) | Predicted AEs: DecisionTree Classifier
(260715, 7)
[[0 0 0 ... 0 1 0]
[0 1 0 ... 1 1 0]
[0 0 0 ... 0 1 0]
...
[0 0 0 ... 0 1 0]
[0 0 0 ... 0 1 0]
[0 0 0 ... 0 1 0]]
| MIT | faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb | briangriner/OSTF-FAERS |
Random Forest Classifier | # multilabel clfs
classifiers = [
#RidgeClassifierCV(class_weight='balanced'),
#DecisionTreeClassifier(class_weight='balanced'),
#ExtraTreesClassifier(class_weight='balanced'),
RandomForestClassifier(class_weight='balanced'),
#MLPClassifier(solver='sdg', learning_rate='adaptive', early_stopping=True),
#KNeighborsClassifier(weights='distance'),
#RadiusNeighborsClassifier(weights='distance')
]
# fit and eval model
if __name__ == '__main__':
ml_clf_pipe(classifiers)
# use model object to create predictions on new data
# load fit clf
ml_clf_obj = load('ml_clf_obj.joblib')
dir(ml_clf_obj)
# predict multilabel outcomes
# prep new data
X, y_arr = df_prep(ddf)
# predict ml outcomes
y_arr_pred = ml_clf_obj.predict(X)
print('Predicted AEs: RandomForest Classifier\n', y_arr_pred.shape, '\n', y_arr_pred[5:]) | Step 0: Create ndarray for multilabel targets + select model features
y_arr
(260715, 7)
int64
[[0 0 0 0 0 1 0]
[0 0 0 1 0 1 0]] Index(['outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO',
'outc_cod__LT', 'outc_cod__OT', 'outc_cod__RI'],
dtype='object')
X
(260715, 7)
i_f_code object
rept_cod object
sex object
occp_cod object
n_outc int64
wt_lbs float64
age_yrs float64
dtype: object
i_f_code rept_cod sex occp_cod n_outc wt_lbs age_yrs
0 F EXP F LW 1 178.574463 NaN
1 F EXP F MD 2 NaN 68.0
Index(['i_f_code', 'rept_cod', 'sex', 'occp_cod', 'n_outc', 'wt_lbs',
'age_yrs'],
dtype='object')
Predicted AEs: RandomForest Classifier
(260715, 7)
[[0 0 0 ... 0 1 0]
[0 0 0 ... 0 1 0]
[0 0 0 ... 0 1 0]
...
[0 0 0 ... 0 1 0]
[0 0 0 ... 0 1 0]
[0 0 0 ... 0 1 0]]
| MIT | faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb | briangriner/OSTF-FAERS |
RidgeClassifierCV | # multilabel clfs
classifiers = [
RidgeClassifierCV(class_weight='balanced'),
#DecisionTreeClassifier(class_weight='balanced'),
#ExtraTreesClassifier(class_weight='balanced'),
#RandomForestClassifier(class_weight='balanced'),
#MLPClassifier(solver='sdg', learning_rate='adaptive', early_stopping=True),
#KNeighborsClassifier(weights='distance'),
#RadiusNeighborsClassifier(weights='distance')
]
# fit and eval model
if __name__ == '__main__':
ml_clf_pipe(classifiers)
# use model object to create predictions on new data
# load fit clf
ml_clf_obj = load('ml_clf_obj.joblib')
dir(ml_clf_obj)
# predict multilabel outcomes
# prep new data
X, y_arr = df_prep(ddf)
# predict ml outcomes
y_arr_pred = ml_clf_obj.predict(X)
print('Predicted AEs: RidgeClassifierCV\n', y_arr_pred.shape, '\n', y_arr_pred[5:]) | Step 0: Create ndarray for multilabel targets + select model features
y_arr
(260715, 7)
int64
[[0 0 0 0 0 1 0]
[0 0 0 1 0 1 0]] Index(['outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO',
'outc_cod__LT', 'outc_cod__OT', 'outc_cod__RI'],
dtype='object')
X
(260715, 7)
i_f_code object
rept_cod object
sex object
occp_cod object
n_outc int64
wt_lbs float64
age_yrs float64
dtype: object
i_f_code rept_cod sex occp_cod n_outc wt_lbs age_yrs
0 F EXP F LW 1 178.574463 NaN
1 F EXP F MD 2 NaN 68.0
Index(['i_f_code', 'rept_cod', 'sex', 'occp_cod', 'n_outc', 'wt_lbs',
'age_yrs'],
dtype='object')
Predicted AEs: RidgeClassifierCV
(260715, 7)
[[1 0 1 ... 0 1 0]
[1 1 1 ... 1 1 0]
[0 0 1 ... 0 1 0]
...
[0 0 1 ... 0 1 0]
[0 0 0 ... 0 1 0]
[0 0 1 ... 0 1 0]]
| MIT | faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb | briangriner/OSTF-FAERS |
Preprocessing | X = data.drop(['animal_name', 'class_type'], axis=1)
###Drop the animal names, which are a categorical variable, and the class label, which is what
###we want to recover with the clustering algorithms, from the dataframe
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
Xs = pd.DataFrame(scaler.fit_transform(X), columns=X.columns)
###As we know, applying a scaler returns a numpy object, so we recreate the dataframe using the original column index
Xs
###As we have seen, our dataset contains no null values and all attributes take boolean values 1 or 0,
###except for the legs column, so I applied MinMaxScaler to scale the values so that legs
###also takes values between 0 and 1.
###At this point the data is ready for the clustering algorithms. | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
Clustering | ### How does each of these algorithms work? In brief: KMeans partitions points around k centroids,
### minimizing within-cluster variance; agglomerative clustering merges the closest clusters bottom-up
### (Ward linkage minimizes the growth in variance); spectral clustering clusters an embedding built from
### the eigenvectors of a similarity graph; DBSCAN grows clusters from density-connected points and marks
### sparse points as noise; Birch incrementally builds a clustering-feature tree and clusters its leaves. | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
Kmeans | from sklearn.cluster import KMeans,AgglomerativeClustering,SpectralClustering,DBSCAN,Birch
kmeans = KMeans(n_clusters= 7,random_state=0)
### random_state makes the starting centroids derive from a fixed number instead of being
### generated randomly, so that repeating the clustering always gives the same
### result
y_pred_k = kmeans.fit_predict(Xs) | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
Agglomerative clustering | aggc = AgglomerativeClustering(n_clusters = 7, affinity = 'euclidean', linkage = 'ward' )
y_pred_aggc =aggc.fit_predict(Xs) | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
SpectralClustering | spc = SpectralClustering(n_clusters=7, assign_labels="discretize", random_state=0)
y_pred_spc = spc.fit_predict(Xs) | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
DBSCAN | from sklearn.decomposition import PCA
#### DBSCAN gives a much better result on the 2-dimensional projection Xv, so we compute Xv here,
#### before the results-visualization section (which also uses it)
pca = PCA(2)
Xv = pca.fit_transform(Xs)
dbscan = DBSCAN(eps=0.3, min_samples=4)
y_pred_dbscan = dbscan.fit_predict(Xv) | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
Birch | brc = Birch(n_clusters=7, threshold = 0.1)
y_pred_brc = brc.fit_predict(Xs) | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
Results visualization | import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
pca = PCA(2)
Xv = pca.fit_transform(Xs)
### Apply principal component analysis to compress the data into two dimensions so we can visualize them.
y_verita = data['class_type']  # ground-truth class labels, dropped from X during preprocessing
fig, ax = plt.subplots(figsize=(16,9), ncols=3, nrows=2)
ax[0][0].scatter(Xv[:,0], Xv[:,1], s=110, c=y_verita)
ax[0][1].scatter(Xv[:,0],Xv[:,1], s=110, c=y_pred_k)
ax[0][2].scatter(Xv[:,0],Xv[:,1], s=110, c=y_pred_aggc)
ax[1][0].scatter(Xv[:,0],Xv[:,1], s=110, c=y_pred_spc)
ax[1][1].scatter(Xv[:,0],Xv[:,1], s=110, c=y_pred_dbscan)
ax[1][2].scatter(Xv[:,0],Xv[:,1], s=110, c=y_pred_brc)
ax[0][0].set_title('Classificazione reale', fontsize = 22)
ax[0][1].set_title('Kmeans', fontsize = 22)
ax[0][2].set_title('Agglomerative clustering', fontsize = 22)
ax[1][0].set_title('Spectral clustering', fontsize = 22)
ax[1][1].set_title('Dbscan', fontsize = 22)
ax[1][2].set_title('Birch',fontsize = 22)
plt.tight_layout()
plt.show()
### We visualize the different clustering results on the animals' coordinates projected to 2 dimensions. | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
Benchmark e interpretazione | from sklearn.metrics import adjusted_rand_score, completeness_score
### completeness_score checks that all members of a given true class end up in the same cluster;
### adjusted_rand_score measures agreement with the ground truth corrected for chance and is
### invariant to how the cluster labels are numbered; both fit our case, where the true
### class_type of each animal is known | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
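A tiny illustration of what the two metrics reward, using made-up labelings (not from the dataset):

from sklearn.metrics import adjusted_rand_score, completeness_score
truth = [0, 0, 1, 1, 2, 2]
pred  = [2, 2, 0, 0, 1, 1]  # same grouping as truth, just different label names
print(adjusted_rand_score(truth, pred))   # 1.0: ARI is invariant to label permutations
print(completeness_score(truth, pred))    # 1.0: every true class sits inside one cluster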
We use two different metrics to verify how accurate the results obtained with the clustering algorithms are with respect to the ground truth. Kmeans | risultati = {}
k_c= completeness_score(y_verita,y_pred_k)
k_a = adjusted_rand_score(y_verita,y_pred_k)
risultati['Kmeans'] =[k_c,k_a] | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
AgglomerativeClustering | aggc_c= completeness_score(y_verita,y_pred_aggc)
aggc_a = adjusted_rand_score(y_verita,y_pred_aggc)
risultati['Agglomerative clustering']=[aggc_c,aggc_a] | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
SpectralClustering | spc_c= completeness_score(y_verita,y_pred_spc)
spc_a = adjusted_rand_score(y_verita,y_pred_spc)
risultati['Spectral clustering']=[spc_c,spc_a] | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
DBSCAN | dbscan_c= completeness_score(y_verita,y_pred_dbscan)
dbscan_a = adjusted_rand_score(y_verita,y_pred_dbscan)
risultati['Dbscan']=[dbscan_c,dbscan_a] | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
Birch | brc_c= completeness_score(y_verita,y_pred_brc)
brc_a = adjusted_rand_score(y_verita,y_pred_brc)
risultati['Birch']=[brc_c,brc_a]
risultati
### Which algorithm is the best, and why? | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
The best algorithm turns out to be spectral clustering, which scores highest on both metrics | ## helper function to find the position of each cluster member in the original dataset
def select_points(X, y_pred, cluster_label):
pos = [i for i, x in enumerate(y_pred) if x == cluster_label]
return X.iloc[pos]
select_points(data,y_pred_spc,3)
### All animals in this cluster share the same class_type except the turtle, as we expect
### given the very high scores of the spectral clustering algorithm
select_points(data,y_pred_dbscan,3)
### dbscan, by contrast, confirms its low scores by putting rather different animals in the same cluster
from scipy.cluster.hierarchy import dendrogram , linkage
##here we build a dendrogram for hierarchical clustering
Z = linkage(X, method = 'complete')
plt.figure(figsize = (32,40))
dendro = dendrogram(Z, orientation = "left",
labels=[x for x in data["animal_name"]],
leaf_font_size=22)
plt.title("Dendrogram", fontsize = 30, fontweight="bold")
plt.xlabel('Euclidean distance', fontsize = 22)  # with orientation='left' the distance runs along the x-axis
plt.ylabel("Animal", fontsize = 22)
plt.show() | _____no_output_____ | MIT | .ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb | GiovanniDiMasi/Cluster_animali |
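As a follow-up (a sketch added here, assuming we want a flat clustering comparable to the other algorithms), the same linkage matrix can be cut into 7 clusters and scored like the rest:

from scipy.cluster.hierarchy import fcluster
# cut the complete-linkage hierarchy into 7 flat clusters and compare to the ground truth
labels_hier = fcluster(Z, t=7, criterion='maxclust')
print(adjusted_rand_score(y_verita, labels_hier))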
New section: Installation and imports | _____no_output_____ | MIT | Test1_Mask_RCNN.ipynb | Art-phys/Mask_RCNN |
Importing Libraries | import os
import gc
import numpy as np
import pandas as pd
data_path = r'../input/h-and-m-personalized-fashion-recommendations/transactions_train.csv'
customer_data_path = r'../input/h-and-m-personalized-fashion-recommendations/customers.csv'
article_data_path = r'../input/h-and-m-personalized-fashion-recommendations/articles.csv'
submission_data_path = r'../input/h-m-ensembling/submission.csv'
!mkdir /kaggle/working/recbole_data
recbole_data_path = r'/kaggle/working/recbole_data'
# Data Extraction
def create_data(datapath, data_type=None):
if data_type is None:
df = pd.read_csv(datapath)
elif data_type == 'transaction':
df = pd.read_csv(datapath, dtype={'article_id': str}, parse_dates=['t_dat'])
elif data_type == 'article':
df = pd.read_csv(datapath, dtype={'article_id': str})
return df | _____no_output_____ | MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
Reading Transaction data | %%time
# Load all sales data (spanning 2018 to 2020).
# Also, article_id is read as a string column; otherwise pandas
# would drop the leading zeros of the article ids
transactions_data = create_data(data_path, data_type='transaction')
print(transactions_data.shape)
# Unique attribute counts
print(str(len(transactions_data['t_dat'].drop_duplicates())) + "-total No of unique transaction dates in data sheet")
print(str(len(transactions_data['customer_id'].drop_duplicates())) + "-total No of unique customer ids in data sheet")
print(str(len(transactions_data['article_id'].drop_duplicates())) + "-total No of unique article ids in data sheet")
print(str(len(transactions_data['sales_channel_id'].drop_duplicates())) + "-total No of unique sales channels in data sheet")
transactions_data.head() | (31788324, 5)
734-total No of unique transaction dates in data sheet
1362281-total No of unique customer ids in data sheet
104547-total No of unique article ids in data sheet
2-total No of unique sales channels in data sheet
CPU times: user 55.3 s, sys: 4.24 s, total: 59.5 s
Wall time: 1min 20s
| MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
Postprocessing Transaction data 1. a timestamp column is created from the transaction date column 2. only the most recent transactions are kept 3. columns are renamed for easy reading | transactions_data['timestamp'] = transactions_data.t_dat.values.astype(np.int64) // 10**9  # convert ns to s
transactions_data = transactions_data[transactions_data['timestamp'] > 1585620000]  # keep only transactions after end of March 2020
transactions_data = transactions_data[['customer_id','article_id','timestamp']].rename(columns={'customer_id': 'user_id:token',
'article_id': 'item_id:token',
'timestamp': 'timestamp:float'})
transactions_data | _____no_output_____ | MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
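For reference (a small sketch, not in the original notebook), the cutoff 1585620000 used above is the Unix timestamp for 31 March 2020, so only roughly the most recent six months of transactions are kept:

import pandas as pd
# the same ns -> s conversion as above, applied to a single date
print(pd.Timestamp('2020-03-31 02:00:00').value // 10**9)  # 1585620000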
Saving transaction data to the Kaggle-based RecBole output directory | transactions_data.to_csv(os.path.join(recbole_data_path, 'recbole_data.inter'), index=False, sep='\t')
del [[transactions_data]]
gc.collect() | _____no_output_____ | MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
Reading Article data | %%time
# Load all articles (article_id kept as a string to preserve leading zeros)
article_data = create_data(article_data_path, data_type='article')
print(article_data.shape)
print(str(len(article_data['article_id'].drop_duplicates())) + "-total No of unique article ids in article data sheet")
article_data.head() | (105542, 25)
105542-total No of unique article ids in article data sheet
CPU times: user 716 ms, sys: 43.6 ms, total: 760 ms
Wall time: 1.07 s
| MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
Postprocessing Article data 1. drop the name columns, which duplicate their numeric code counterparts (to avoid redundant features) 2. columns are renamed for easy reading | article_data = article_data.drop(columns = ['product_type_name', 'graphical_appearance_name', 'colour_group_name',
'perceived_colour_value_name', 'perceived_colour_master_name', 'index_name',
'index_group_name', 'section_name', 'garment_group_name',
'prod_name', 'department_name', 'detail_desc'])
article_data = article_data.rename(columns = {'article_id': 'item_id:token',
'product_code': 'product_code:token',
'product_type_no': 'product_type_no:float',
'product_group_name': 'product_group_name:token_seq',
'graphical_appearance_no': 'graphical_appearance_no:token',
'colour_group_code': 'colour_group_code:token',
'perceived_colour_value_id': 'perceived_colour_value_id:token',
'perceived_colour_master_id': 'perceived_colour_master_id:token',
'department_no': 'department_no:token',
'index_code': 'index_code:token',
'index_group_no': 'index_group_no:token',
'section_no': 'section_no:token',
'garment_group_no': 'garment_group_no:token'})
article_data | _____no_output_____ | MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
Saving article data to the Kaggle-based RecBole output directory | article_data.to_csv(os.path.join(recbole_data_path, 'recbole_data.item'), index=False, sep='\t')
del [[article_data]]
gc.collect() | _____no_output_____ | MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
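As a quick sanity check (a sketch, not part of the original notebook), we can peek at the atomic file just written; RecBole expects headers in the name:type convention, where token is a categorical id, token_seq a sequence of tokens, and float a numeric field:

# print the header line of the RecBole atomic item file
with open(os.path.join(recbole_data_path, 'recbole_data.item')) as f:
    print(f.readline().strip())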
Setting up the RecBole dataset and configuration | import logging
from logging import getLogger
from recbole.config import Config
from recbole.data import create_dataset, data_preparation
from recbole.model.sequential_recommender import GRU4RecF
from recbole.trainer import Trainer
from recbole.utils import init_seed, init_logger
parameter_dict = {
'data_path': '/kaggle/working',
'USER_ID_FIELD': 'user_id',
'ITEM_ID_FIELD': 'item_id',
'TIME_FIELD': 'timestamp',
'user_inter_num_interval': "[40,inf)",
'item_inter_num_interval': "[40,inf)",
'load_col': {'inter': ['user_id', 'item_id', 'timestamp'],
'item': ['item_id', 'product_code', 'product_type_no', 'product_group_name', 'graphical_appearance_no',
'colour_group_code', 'perceived_colour_value_id', 'perceived_colour_master_id',
'department_no', 'index_code', 'index_group_no', 'section_no', 'garment_group_no']
},
'selected_features': ['product_code', 'product_type_no', 'product_group_name', 'graphical_appearance_no',
'colour_group_code', 'perceived_colour_value_id', 'perceived_colour_master_id',
'department_no', 'index_code', 'index_group_no', 'section_no', 'garment_group_no'],
'neg_sampling': None,
'epochs': 100,
'eval_args': {
'split': {'RS': [10, 0, 0]},  # ratio-based split: all interactions go to training, nothing held out
'group_by': 'user',
'order': 'TO',  # keep each user's interactions in temporal order
'mode': 'full'},
'topk':[12]  # the competition asks for 12 recommendations per customer
}
config = Config(model='GRU4RecF', dataset='recbole_data', config_dict=parameter_dict)
# init random seed
init_seed(config['seed'], config['reproducibility'])
# logger initialization
init_logger(config)
logger = getLogger()
# Create handlers
c_handler = logging.StreamHandler()
c_handler.setLevel(logging.INFO)
logger.addHandler(c_handler)
# write config info into log
logger.info(config)
dataset = create_dataset(config)
logger.info(dataset)
# dataset splitting
train_data, valid_data, test_data = data_preparation(config, dataset)
# model loading and initialization
model = GRU4RecF(config, train_data.dataset).to(config['device'])
logger.info(model)
# trainer loading and initialization
trainer = Trainer(config, model)
# model training
best_valid_score, best_valid_result = trainer.fit(train_data) | GRU4RecF(
(item_embedding): Embedding(7330, 64, padding_idx=0)
(feature_embed_layer): FeatureSeqEmbLayer(
(token_embedding_table): ModuleDict(
(item): FMEmbedding(
(embedding): Embedding(3935, 64)
)
)
(float_embedding_table): ModuleDict(
(item): Embedding(1, 64)
)
(token_seq_embedding_table): ModuleDict(
(item): ModuleList(
(0): Embedding(16, 64)
)
)
)
(item_gru_layers): GRU(64, 128, bias=False, batch_first=True)
(feature_gru_layers): GRU(768, 128, bias=False, batch_first=True)
(dense_layer): Linear(in_features=256, out_features=64, bias=True)
(dropout): Dropout(p=0.3, inplace=False)
(loss_fct): CrossEntropyLoss()
)
Trainable parameters: 1156288
epoch 0 training [time: 45.54s, train loss: 3642.3637]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 1 training [time: 43.23s, train loss: 3390.6134]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 2 training [time: 43.00s, train loss: 3250.4472]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 3 training [time: 43.16s, train loss: 3163.3735]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 4 training [time: 42.99s, train loss: 3099.2533]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 5 training [time: 42.99s, train loss: 3044.1074]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 6 training [time: 43.14s, train loss: 2998.5542]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 7 training [time: 42.97s, train loss: 2962.4046]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 8 training [time: 42.91s, train loss: 2932.5592]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 9 training [time: 42.97s, train loss: 2907.5308]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 10 training [time: 42.84s, train loss: 2885.6282]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 11 training [time: 42.92s, train loss: 2867.5368]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 12 training [time: 42.82s, train loss: 2850.5957]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 13 training [time: 42.83s, train loss: 2836.0930]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 14 training [time: 43.10s, train loss: 2822.5238]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 15 training [time: 42.77s, train loss: 2811.0895]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 16 training [time: 42.90s, train loss: 2799.8698]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 17 training [time: 42.98s, train loss: 2790.4907]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 18 training [time: 42.79s, train loss: 2781.8785]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 19 training [time: 42.84s, train loss: 2774.2283]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 20 training [time: 42.86s, train loss: 2766.8078]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 21 training [time: 42.85s, train loss: 2760.0341]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 22 training [time: 43.00s, train loss: 2753.8305]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 23 training [time: 42.64s, train loss: 2748.0924]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 24 training [time: 42.53s, train loss: 2742.5462]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 25 training [time: 42.67s, train loss: 2737.9435]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 26 training [time: 42.88s, train loss: 2732.9869]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 27 training [time: 43.01s, train loss: 2728.7535]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 28 training [time: 42.79s, train loss: 2724.6929]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 29 training [time: 42.88s, train loss: 2720.6401]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 30 training [time: 42.76s, train loss: 2716.7650]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 31 training [time: 42.87s, train loss: 2712.6816]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 32 training [time: 42.76s, train loss: 2709.9449]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 33 training [time: 42.76s, train loss: 2706.3356]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 34 training [time: 42.60s, train loss: 2703.5010]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 35 training [time: 42.95s, train loss: 2700.1212]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 36 training [time: 42.59s, train loss: 2696.9098]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 37 training [time: 42.57s, train loss: 2694.2124]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 38 training [time: 42.80s, train loss: 2691.2175]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 39 training [time: 42.72s, train loss: 2688.2775]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 40 training [time: 42.52s, train loss: 2687.1207]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 41 training [time: 42.78s, train loss: 2683.6108]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 42 training [time: 42.70s, train loss: 2680.3406]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 43 training [time: 42.75s, train loss: 2678.0644]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 44 training [time: 42.91s, train loss: 2676.2267]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 45 training [time: 42.72s, train loss: 2673.2900]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 46 training [time: 42.84s, train loss: 2670.9084]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 47 training [time: 42.59s, train loss: 2668.3615]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 48 training [time: 42.60s, train loss: 2667.0580]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 49 training [time: 42.56s, train loss: 2664.2967]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 50 training [time: 42.78s, train loss: 2662.0451]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 51 training [time: 42.73s, train loss: 2660.1352]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 52 training [time: 42.78s, train loss: 2657.7945]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 53 training [time: 42.61s, train loss: 2656.4686]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 54 training [time: 42.59s, train loss: 2654.0421]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 55 training [time: 42.58s, train loss: 2651.8889]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 56 training [time: 42.60s, train loss: 2650.4294]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 57 training [time: 42.74s, train loss: 2648.3119]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 58 training [time: 42.59s, train loss: 2646.3108]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 59 training [time: 42.52s, train loss: 2644.7120]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 60 training [time: 42.59s, train loss: 2642.6164]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 61 training [time: 42.59s, train loss: 2640.6564]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 62 training [time: 42.57s, train loss: 2639.2032]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 63 training [time: 42.41s, train loss: 2637.3143]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 64 training [time: 42.18s, train loss: 2635.7403]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 65 training [time: 42.17s, train loss: 2634.4346]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 66 training [time: 42.22s, train loss: 2632.6797]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 67 training [time: 42.12s, train loss: 2630.9897]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 68 training [time: 42.24s, train loss: 2629.3250]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 69 training [time: 42.21s, train loss: 2627.6307]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 70 training [time: 42.24s, train loss: 2626.2554]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 71 training [time: 42.09s, train loss: 2624.0660]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 72 training [time: 42.27s, train loss: 2623.2111]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 73 training [time: 42.10s, train loss: 2621.6412]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 74 training [time: 42.08s, train loss: 2623.1531]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 75 training [time: 42.14s, train loss: 2618.6150]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 76 training [time: 42.16s, train loss: 2617.9686]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 77 training [time: 42.02s, train loss: 2615.9039]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 78 training [time: 42.13s, train loss: 2613.9363]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 79 training [time: 42.12s, train loss: 2612.7309]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 80 training [time: 42.09s, train loss: 2610.8800]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 81 training [time: 42.16s, train loss: 2609.9121]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 82 training [time: 42.10s, train loss: 2608.5970]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 83 training [time: 42.08s, train loss: 2607.7214]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 84 training [time: 42.15s, train loss: 2605.9061]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 85 training [time: 42.24s, train loss: 2604.9589]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 86 training [time: 42.12s, train loss: 2603.3150]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 87 training [time: 42.05s, train loss: 2602.3913]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 88 training [time: 42.11s, train loss: 2600.7972]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 89 training [time: 42.12s, train loss: 2599.0803]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 90 training [time: 42.23s, train loss: 2598.2695]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 91 training [time: 42.18s, train loss: 2596.9184]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 92 training [time: 42.15s, train loss: 2596.0503]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 93 training [time: 42.47s, train loss: 2594.9000]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 94 training [time: 42.34s, train loss: 2593.6262]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 95 training [time: 42.17s, train loss: 2592.5283]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 96 training [time: 42.30s, train loss: 2591.2610]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 97 training [time: 42.32s, train loss: 2589.8100]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 98 training [time: 42.29s, train loss: 2589.1635]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
epoch 99 training [time: 42.22s, train loss: 2588.0752]
Saving current: saved/GRU4RecF-Apr-29-2022_07-24-58.pth
| MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
Generating predictions with the trained recommender | from recbole.utils.case_study import full_sort_topk
external_user_ids = dataset.id2token(
dataset.uid_field, list(range(dataset.user_num)))[1:]  # first element in the array is 'PAD' (RecBole's default) -> remove it
import torch
from recbole.data.interaction import Interaction
def add_last_item(old_interaction, last_item_id, max_len=50):
new_seq_items = old_interaction['item_id_list'][-1]
if old_interaction['item_length'][-1].item() < max_len:
new_seq_items[old_interaction['item_length'][-1].item()] = last_item_id
else:
new_seq_items = torch.roll(new_seq_items, -1)
new_seq_items[-1] = last_item_id
return new_seq_items.view(1, len(new_seq_items))
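# quick illustration (added; not in the original notebook) of the roll-by-one trick above:
# shifting left drops the oldest item, then the freed last slot takes the new item
_demo = torch.roll(torch.tensor([11, 22, 33]), -1)  # -> tensor([22, 33, 11])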
def predict_for_all_item(external_user_id, dataset, model):
model.eval()
with torch.no_grad():
uid_series = dataset.token2id(dataset.uid_field, [external_user_id])
index = np.isin(dataset.inter_feat[dataset.uid_field].numpy(), uid_series)
input_interaction = dataset[index]
test = {
'item_id_list': add_last_item(input_interaction,
input_interaction['item_id'][-1].item(), model.max_seq_length),
'item_length': torch.tensor(
[input_interaction['item_length'][-1].item() + 1
if input_interaction['item_length'][-1].item() < model.max_seq_length else model.max_seq_length])
}
new_inter = Interaction(test)
new_inter = new_inter.to(config['device'])
new_scores = model.full_sort_predict(new_inter)
new_scores = new_scores.view(-1, test_data.dataset.item_num)
new_scores[:, 0] = -np.inf # set scores of [pad] to -inf
return torch.topk(new_scores, 12)
topk_items = []
for external_user_id in external_user_ids:
_, topk_iid_list = predict_for_all_item(external_user_id, dataset, model)
last_topk_iid_list = topk_iid_list[-1]
external_item_list = dataset.id2token(dataset.iid_field, last_topk_iid_list.cpu()).tolist()
topk_items.append(external_item_list)
print(len(topk_items))
external_item_str = [' '.join(x) for x in topk_items]
result = pd.DataFrame(external_user_ids, columns=['customer_id'])
result['prediction'] = external_item_str
result.head()
del external_item_str
del topk_items
del external_user_ids
del train_data
del valid_data
del test_data
del model
del Trainer
del logger
del dataset
gc.collect() | _____no_output_____ | MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
Reading Submission data | submission_data = pd.read_csv(submission_data_path)
submission_data.shape | _____no_output_____ | MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
Postprocessing submission data 1. Merge the RecBole predictions onto the baseline submission by customer_id 2. Fill NaN values for customer ids that were not part of the RecBole training session 3. Generate the final prediction column, preferring the RecBole prediction where available 4. Drop the redundant columns | submission_data = pd.merge(submission_data, result, on='customer_id', how='outer')
submission_data
submission_data = submission_data.fillna(-1)  # -1 flags customers with no RecBole prediction
submission_data['prediction'] = submission_data.apply(
lambda x: x['prediction_y'] if x['prediction_y'] != -1 else x['prediction_x'], axis=1)  # prefer the RecBole prediction (prediction_y), else fall back to the baseline (prediction_x)
submission_data
submission_data = submission_data.drop(columns=['prediction_y', 'prediction_x'])
submission_data | _____no_output_____ | MIT | hm-recbole-ifs.ipynb | ManashJKonwar/Kaggle-HM-Recommender |
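The notebook stops here without writing the file; a minimal final step (a sketch, assuming the standard Kaggle output path) would be:

# write the final submission in the format the competition expects (hypothetical final cell)
submission_data.to_csv('submission.csv', index=False)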