hexsha (stringlengths 40 to 40) | size (int64, 6 to 14.9M) | ext (stringclasses, 1 value) | lang (stringclasses, 1 value) | max_stars_repo_path (stringlengths 6 to 260) | max_stars_repo_name (stringlengths 6 to 119) | max_stars_repo_head_hexsha (stringlengths 40 to 41) | max_stars_repo_licenses (sequence) | max_stars_count (int64, 1 to 191k, ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24 to 24, ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24 to 24, ⌀) | max_issues_repo_path (stringlengths 6 to 260) | max_issues_repo_name (stringlengths 6 to 119) | max_issues_repo_head_hexsha (stringlengths 40 to 41) | max_issues_repo_licenses (sequence) | max_issues_count (int64, 1 to 67k, ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24 to 24, ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24 to 24, ⌀) | max_forks_repo_path (stringlengths 6 to 260) | max_forks_repo_name (stringlengths 6 to 119) | max_forks_repo_head_hexsha (stringlengths 40 to 41) | max_forks_repo_licenses (sequence) | max_forks_count (int64, 1 to 105k, ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24 to 24, ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24 to 24, ⌀) | avg_line_length (float64, 2 to 1.04M) | max_line_length (int64, 2 to 11.2M) | alphanum_fraction (float64, 0 to 1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d051dede8c5d5c16544133d87453eba1d90322a4 | 17,165 | ipynb | Jupyter Notebook | Deep-Fake-knu-2020/Part_2-Generative-Adversarial-Networks/dc-gan-tutorial.ipynb | kryvokhyzha/examples-and-courses | 477e82ee24e6abba8a6b6d92555f2ed549ca682c | [
"MIT"
] | 1 | 2021-12-13T15:41:48.000Z | 2021-12-13T15:41:48.000Z | Deep-Fake-knu-2020/Part_2-Generative-Adversarial-Networks/dc-gan-tutorial.ipynb | kryvokhyzha/examples-and-courses | 477e82ee24e6abba8a6b6d92555f2ed549ca682c | [
"MIT"
] | 15 | 2021-09-12T15:06:13.000Z | 2022-03-31T19:02:08.000Z | Deep-Fake-knu-2020/Part_2-Generative-Adversarial-Networks/dc-gan-tutorial.ipynb | kryvokhyzha/examples-and-courses | 477e82ee24e6abba8a6b6d92555f2ed549ca682c | [
"MIT"
] | 1 | 2022-01-29T00:37:52.000Z | 2022-01-29T00:37:52.000Z | 36.136842 | 848 | 0.520944 | [
[
[
"import os\nimport random\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nimport torch.backends.cudnn as cudnn\nimport torch.optim as optim\nimport torch.utils.data\nimport torchvision.datasets as dset\nimport torchvision.transforms as transforms\nimport torchvision.utils as vutils\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nfrom IPython.display import HTML\n\n# Set random seed for reproducibility\nmanualSeed = 999\nprint(\"Random Seed: \", manualSeed)\nrandom.seed(manualSeed)\ntorch.manual_seed(manualSeed)",
"_____no_output_____"
],
[
"# Root directory for dataset\ndataroot = \"./data\"\n\n# Number of workers for dataloader\nworkers = 2\n\n# Batch size during training\nbatch_size = 64\n\n# Spatial size of training images. All images will be resized to this\n# size using a transformer.\nimage_size = 32\n\n# Number of channels in the training images. For color images this is 3\nnc = 3\n\n# Size of z latent vector (i.e. size of generator input)\nnz = 100\n\n# Size of feature maps in generator\nngf = 64\n\n# Size of feature maps in discriminator\nndf = 64\n\n# Number of training epochs\nnum_epochs = 20\n\n# Learning rate for optimizers\nlr = 0.0002\n\n# Beta1 hyperparam for Adam optimizers\nbeta1 = 0.5\n\n# Number of GPUs available. Use 0 for CPU mode.\nngpu = 1",
"_____no_output_____"
],
[
"# Create the dataset\ndataset = dset.CIFAR10(\n root=dataroot,\n download=True,\n transform=transforms.Compose([\n transforms.Resize((image_size, image_size)),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ])\n)",
"_____no_output_____"
],
[
"dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,\n shuffle=True, num_workers=workers)\n\n# Decide which device we want to run on\ndevice = torch.device(\"cuda:0\" if (torch.cuda.is_available() and ngpu > 0) else \"cpu\")\n\n# Plot some training images\nreal_batch = next(iter(dataloader))\nplt.figure(figsize=(8,8))\nplt.axis(\"off\")\nplt.title(\"Training Images\")\nplt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))\nplt.show()",
"_____no_output_____"
]
],
[
[
"## The Generator \n\nThe generator, G, is designed to map the latent space vector (z) to data-space. Since our data are images, converting z to data-space means ultimately creating an RGB image with the same size as the training images (i.e. 3x32x32). In practice, this is accomplished through a series of strided two-dimensional convolutional transpose layers, each paired with a 2d batch norm layer and a relu activation. The output of the generator is fed through a tanh function to return it to the input data range of [−1,1]. It is worth noting the existence of the batch norm functions after the conv-transpose layers, as this is a critical contribution of the DCGAN paper. These layers help with the flow of gradients during training. An image of the generator from the DCGAN paper is shown below.",
"_____no_output_____"
]
],
[
[
"# Generator Code\n\nclass Generator(nn.Module):\n def __init__(self, ngpu):\n super(Generator, self).__init__()\n self.ngpu = ngpu\n self.main = nn.Sequential(\n # input is Z, going into a convolution\n nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),\n nn.BatchNorm2d(ngf * 8),\n nn.ReLU(True),\n # state size. (ngf*8) x 4 x 4\n nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ngf * 4),\n nn.ReLU(True),\n # state size. (ngf*4) x 8 x 8\n nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ngf * 2),\n nn.ReLU(True),\n # state size. (ngf*2) x 16 x 16\n nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),\n nn.BatchNorm2d(ngf),\n nn.ReLU(True),\n # state size. (ngf) x 32 x 32\n nn.ConvTranspose2d(ngf, nc, kernel_size=1, stride=1, padding=0, bias=False),\n nn.Tanh()\n )\n\n def forward(self, input):\n return self.main(input)\n",
"_____no_output_____"
],
[
"# Create the generator\nnetG = Generator(ngpu).to(device)\n\n# Print the model\nprint(netG)",
"_____no_output_____"
],
[
"# The input to the DCGAN generator is a variable of shape (1, 100, 1, 1).\n# There is nothing special about this shape and you can change it to other values\n# by modifying the `nz` variable (e.g. 128, 200, etc.).\n\n# Let's check that the generator produces an image with the correct shape (1, 3, 32, 32)\n\ninput_variable = torch.randn((1, 100, 1, 1, )).to(device)\nnetG(input_variable).shape",
"_____no_output_____"
]
],
[
[
"## The Discriminator\n\nAs mentioned, the discriminator, D, is a binary classification network that takes an image as input and outputs a scalar probability that the input image is real (as opposed to fake). Here, D takes a 3x32x32 input image, processes it through a series of Conv2d, BatchNorm2d, and LeakyReLU layers, and outputs the final probability through a Sigmoid activation function. This architecture can be extended with more layers if necessary for the problem, but there is significance to the use of the strided convolution, BatchNorm, and LeakyReLUs. The DCGAN paper mentions it is a good practice to use strided convolution rather than pooling to downsample because it lets the network learn its own pooling function. Also, batch norm and leaky relu functions promote healthy gradient flow, which is critical for the learning process of both G and D.",
"_____no_output_____"
]
],
[
[
"class Discriminator(nn.Module):\n    def __init__(self, ngpu):\n        super(Discriminator, self).__init__()\n        self.ngpu = ngpu\n        self.main = nn.Sequential(\n            # input is (nc) x 32 x 32\n            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),\n            nn.LeakyReLU(0.2, inplace=True),\n            # state size. (ndf) x 16 x 16\n            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),\n            nn.BatchNorm2d(ndf * 2),\n            nn.LeakyReLU(0.2, inplace=True),\n            # state size. (ndf*2) x 8 x 8\n            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),\n            nn.BatchNorm2d(ndf * 4),\n            nn.LeakyReLU(0.2, inplace=True),\n            # state size. (ndf*4) x 4 x 4\n            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),\n            nn.BatchNorm2d(ndf * 8),\n            nn.LeakyReLU(0.2, inplace=True),\n            # state size. (ndf*8) x 2 x 2\n            nn.Conv2d(ndf * 8, 1, 2, 2, 0, bias=False),\n            nn.Sigmoid()\n        )\n\n    def forward(self, input):\n        return self.main(input)",
"_____no_output_____"
],
[
"# Create the Discriminator\nnetD = Discriminator(ngpu).to(device)\n\n# Print the model\nprint(netD)",
"_____no_output_____"
],
[
"# The Discriminator is the model that should predict a single number from an input image.\n# This number is the probability that the input is real (as opposed to fake).\n\n# Let's check that the Discriminator returns a single number for an input of size (1, 3, 32, 32)\n\ninput_variable = torch.randn((1, 3, 32, 32, )).to(device)\nnetD(input_variable)",
"_____no_output_____"
],
[
"# Initialize BCELoss function\n# This is the loss function used in DCGAN\ncriterion = nn.BCELoss()\n\n# Create batch of latent vectors that we will use to visualize\n# the progression of the generator\nfixed_noise = torch.randn(64, nz, 1, 1, device=device)\n\n# Establish convention for real and fake labels during training\nreal_label = 1\nfake_label = 0\n\n# Setup Adam optimizers for both G and D\noptimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))\noptimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))",
"_____no_output_____"
],
[
"# Training Loop\n\n# Lists to keep track of progress\nimg_list = []\nG_losses = []\nD_losses = []\niters = 0\n\nprint(\"Starting Training Loop...\")\n# For each epoch\nfor epoch in range(num_epochs):\n # For each batch in the dataloader\n for i, data in enumerate(dataloader, 0):\n\n ############################\n # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))\n ###########################\n ## Train with all-real batch\n netD.zero_grad()\n # Format batch\n real_cpu = data[0].to(device)\n b_size = real_cpu.size(0)\n label = torch.full((b_size,), real_label, device=device)\n # Forward pass real batch through D\n output = netD(real_cpu).view(-1)\n # Calculate loss on all-real batch\n errD_real = criterion(output, label)\n # Calculate gradients for D in backward pass\n errD_real.backward()\n D_x = output.mean().item()\n\n ## Train with all-fake batch\n # Generate batch of latent vectors\n noise = torch.randn(b_size, nz, 1, 1, device=device)\n # Generate fake image batch with G\n fake = netG(noise)\n label.fill_(fake_label)\n # Classify all fake batch with D\n output = netD(fake.detach()).view(-1)\n # Calculate D's loss on the all-fake batch\n errD_fake = criterion(output, label)\n # Calculate the gradients for this batch\n errD_fake.backward()\n D_G_z1 = output.mean().item()\n # Add the gradients from the all-real and all-fake batches\n errD = errD_real + errD_fake\n # Update D\n optimizerD.step()\n\n ############################\n # (2) Update G network: maximize log(D(G(z)))\n ###########################\n netG.zero_grad()\n label.fill_(real_label) # fake labels are real for generator cost\n # Since we just updated D, perform another forward pass of all-fake batch through D\n output = netD(fake).view(-1)\n # Calculate G's loss based on this output\n errG = criterion(output, label)\n # Calculate gradients for G\n errG.backward()\n D_G_z2 = output.mean().item()\n # Update G\n optimizerG.step()\n\n # Output training stats\n if i % 50 == 0:\n print('[%d/%d][%d/%d]\\tLoss_D: %.4f\\tLoss_G: %.4f\\tD(x): %.4f\\tD(G(z)): %.4f / %.4f'\n % (epoch+1, num_epochs, i, len(dataloader),\n errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))\n\n # Save Losses for plotting later\n G_losses.append(errG.item())\n D_losses.append(errD.item())\n\n # Check how the generator is doing by saving G's output on fixed_noise\n if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):\n with torch.no_grad():\n fake = netG(fixed_noise).detach().cpu()\n img_list.append(vutils.make_grid(fake, padding=2, normalize=True))\n\n iters += 1",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,5))\nplt.title(\"Generator and Discriminator Loss During Training\")\nplt.plot(G_losses,label=\"G\")\nplt.plot(D_losses,label=\"D\")\nplt.xlabel(\"iterations\")\nplt.ylabel(\"Loss\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"#%%capture\nfig = plt.figure(figsize=(8,8))\nplt.axis(\"off\")\nims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]\nani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)\n\nHTML(ani.to_jshtml())",
"_____no_output_____"
],
[
"# Grab a batch of real images from the dataloader\nreal_batch = next(iter(dataloader))\n\n# Plot the real images\nplt.figure(figsize=(15,15))\nplt.subplot(1,2,1)\nplt.axis(\"off\")\nplt.title(\"Real Images\")\nplt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))\n\n# Plot the fake images from the last epoch\nplt.subplot(1,2,2)\nplt.axis(\"off\")\nplt.title(\"Fake Images\")\nplt.imshow(np.transpose(img_list[-1],(1,2,0)))\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Task\n\n1) Train for longer to see how good the results get\n\n2) Modify this model to take torchvision.datasets.SVHN as input\n\n3) Modify this model to take torchvision.datasets.MNIST as input\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d051f3fcba978d2d15037486bc1d72a2a6fae363 | 3,759 | ipynb | Jupyter Notebook | LookingPictures.ipynb | PacktPublishing/Applications-of-Statistical-Learning-with-Python | f1b265aeada7cbcca8f1215ca470158299c4c1df | [
"MIT"
] | 7 | 2018-06-26T16:07:35.000Z | 2021-11-08T13:10:38.000Z | LookingPictures.ipynb | PacktPublishing/Applications-of-Statistical-Learning-with-Python | f1b265aeada7cbcca8f1215ca470158299c4c1df | [
"MIT"
] | null | null | null | LookingPictures.ipynb | PacktPublishing/Applications-of-Statistical-Learning-with-Python | f1b265aeada7cbcca8f1215ca470158299c4c1df | [
"MIT"
] | 6 | 2018-05-10T21:31:08.000Z | 2021-08-16T13:49:24.000Z | 26.286713 | 432 | 0.572227 | [
[
[
"# Looking at the Pictures\n*Curtis Miller*\n\nIn this notebook we see the images in our dataset and create some helper tools for managing the data. First, let's load in the needed libraries.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport cv2\nimport matplotlib.pyplot as plt\nimport matplotlib\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"The faces are stored in a CSV file `fer2013.csv`, loaded in next.",
"_____no_output_____"
]
],
[
[
"faces = pd.read_csv(\"fer2013.csv\")\n\nfaces",
"_____no_output_____"
],
[
"faces.Usage.value_counts()",
"_____no_output_____"
]
],
[
[
"The faces themselves are in the `pixels` column of the `DataFrame`, in a string. We want to convert the faces to NumPy 48x48 arrays that can be plotted with matplotlib. The values themselves are the intensities of grayscale pixels. We split the strings on spaces and convert characters to their corresponding numbers, reshaping to a desired array.\n\nThis is all done with the following function.",
"_____no_output_____"
]
],
[
[
"def string_to_image(pixelstring):\n return np.array(pixelstring.split(' '), dtype=np.int16).reshape(48, 48)",
"_____no_output_____"
],
[
"plt.imshow(string_to_image(faces.pixels[0]))",
"_____no_output_____"
],
[
"plt.imshow(string_to_image(faces.pixels[8]))",
"_____no_output_____"
]
],
[
[
"As humans we would like to know what the codes in the `emotion` column represent. The following dictionary defines the mapping. We won't use it in training but it's useful when presenting.",
"_____no_output_____"
]
],
[
[
"emotion_code = {0: \"angry\",\n 1: \"disgust\",\n 2: \"fear\",\n 3: \"happy\",\n 4: \"sad\",\n 5: \"surprise\",\n 6: \"neutral\"}",
"_____no_output_____"
]
],
[
[
"The dataset is already very clean. The images wrap tightly around faces so there isn't much point in any further processing; we can go straight to training. Of course, if we wanted to use the classifier on an image not from this dataset we would have to find a way to use the classifier trained on this dataset for it. This will require detecting faces in that new, foreign image and resizing them to work with our classifier.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d051ff063600fc7c6fbd2f498a20b2d0c2e4dc2e | 336,598 | ipynb | Jupyter Notebook | SBM_experiment.ipynb | YuTian8328/flow-based-clustering | da293edbfac058f5908fc0ab057d3097f0becc47 | [
"MIT"
] | null | null | null | SBM_experiment.ipynb | YuTian8328/flow-based-clustering | da293edbfac058f5908fc0ab057d3097f0becc47 | [
"MIT"
] | null | null | null | SBM_experiment.ipynb | YuTian8328/flow-based-clustering | da293edbfac058f5908fc0ab057d3097f0becc47 | [
"MIT"
] | null | null | null | 217.862783 | 146,896 | 0.885632 | [
[
[
"%cd ../",
"/mnt/c/Users/mottd/OneDrive/Tiedostot/python_excel/FederatedLearning-master\n"
]
],
[
[
"## Stochastic Block Model Experiment",
"_____no_output_____"
],
[
"Before getting into the experiment details, let's review Algorithm 1 and the primal and dual updates.",
"_____no_output_____"
],
[
"### Algorithm 1",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# %load algorithm/main.py\n%time\nfrom sklearn.metrics import mean_squared_error\n\nfrom penalty import *\n\n\ndef algorithm_1(K, D, weight_vec, datapoints, true_labels, samplingset, lambda_lasso, penalty_func_name='norm1', calculate_score=False):\n '''\n :param K: the number of iterations\n :param D: the block incidence matrix\n :param weight_vec: a list containing the edges's weights of the graph\n :param datapoints: a dictionary containing the data of each node in the graph needed for the algorithm 1\n :param true_labels: a list containing the true labels of the nodes\n :param samplingset: the sampling set\n :param lambda_lasso: the parameter lambda\n :param penalty_func_name: the name of the penalty function used in the algorithm\n\n :return iteration_scores: the mean squared error of the predicted weight vectors in each iteration\n :return new_w: the predicted weigh vectors for each node\n '''\n\n Sigma = np.diag(np.full(weight_vec.shape, 0.9 / 2))\n '''\n Sigma: the block diagonal matrix Sigma\n '''\n T_matrix = np.diag(np.array((1.0 / (np.sum(abs(D), 0)))).ravel())\n '''\n T_matrix: the block diagonal matrix T\n '''\n\n if np.linalg.norm(np.dot(Sigma ** 0.5, D).dot(T_matrix ** 0.5), 2) > 1:\n print ('product norm', np.linalg.norm(np.dot(Sigma ** 0.5, D).dot(T_matrix ** 0.5), 2))\n\n E, N = D.shape\n m, n = datapoints[0]['features'].shape\n\n # define the penalty function\n if penalty_func_name == 'norm1':\n penalty_func = Norm1Pelanty(lambda_lasso, weight_vec, Sigma, n)\n\n elif penalty_func_name == 'norm2':\n penalty_func = Norm2Pelanty(lambda_lasso, weight_vec, Sigma, n)\n\n elif penalty_func_name == 'mocha':\n penalty_func = MOCHAPelanty(lambda_lasso, weight_vec, Sigma, n)\n\n else:\n raise Exception('Invalid penalty name')\n\n # starting algorithm 1\n\n new_w = np.array([np.zeros(n) for i in range(N)])\n '''\n new_w: the primal variable of the algorithm 1\n '''\n new_u = np.array([np.zeros(n) for i in range(E)])\n '''\n new_u: the dual variable of the algorithm 1\n '''\n\n iteration_scores = []\n for iterk in range(K):\n # if iterk % 100 == 0:\n # print ('iter:', iterk)\n prev_w = np.copy(new_w)\n\n # algorithm 1, line 2\n hat_w = new_w - np.dot(T_matrix, np.dot(D.T, new_u))\n\n for i in range(N):\n if i in samplingset: # algorithm 1, line 6\n\n optimizer = datapoints[i]['optimizer']\n new_w[i] = optimizer.optimize(datapoints[i]['features'], datapoints[i]['label'], hat_w[i], datapoints[i]['degree'])\n\n else:\n new_w[i] = hat_w[i]\n\n # algorithm 1, line 9\n tilde_w = 2 * new_w - prev_w\n new_u = new_u + np.dot(Sigma, np.dot(D, tilde_w))\n\n # algorithm 1, line 10\n new_u = penalty_func.update(new_u)\n\n # calculate the MSE of the predicted weight vectors\n if calculate_score:\n Y_pred = []\n for i in range(N):\n Y_pred.append(np.dot(datapoints[i]['features'], new_w[i]))\n\n iteration_scores.append(mean_squared_error(true_labels.reshape(N, m), Y_pred))\n\n # print (np.max(abs(new_w - prev_w)))\n\n return iteration_scores, new_w\n",
"CPU times: user 3 µs, sys: 1e+03 ns, total: 4 µs\nWall time: 7.15 µs\n"
]
],
[
[
"### Primal Update ",
"_____no_output_____"
],
[
"As you can see in the algorithm picture, the primal update needs an optimizer operator for the sampling set (line 6). We have implemented the optimizers discussed in the paper: both the logistic loss and squared error loss optimizers are implemented with PyTorch, and we have also implemented the squared error loss optimizer using the fixed point equation in the `Networked Linear Regression` section of the paper.",
"_____no_output_____"
]
],
[
[
"# %load algorithm/optimizer.py \nimport torch\nimport abc\nimport numpy as np\n\nfrom abc import ABC\n\n\n# The linear model which is implemented by pytorch\nclass TorchLinearModel(torch.nn.Module):\n def __init__(self, n):\n super(TorchLinearModel, self).__init__()\n self.linear = torch.nn.Linear(n, 1, bias=False)\n\n def forward(self, x):\n y_pred = self.linear(x)\n return y_pred\n\n\n# The abstract optimizer model which should have model, optimizer, and criterion as the input\nclass Optimizer(ABC):\n def __init__(self, model, optimizer, criterion):\n self.model = model\n self.optimizer = optimizer\n self.criterion = criterion\n\n @abc.abstractmethod\n def optimize(self, x_data, y_data, old_weight, regularizer_term):\n torch_old_weight = torch.from_numpy(np.array(old_weight, dtype=np.float32))\n self.model.linear.weight.data = torch_old_weight\n for iterinner in range(40):\n self.optimizer.zero_grad()\n y_pred = self.model(x_data)\n loss1 = self.criterion(y_pred, y_data)\n loss2 = 1 / (2 * regularizer_term) * torch.mean((self.model.linear.weight - torch_old_weight) ** 2) # + 10000*torch.mean((model.linear.bias+0.5)**2)#model.linear.weight.norm(2)\n loss = loss1 + loss2\n loss.backward()\n self.optimizer.step()\n\n return self.model.linear.weight.data.numpy()\n\n\n# The linear model in Networked Linear Regression section of the paper\nclass LinearModel:\n def __init__(self, degree, features, label):\n mtx1 = 2 * degree * np.dot(features.T, features).astype('float64')\n mtx1 += 1 * np.eye(mtx1.shape[0])\n mtx1_inv = np.linalg.inv(mtx1)\n\n mtx2 = 2 * degree * np.dot(features.T, label).T\n\n self.mtx1_inv = mtx1_inv\n self.mtx2 = mtx2\n\n def forward(self, x):\n mtx2 = x + self.mtx2\n mtx_inv = self.mtx1_inv\n\n return np.dot(mtx_inv, mtx2)\n\n\n# The Linear optimizer in Networked Linear Regression section of the paper\nclass LinearOptimizer(Optimizer):\n\n def __init__(self, model):\n super(LinearOptimizer, self).__init__(model, None, None)\n\n def optimize(self, x_data, y_data, old_weight, regularizer_term):\n return self.model.forward(old_weight)\n\n\n# The Linear optimizer model which is implemented by pytorch\nclass TorchLinearOptimizer(Optimizer):\n def __init__(self, model):\n criterion = torch.nn.MSELoss(reduction='mean')\n optimizer = torch.optim.RMSprop(model.parameters())\n super(TorchLinearOptimizer, self).__init__(model, optimizer, criterion)\n\n def optimize(self, x_data, y_data, old_weight, regularizer_term):\n return super(TorchLinearOptimizer, self).optimize(x_data, y_data, old_weight, regularizer_term)\n\n\n# The Logistic optimizer model which is implemented by pytorch\nclass TorchLogisticOptimizer(Optimizer):\n def __init__(self, model):\n criterion = torch.nn.BCELoss(reduction='mean')\n optimizer = torch.optim.RMSprop(model.parameters())\n super(TorchLogisticOptimizer, self).__init__(model, optimizer, criterion)\n\n def optimize(self, x_data, y_data, old_weight, regularizer_term):\n return super(TorchLogisticOptimizer, self).optimize(x_data, y_data, old_weight, regularizer_term)\n",
"_____no_output_____"
]
],
[
[
"### Dual Update ",
"_____no_output_____"
],
[
"As mentioned in the paper, the dual update has a penalty function (line 10), which is either norm1, norm2, or mocha.",
"_____no_output_____"
]
],
[
[
"# %load algorithm/penalty.py\nimport abc\nimport numpy as np\n\nfrom abc import ABC\n\n\n# The abstract penalty function which has a function update\nclass Penalty(ABC):\n def __init__(self, lambda_lasso, weight_vec, Sigma, n):\n self.lambda_lasso = lambda_lasso\n self.weight_vec = weight_vec\n self.Sigma = Sigma\n\n @abc.abstractmethod\n def update(self, new_u):\n pass\n\n\n# The norm2 penalty function\nclass Norm2Pelanty(Penalty):\n def __init__(self, lambda_lasso, weight_vec, Sigma, n):\n super(Norm2Pelanty, self).__init__(lambda_lasso, weight_vec, Sigma, n)\n self.limit = np.array(lambda_lasso * weight_vec)\n\n def update(self, new_u):\n normalized_u = np.where(np.linalg.norm(new_u, axis=1) >= self.limit)\n new_u[normalized_u] = (new_u[normalized_u].T * self.limit[normalized_u] / np.linalg.norm(new_u[normalized_u], axis=1)).T\n return new_u\n\n\n# The MOCHA penalty function\nclass MOCHAPelanty(Penalty):\n def __init__(self, lambda_lasso, weight_vec, Sigma, n):\n super(MOCHAPelanty, self).__init__(lambda_lasso, weight_vec, Sigma, n)\n self.normalize_factor = 1 + np.dot(2 * self.Sigma, 1/(self.lambda_lasso * self.weight_vec))\n\n def update(self, new_u):\n for i in range(new_u.shape[1]):\n new_u[:, i] /= self.normalize_factor\n\n return new_u\n\n\n# The norm1 penalty function\nclass Norm1Pelanty(Penalty):\n def __init__(self, lambda_lasso, weight_vec, Sigma, n):\n super(Norm1Pelanty, self).__init__(lambda_lasso, weight_vec, Sigma, n)\n self.limit = np.array([np.zeros(n) for i in range(len(weight_vec))])\n for i in range(n):\n self.limit[:, i] = lambda_lasso * weight_vec\n\n def update(self, new_u):\n normalized_u = np.where(abs(new_u) >= self.limit)\n new_u[normalized_u] = self.limit[normalized_u] * new_u[normalized_u] / abs(new_u[normalized_u])\n return new_u\n",
"_____no_output_____"
]
],
[
[
"## Create SBM Graph",
"_____no_output_____"
],
[
"The stochastic block model is a generative model for random graphs with some cluster structure. Two nodes within the same cluster of the empirical graph are connected by an edge with probability pin, and two nodes from different clusters are connected by an edge with probability pout. Each node $i \\in V$ represents a local dataset consisting of $m$ feature vectors $x^{(i,1)}, ... , x^{(i,m)} \\in R^n$. The feature vectors are i.i.d. realizations of a standard Gaussian random vector x ∼ N(0,I). The labels $y_1^{(i)}, . . . , y_m^{(i)} \\in R$ of the nodes $i \\in V$ are generated according to the linear model $y_r^{(i)} = (x^{(i, r)})^T w^{(i)} + \\epsilon$, with $\\epsilon ∼ N(0,\\sigma)$. To learn the weight $w^{(i)}$, we apply Algorithm 1 to a training set M obtained by randomly selecting 40% of the nodes.",
"_____no_output_____"
]
],
[
[
"from optimizer import *\nfrom torch.autograd import Variable\n#from graspy.simulations import sbm\n\n\ndef get_sbm_data(cluster_sizes, G, W, m=5, n=2, noise_sd=0, is_torch_model=True):\n '''\n :param cluster_sizes: a list containing the size of each cluster\n :param G: generated SBM graph with defined clusters using graspy.simulations\n :param W: a list containing the weight vectors for each cluster\n :param m, n: shape of features vector for each node\n :param pin: the probability of edges inside each cluster\n :param pout: the probability of edges between the clusters\n :param noise_sd: the standard deviation of the noise for calculating the labels\n \n :return B: adjacency matrix of the graph\n :return weight_vec: a list containing the edges's weights of the graph\n :return true_labels: a list containing the true labels of the nodes\n :return datapoints: a dictionary containing the data of each node in the graph needed for the algorithm 1 \n '''\n\n N = len(G)\n E = int(G.number_of_edges())#int(len(np.argwhere(G > 0))/2)\n '''\n N: total number of nodes\n E: total number of edges\n '''\n \n \n # create B(adjacency matrix) and edges's weights vector(weight_vec) based on the graph G\n B = np.zeros((E, N))\n '''\n B: adjacency matrix of the graph with the shape of E*N\n '''\n weight_vec = np.zeros(E)\n '''\n weight_vec: a list containing the edges's weights of the graph with the shape of E\n '''\n \n cnt = 0\n for i, j in G.edges:\n if i > j:\n continue\n B[cnt, i] = 1\n B[cnt, j] = -1\n\n weight_vec[cnt] = 1\n cnt += 1\n \n \n # create the data of each node needed for the algorithm 1 \n \n node_degrees = np.array((1.0 / (np.sum(abs(B), 0)))).ravel()\n '''\n node_degrees: a list containing the nodes degree for the alg1 (1/N_i)\n '''\n \n datapoints = {}\n '''\n datapoints: a dictionary containing the data of each node in the graph needed for the algorithm 1,\n which are features, label, degree, and also the optimizer model for each node\n '''\n true_labels = []\n '''\n true_labels: the true labels for the nodes of the graph\n '''\n cnt = 0\n for i, cluster_size in enumerate(cluster_sizes):\n for j in range(cluster_size):\n features = np.random.normal(loc=0.0, scale=1.0, size=(m, n))\n '''\n features: the feature vector of node i which are i.i.d. realizations of a standard Gaussian random vector x~N(0,I)\n '''\n label = np.dot(features, W[i]) + np.random.normal(0,noise_sd)\n '''\n label: the label of the node i that is generated according to the linear model y = x^T w + e\n '''\n \n true_labels.append(label)\n\n if is_torch_model:\n model = TorchLinearModel(n)\n optimizer = TorchLinearOptimizer(model)\n features = Variable(torch.from_numpy(features)).to(torch.float32)\n label = Variable(torch.from_numpy(label)).to(torch.float32) \n\n else:\n\n model = LinearModel(node_degrees[i], features, label)\n optimizer = LinearOptimizer(model) \n '''\n model : the linear model for the node i \n optimizer : the optimizer model for the node i \n ''' \n \n datapoints[cnt] = {\n 'features': features,\n 'degree': node_degrees[i],\n 'label': label,\n 'optimizer': optimizer\n }\n cnt += 1\n \n\n return B, weight_vec, np.array(true_labels), datapoints\n\n\n",
"_____no_output_____"
]
],
[
[
"### Compare Results",
"_____no_output_____"
],
[
"For the results, we compare the MSE of Algorithm 1 with plain linear regression \nand decision tree regression.",
"_____no_output_____"
]
],
[
[
"# %load results/compare_results.py\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.metrics import mean_squared_error\n\n\ndef get_algorithm1_MSE(datapoints, predicted_w, samplingset):\n    '''\n    :param datapoints: a dictionary containing the data of each node in the graph needed for the algorithm 1\n    :param predicted_w: the predicted weight vectors for each node\n    :param samplingset: the sampling set for the algorithm 1\n\n    :return alg1_MSE: the MSE of the algorithm 1 for all the nodes, the samplingset and other nodes (test set)\n    '''\n    not_samplingset = [i for i in range(len(datapoints)) if i not in samplingset]\n\n    true_labels = []\n    pred_labels = []\n    for i in range(len(datapoints)):\n        features = np.array(datapoints[i]['features'])\n        label = np.array(datapoints[i]['label'])\n        true_labels.append(label)\n\n        pred_labels.append(np.dot(features, predicted_w[i]))\n\n    pred_labels = np.array(pred_labels)\n    true_labels = np.array(true_labels)\n\n    alg1_MSE = {'total': mean_squared_error(true_labels, pred_labels),\n                'train': mean_squared_error(true_labels[samplingset], pred_labels[samplingset]),\n                'test': mean_squared_error(true_labels[not_samplingset], pred_labels[not_samplingset])}\n\n    return alg1_MSE\n\n\ndef get_linear_regression_MSE(x, y, samplingset, not_samplingset):\n    '''\n    :param x: a list containing the features of the nodes\n    :param y: a list containing the labels of the nodes\n    :param samplingset: the training dataset\n    :param not_samplingset: the test dataset\n    :return linear_regression_MSE: the MSE of linear regression for all the nodes, the samplingset and other nodes (test set)\n    '''\n\n    model = LinearRegression().fit(x[samplingset], y[samplingset])\n    pred_y = model.predict(x)\n\n    linear_regression_MSE = {'total': mean_squared_error(y, pred_y),\n                             'train': mean_squared_error(y[samplingset],\n                                                         pred_y[samplingset]),\n                             'test': mean_squared_error(y[not_samplingset],\n                                                        pred_y[not_samplingset])}\n\n    return linear_regression_MSE\n\n\ndef get_decision_tree_MSE(x, y, samplingset, not_samplingset):\n    '''\n    :param x: a list containing the features of the nodes\n    :param y: a list containing the labels of the nodes\n    :param samplingset: the training dataset\n    :param not_samplingset: the test dataset\n    :return decision_tree_MSE: the MSE of decision tree for all the nodes, the samplingset and other nodes (test set)\n    '''\n\n    max_depth = 2\n\n    regressor = DecisionTreeRegressor(max_depth=max_depth)\n    regressor.fit(x[samplingset], y[samplingset])\n    pred_y = regressor.predict(x)\n\n    decision_tree_MSE = {'total': mean_squared_error(y, pred_y),\n                         'train': mean_squared_error(y[samplingset],\n                                                     pred_y[samplingset]),\n                         'test': mean_squared_error(y[not_samplingset],\n                                                    pred_y[not_samplingset])}\n    return decision_tree_MSE\n\n\ndef get_scores(datapoints, predicted_w, samplingset):\n    N = len(datapoints)\n    '''\n    N : the total number of nodes\n    '''\n\n    # calculate algorithm1 MSE\n    alg_1_score = get_algorithm1_MSE(datapoints, predicted_w, samplingset)\n\n    # prepare the data for calculating the linear regression and decision tree regression MSEs\n    X = []\n    '''\n    X: an array containing the features of all the nodes\n    '''\n    true_labels = []\n    '''\n    true_labels: an array containing the labels of all the nodes\n    '''\n    for i in range(len(datapoints)):\n        X.append(np.array(datapoints[i]['features']))\n        true_labels.append(np.array(datapoints[i]['label']))\n\n    X = np.array(X)\n    true_labels = np.array(true_labels)\n    m, n = X[0].shape\n\n    x = X.reshape(-1, n)\n    y = true_labels.reshape(-1, 1)\n\n    reformated_samplingset = []\n    for item in samplingset:\n        for i in range(m):\n            reformated_samplingset.append(m * item + i)\n    reformated_not_samplingset = [i for i in range(m * N) if i not in reformated_samplingset]\n\n    # calculate linear regression MSE\n    linear_regression_score = get_linear_regression_MSE(x, y, reformated_samplingset, reformated_not_samplingset)\n\n    # calculate decision tree MSE\n    decision_tree_score = get_decision_tree_MSE(x, y, reformated_samplingset, reformated_not_samplingset)\n\n    return alg_1_score, linear_regression_score, decision_tree_score\n",
"_____no_output_____"
]
],
[
[
"### SBM with Two Clusters",
"_____no_output_____"
],
[
"This SBM has two clusters $|C_1| = |C_2| = 100$.\nTwo nodes within the same cluster are connected by an edge with probability `pin=0.5`, \nand two nodes from different clusters are connected by an edge with probability `pout=0.01`. \nEach node $i \\in V$ represents a local dataset consisting of feature vectors $x^{(i,1)}, ... , x^{(i,5)} \\in R^2$.\nThe feature vectors are i.i.d. realizations of a standard Gaussian random vector x ~ N(0,I).\nThe labels $y_1^{(i)}, . . . , y_5^{(i)} \\in R$ for each node $i \\in V$\nare generated according to the linear model $y_r^{(i)} = (x^{(i, r)})^T w^{(i)} + \\epsilon$, with $\\epsilon = 0$. \nThe tuning parameter $\\lambda$ in Algorithm 1 \nis manually chosen, guided by the resulting MSE, as $\\lambda=0.01$ for norm1 and norm2 and $\\lambda=0.05$ for the mocha penalty function. \nTo learn the weight $w^{(i)}$, we apply Algorithm 1 to a training set M obtained by randomly selecting 40% of the nodes and use the rest as a test set. For the results, we compare the mean MSE of Algorithm 1 with plain linear regression and decision tree regression with respect to the different random sampling sets.",
"_____no_output_____"
]
],
[
[
"#from graspy.simulations import sbm\nimport networkx as nx\n\n\ndef get_sbm_2blocks_data(m=5, n=2, pin=0.5, pout=0.01, noise_sd=0, is_torch_model=True):\n '''\n :param m, n: shape of features vector for each node\n :param pin: the probability of edges inside each cluster\n :param pout: the probability of edges between the clusters\n :param noise_sd: the standard deviation of the noise for calculating the labels\n \n :return B: adjacency matrix of the graph\n :return weight_vec: a list containing the edges's weights of the graph\n :return true_labels: a list containing the true labels of the nodes\n :return datapoints: a dictionary containing the data of each node in the graph needed for the algorithm 1 \n '''\n cluster_sizes = [100, 100]\n\n # generate graph G which is a SBM wich 2 clusters\n #G = sbm(n=cluster_sizes, p=[[pin, pout],[pout, pin]])\n probs = [[pin, pout], [pout, pin]]\n G = nx.stochastic_block_model(cluster_sizes, probs)\n '''\n G: generated SBM graph with 2 clusters\n ''' \n \n # define weight vectors for each cluster of the graph\n \n W1 = np.array([2, 2])\n '''\n W1: the weigh vector for the first cluster\n '''\n W2 = np.array([-2, 2])\n '''\n W2: the weigh vector for the second cluster\n '''\n \n W = [W1, W2]\n \n \n return get_sbm_data(cluster_sizes, G, W, m, n, noise_sd, is_torch_model)\n\n",
"_____no_output_____"
],
[
"a = nx.stochastic_block_model([100, 100], [[0.1,0.01], [0.01,0.1]])\nnx.draw(a,with_labels=True)",
"_____no_output_____"
]
],
[
[
"Plot the MSE with respect to the different random sampling sets for each penalty function; the plots are on a log scale.",
"_____no_output_____"
]
],
[
[
"%time\nimport random \nimport matplotlib.pyplot as plt\n\nfrom collections import defaultdict\n\n\nPENALTY_FUNCS = ['norm1', 'norm2', 'mocha']\n\nLAMBDA_LASSO = {'norm1': 0.01, 'norm2': 0.01, 'mocha': 0.05}\n\nK = 1000\n\nB, weight_vec, true_labels, datapoints = get_sbm_2blocks_data(pin=0.5, pout=0.01, is_torch_model=False)\nE, N = B.shape\n\nalg1_scores = defaultdict(list)\nlinear_regression_scores = defaultdict(list)\ndecision_tree_scores = defaultdict(list)\n##samplingset = random.sample([i for i in range(N)], k=int(0.4* N))\n##lambda_lasso = LAMBDA_LASSO['mocha']\n##algorithm_1(K, B, weight_vec, datapoints, true_labels, samplingset, lambda_lasso, PENALTY_FUNCS[0])\n\nnum_tries = 5\nfor i in range(num_tries):\n samplingset = random.sample([i for i in range(N)], k=int(0.4* N))\n\n for penalty_func in PENALTY_FUNCS:\n\n lambda_lasso = LAMBDA_LASSO[penalty_func]\n _, predicted_w = algorithm_1(K, B, weight_vec, datapoints, true_labels, samplingset, lambda_lasso, penalty_func)\n\n alg1_score, linear_regression_score, decision_tree_score = get_scores(datapoints, predicted_w, samplingset)\n \n alg1_scores[penalty_func].append(alg1_score)\n linear_regression_scores[penalty_func].append(linear_regression_score)\n decision_tree_scores[penalty_func].append(decision_tree_score)\n",
"CPU times: user 3 µs, sys: 1 µs, total: 4 µs\nWall time: 7.15 µs\n"
],
[
"%time\nlabels = ['alg1,norm1', 'alg1,norm2', 'alg1,mocha', 'linear reg', 'decision tree']\nx_pos = np.arange(len(labels))\n \n \nprint('algorithm 1, norm1:', \n '\\n mean train MSE:', np.mean([item['train'] for item in alg1_scores['norm1']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in alg1_scores['norm1']]))\n\nprint('algorithm 1, norm2:', \n '\\n mean train MSE:', np.mean([item['train'] for item in alg1_scores['norm2']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in alg1_scores['norm2']]))\n\nprint('algorithm 1, mocha:', \n '\\n mean train MSE:', np.mean([item['train'] for item in alg1_scores['mocha']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in alg1_scores['mocha']])) \n \nprint('linear regression:', \n '\\n mean train MSE:', np.mean([item['train'] for item in linear_regression_scores['norm1']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in linear_regression_scores['norm1']]))\n\nprint('decision tree:', \n '\\n mean train MSE:', np.mean([item['train'] for item in decision_tree_scores['norm1']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in decision_tree_scores['norm1']]))\n \nalg1_norm1_score = [item['total'] for item in alg1_scores['norm1']]\nalg1_norm2_score = [item['total'] for item in alg1_scores['norm2']]\nalg1_mocha_score = [item['total'] for item in alg1_scores['mocha']] \nlinear_regression_score = [item['total'] for item in linear_regression_scores['norm1']]\ndecision_tree_score = [item['total'] for item in decision_tree_scores['norm1']]\n\nmean_MSEs = [\n np.mean(alg1_norm1_score), \n np.mean(alg1_norm2_score), \n np.mean(alg1_mocha_score), \n np.mean(linear_regression_score), \n np.mean(decision_tree_score)\n]\n\nstd_MSEs = [\n np.std(alg1_norm1_score), \n np.std(alg1_norm2_score),\n np.std(alg1_mocha_score), \n np.std(linear_regression_score), \n np.std(decision_tree_score)]\n\n\nfig, ax = plt.subplots()\nax.bar(x_pos, mean_MSEs,\n yerr=std_MSEs,\n align='center',\n alpha=0.5,\n ecolor='black',\n capsize=20)\nax.set_ylabel('MSE')\nax.set_xticks(x_pos)\nax.set_xticklabels(labels)\nax.set_yscale('log')\nax.set_title('error bars plot')\nplt.show()\nplt.close()\n ",
"CPU times: user 3 µs, sys: 1 µs, total: 4 µs\nWall time: 26.9 µs\nalgorithm 1, norm1: \n mean train MSE: 8.845062633626295e-06 \n mean test MSE: 8.411817666751793e-06\nalgorithm 1, norm2: \n mean train MSE: 8.937548539721603e-06 \n mean test MSE: 8.583071087032906e-06\nalgorithm 1, mocha: \n mean train MSE: 0.0011548714912415193 \n mean test MSE: 0.059934032754604294\nlinear regression: \n mean train MSE: 4.174356924195071 \n mean test MSE: 3.993515488232095\ndecision tree: \n mean train MSE: 4.198915999492509 \n mean test MSE: 4.493515851377256\n"
]
],
[
[
"Plot the MSE with respect to the different noise standard deviations (0.01, 0.1, 1.0) for each penalty function; as you can see, Algorithm 1 is somewhat robust to the noise.",
"_____no_output_____"
]
],
[
[
"%time\nimport random\nimport matplotlib.pyplot as plt\n\n\nPENALTY_FUNCS = ['norm1', 'norm2', 'mocha']\n\nlambda_lasso = 0.01\n\nK = 20\nsampling_ratio = 0.6\npouts = [0.01, 0.1, 0.2, 0.4, 0.6]\ncolors = ['steelblue', 'darkorange', 'green']\n\nfor penalty_func in PENALTY_FUNCS:\n print('penalty_func:', penalty_func)\n\n for i, noise in enumerate([0.01, 0.1, 1.0]):\n MSEs_mean = {}\n MSEs_std = {}\n \n for pout in pouts:\n \n num_tries = 5\n pout_mses = []\n for j in range(num_tries):\n B, weight_vec, true_labels, datapoints = get_sbm_2blocks_data(pin=0.5, pout=pout, noise_sd=noise, is_torch_model=False)\n E, N = B.shape\n\n samplingset = random.sample([i for i in range(N)], k=int(sampling_ratio * N))\n\n _, predicted_w = algorithm_1(K, B, weight_vec, datapoints, true_labels, samplingset, lambda_lasso, penalty_func)\n\n alg1_score, _, _ = get_scores(datapoints, predicted_w, samplingset)\n pout_mses.append(alg1_score['total'])\n MSEs_mean[pout] = np.mean(pout_mses)\n MSEs_std[pout] = np.std(pout_mses)\n\n plt.errorbar(list(MSEs_mean.keys()), list(MSEs_mean.values()), yerr=list(MSEs_std.values()), \n ecolor=colors[i], capsize=3,\n label='noise=' + str(noise), c=colors[i])\n\n print('noise', noise)\n print(' MSEs:', MSEs_mean)\n\n plt.xlabel('p_out')\n plt.ylabel('MSE')\n plt.legend(loc='best')\n plt.title('Penalty function : %s' % penalty_func)\n plt.show()\n plt.close()",
"CPU times: user 0 ns, sys: 0 ns, total: 0 ns\nWall time: 29.3 µs\npenalty_func: norm1\nnoise 0.01\n MSEs: {0.01: 2.705315973442155, 0.1: 2.8803085633466834, 0.2: 3.123534394242319, 0.4: 3.118645741846799, 0.6: 3.2209511562160515}\nnoise 0.1\n MSEs: {0.01: 2.858618729737168, 0.1: 2.8760340056295552, 0.2: 3.0985472679149177, 0.4: 3.166597939776252, 0.6: 3.259783197200458}\nnoise 1.0\n MSEs: {0.01: 3.940328318550642, 0.1: 3.8713989829443323, 0.2: 3.8776937800828435, 0.4: 4.023545611925063, 0.6: 4.102011359863877}\n"
]
],
[
[
"Plot the MSE with respect to the different sampling ratios (0.2, 0.4, 0.6) for each penalty function",
"_____no_output_____"
]
],
[
[
"import random\nimport matplotlib.pyplot as plt\n\n\nPENALTY_FUNCS = ['norm1', 'norm2', 'mocha']\n\nlambda_lasso = 0.01\n\nK = 30\nsampling_ratio = 0.6\n\npouts = [0.01, 0.1, 0.2, 0.4, 0.6]\ncolors = ['steelblue', 'darkorange', 'green']\n\nfor penalty_func in PENALTY_FUNCS:\n print('penalty_func:', penalty_func)\n\n for i, sampling_ratio in enumerate([0.2, 0.4, 0.6]):\n MSEs_mean = {}\n MSEs_std = {}\n \n for pout in pouts:\n \n num_tries = 5\n pout_mses = []\n for j in range(num_tries):\n\n B, weight_vec, true_labels, datapoints = get_sbm_2blocks_data(pin=0.5, pout=pout, is_torch_model=False)\n E, N = B.shape\n\n samplingset = random.sample([i for i in range(N)], k=int(sampling_ratio * N))\n\n _, predicted_w = algorithm_1(K, B, weight_vec, datapoints, true_labels, samplingset, lambda_lasso, penalty_func)\n\n alg1_score, _, _ = get_scores(datapoints, predicted_w, samplingset)\n pout_mses.append(alg1_score['total'])\n MSEs_mean[pout] = np.mean(pout_mses)\n MSEs_std[pout] = np.std(pout_mses)\n\n plt.errorbar(list(MSEs_mean.keys()), list(MSEs_mean.values()), yerr=list(MSEs_std.values()), \n ecolor=colors[i], capsize=3,\n label='M=' + str(sampling_ratio), c=colors[i])\n \n print('M:', sampling_ratio)\n print('MSE:', MSEs_mean)\n \n plt.xlabel('p_out')\n plt.ylabel('MSE')\n plt.legend(loc='best')\n plt.title('Penalty function : %s' % penalty_func)\n plt.show()\n plt.close()\n ",
"penalty_func: norm1\nM: 0.2\nMSE: {0.01: 6.011022085530584, 0.1: 5.854915785166783, 0.2: 6.136745451677013, 0.4: 6.165292827085321, 0.6: 6.495651188025879}\nM: 0.4\nMSE: {0.01: 4.224003404596983, 0.1: 4.423759609325218, 0.2: 4.394123644502406, 0.4: 4.516906390091848, 0.6: 4.494963955858758}\nM: 0.6\nMSE: {0.01: 2.7074374532565324, 0.1: 2.668277009267647, 0.2: 2.79190605215051, 0.4: 3.1706581302101258, 0.6: 3.0742020650648056}\n"
]
],
[
[
"### SBM with Five Clusters",
"_____no_output_____"
],
[
"The sizes of the clusters are {70, 10, 50, 100, 150} \nwith random weight vectors $\\in R^2$ selected uniformly from $[0,1)$. \nWe run Algorithm 1 with a fixed `pin = 0.5` and `pout = 0.001`, \nand a fixed number of 1000 iterations. Each node $i \\in V$ represents a local dataset consisting of feature vectors $x^{(i,1)}, ... , x^{(i,5)} \\in R^2$.\nThe feature vectors are i.i.d. realizations of a standard Gaussian random vector x ~ N(0,I).\nThe labels $y_1^{(i)}, . . . , y_5^{(i)} \\in R$ for each node $i \\in V$\nare generated according to the linear model $y_r^{(i)} = (x^{(i, r)})^T w^{(i)} + \\epsilon$, with $\\epsilon = 0$. The tuning parameter $\\lambda$ in Algorithm 1 \nis manually chosen, guided by the resulting MSE, as $\\lambda=0.01$ for norm1 and norm2 and $\\lambda=0.05$ for the mocha penalty function. \nWe assume that labels $y^{(i)}$ are available for 20% of the graph nodes. We randomly choose the training set M \nand use the rest as a test set.\nFor the results, we compare the mean MSE of Algorithm 1 with plain linear regression \nand decision tree regression with respect to the different random sampling sets.",
"_____no_output_____"
]
],
[
[
"from graspy.simulations import sbm\n\n\ndef get_sbm_5blocks_data(m=5, n=2, pin=0.5, pout=0.01, noise_sd=0, is_torch_model=True):\n    '''\n    :param m, n: shape of features vector for each node\n    :param pin: the probability of edges inside each cluster\n    :param pout: the probability of edges between the clusters\n    :param noise_sd: the standard deviation of the noise for calculating the labels\n    \n    :return B: adjacency matrix of the graph\n    :return weight_vec: a list containing the edges's weights of the graph\n    :return true_labels: a list containing the true labels of the nodes\n    :return datapoints: a dictionary containing the data of each node in the graph needed for the algorithm 1 \n    '''\n    cluster_sizes = [70, 10, 50, 100, 150]\n    \n    p = [[pin if i==j else pout for i in range(len(cluster_sizes))] for j in range(len(cluster_sizes))]\n\n    # generate graph G which is an SBM with 5 clusters\n    G = sbm(n=cluster_sizes, p=p)\n    '''\n    G: generated SBM graph with 5 clusters\n    '''\n    \n    # define weight vectors for each cluster of the graph\n    W = []\n    for i in range(len(cluster_sizes)):\n        # the weight vector for the ith cluster\n        W.append(np.random.random(n))\n    \n    \n    return get_sbm_data(cluster_sizes, G, W, m, n, noise_sd, is_torch_model)\n\n\n",
"_____no_output_____"
],
[
"import random \n\n \nPENALTY_FUNCS = ['norm1', 'norm2', 'mocha']\n\nLAMBDA_LASSO = {'norm1': 0.01, 'norm2': 0.01, 'mocha': 0.05}\n\nK = 1000\n\nB, weight_vec, true_labels, datapoints = get_sbm_5blocks_data(pin=0.5, pout=0.001, is_torch_model=False)\nE, N = B.shape\n\nalg1_scores = defaultdict(list)\nlinear_regression_scores = defaultdict(list)\ndecision_tree_scores = defaultdict(list)\n\nnum_tries = 5\nfor i in range(num_tries):\n samplingset = random.sample([i for i in range(N)], k=int(0.2* N))\n\n for penalty_func in PENALTY_FUNCS:\n\n lambda_lasso = LAMBDA_LASSO[penalty_func]\n _, predicted_w = algorithm_1(K, B, weight_vec, datapoints, true_labels, samplingset, lambda_lasso, penalty_func)\n\n alg1_score, linear_regression_score, decision_tree_score = get_scores(datapoints, predicted_w, samplingset)\n \n alg1_scores[penalty_func].append(alg1_score)\n linear_regression_scores[penalty_func].append(linear_regression_score)\n decision_tree_scores[penalty_func].append(decision_tree_score)\n ",
"_____no_output_____"
],
[
"labels = ['alg1,norm1', 'alg1,norm2', 'alg1,mocha', 'linear reg', 'decision tree']\nx_pos = np.arange(len(labels))\n \n \nprint('algorithm 1, norm1:', \n '\\n mean train MSE:', np.mean([item['train'] for item in alg1_scores['norm1']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in alg1_scores['norm1']]))\n\nprint('algorithm 1, norm2:', \n '\\n mean train MSE:', np.mean([item['train'] for item in alg1_scores['norm2']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in alg1_scores['norm2']]))\n\nprint('algorithm 1, mocha:', \n '\\n mean train MSE:', np.mean([item['train'] for item in alg1_scores['mocha']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in alg1_scores['mocha']])) \n \nprint('linear regression:', \n '\\n mean train MSE:', np.mean([item['train'] for item in linear_regression_scores['norm1']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in linear_regression_scores['norm1']]))\n\nprint('decision tree:', \n '\\n mean train MSE:', np.mean([item['train'] for item in decision_tree_scores['norm1']]),\n '\\n mean test MSE:', np.mean([item['test'] for item in decision_tree_scores['norm1']]))\n \nalg1_norm1_score = [item['total'] for item in alg1_scores['norm1']]\nalg1_norm2_score = [item['total'] for item in alg1_scores['norm2']]\nalg1_mocha_score = [item['total'] for item in alg1_scores['mocha']] \nlinear_regression_score = [item['total'] for item in linear_regression_scores['norm1']]\ndecision_tree_score = [item['total'] for item in decision_tree_scores['norm1']]\n\nmean_MSEs = [\n np.mean(alg1_norm1_score), \n np.mean(alg1_norm2_score), \n np.mean(alg1_mocha_score), \n np.mean(linear_regression_score), \n np.mean(decision_tree_score)\n]\n\nstd_MSEs = [\n np.std(alg1_norm1_score), \n np.std(alg1_norm2_score),\n np.std(alg1_mocha_score), \n np.std(linear_regression_score), \n np.std(decision_tree_score)]\n\n\nfig, ax = plt.subplots()\nax.bar(x_pos, mean_MSEs,\n yerr=std_MSEs,\n align='center',\n alpha=0.5,\n ecolor='black',\n capsize=20)\nax.set_ylabel('MSE')\nax.set_xticks(x_pos)\nax.set_xticklabels(labels)\nax.set_yscale('log')\nax.set_title('error bars plot')\nplt.show()\nplt.close()\n ",
"_____no_output_____"
],
[
"import scipy\nversion = scipy.version.version\n\nprint(version)\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d052019d6124bb1365f211ec800f9a2e85951635 | 48,382 | ipynb | Jupyter Notebook | notebooks/community/ml_ops/stage3/get_started_with_custom_training_pipeline_components.ipynb | changlan/vertex-ai-samples | 639ecb962e2ca8ddcd0b8e94fc81c96ed85a34b7 | [
"Apache-2.0"
] | null | null | null | notebooks/community/ml_ops/stage3/get_started_with_custom_training_pipeline_components.ipynb | changlan/vertex-ai-samples | 639ecb962e2ca8ddcd0b8e94fc81c96ed85a34b7 | [
"Apache-2.0"
] | null | null | null | notebooks/community/ml_ops/stage3/get_started_with_custom_training_pipeline_components.ipynb | changlan/vertex-ai-samples | 639ecb962e2ca8ddcd0b8e94fc81c96ed85a34b7 | [
"Apache-2.0"
] | null | null | null | 39.112369 | 441 | 0.531396 | [
[
[
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# E2E ML on GCP: MLOps stage 3 : formalization: get started with custom training pipeline components\n\n<table align=\"left\">\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/ml_ops_stage3/get_started_with_custom_training_pipeline_components.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/ml_ops_stage3/get_started_with_custom_training_pipeline_components.ipynb\">\n Open in Google Cloud Notebooks\n </a>\n </td>\n</table>\n<br/><br/><br/>",
"_____no_output_____"
],
[
"## Overview\n\n\nThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 3 (formalization): getting started with custom training pipeline components.",
"_____no_output_____"
],
[
"### Dataset\n\nThe dataset used for this tutorial is the [Flowers dataset](https://www.tensorflow.org/datasets/catalog/tf_flowers) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.",
"_____no_output_____"
],
[
"## Installations\n\nInstall the packages required for executing the MLOps notebooks *one time only*.",
"_____no_output_____"
]
],
[
[
"ONCE_ONLY = False\nif ONCE_ONLY:\n ! pip3 install -U tensorflow==2.5 $USER_FLAG\n ! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG\n ! pip3 install -U tensorflow-transform==1.2 $USER_FLAG\n ! pip3 install -U tensorflow-io==0.18 $USER_FLAG\n ! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG\n ! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG\n ! pip3 install --upgrade google-cloud-bigquery $USER_FLAG\n ! pip3 install --upgrade google-cloud-logging $USER_FLAG\n ! pip3 install --upgrade apache-beam[gcp] $USER_FLAG\n ! pip3 install --upgrade pyarrow $USER_FLAG\n ! pip3 install --upgrade cloudml-hypertune $USER_FLAG\n ! pip3 install --upgrade kfp $USER_FLAG",
"_____no_output_____"
]
],
[
[
"### Restart the kernel\n\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.",
"_____no_output_____"
]
],
[
[
"import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"_____no_output_____"
]
],
[
[
"#### Set your project ID\n\n**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.",
"_____no_output_____"
]
],
[
[
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}",
"_____no_output_____"
],
[
"if PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)",
"_____no_output_____"
],
[
"! gcloud config set project $PROJECT_ID",
"_____no_output_____"
]
],
[
[
"#### Region\n\nYou can also change the `REGION` variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\n- Americas: `us-central1`\n- Europe: `europe-west4`\n- Asia Pacific: `asia-east1`\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\n\nLearn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations).",
"_____no_output_____"
]
],
[
[
"REGION = \"us-central1\" # @param {type: \"string\"}",
"_____no_output_____"
]
],
[
[
"#### Timestamp\n\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.",
"_____no_output_____"
]
],
[
[
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"_____no_output_____"
]
],
[
[
"### Create a Cloud Storage bucket\n\n**The following steps are required, regardless of your notebook environment.**\n\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\n\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.",
"_____no_output_____"
]
],
[
[
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}",
"_____no_output_____"
],
[
"if BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP",
"_____no_output_____"
]
],
[
[
"**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.",
"_____no_output_____"
]
],
[
[
"! gsutil mb -l $REGION $BUCKET_NAME",
"_____no_output_____"
]
],
[
[
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"_____no_output_____"
]
],
[
[
"! gsutil ls -al $BUCKET_NAME",
"_____no_output_____"
]
],
[
[
"#### Service Account\n\n**If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below.",
"_____no_output_____"
]
],
[
[
"SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}",
"_____no_output_____"
],
[
"if (\n SERVICE_ACCOUNT == \"\"\n or SERVICE_ACCOUNT is None\n or SERVICE_ACCOUNT == \"[your-service-account]\"\n):\n # Get your GCP project id from gcloud\n shell_output = !gcloud auth list 2>/dev/null\n SERVICE_ACCOUNT = shell_output[2].strip()\n print(\"Service Account:\", SERVICE_ACCOUNT)",
"_____no_output_____"
]
],
[
[
"#### Set service account access for Vertex AI Pipelines\n\nRun the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.",
"_____no_output_____"
]
],
[
[
"! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME\n\n! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME",
"_____no_output_____"
]
],
[
[
"### Set up variables\n\nNext, set up some variables used throughout the tutorial.\n### Import libraries and define constants",
"_____no_output_____"
]
],
[
[
"import google.cloud.aiplatform as aip",
"_____no_output_____"
],
[
"import json\n\nfrom kfp import dsl\nfrom kfp.v2 import compiler\nfrom kfp.v2.dsl import component",
"_____no_output_____"
]
],
[
[
"### Initialize Vertex AI SDK for Python\n\nInitialize the Vertex AI SDK for Python for your project and corresponding bucket.",
"_____no_output_____"
]
],
[
[
"aip.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)",
"_____no_output_____"
]
],
[
[
"#### Set hardware accelerators\n\nYou can set hardware accelerators for training and prediction.\n\nSet the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:\n\n (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\n\nOtherwise specify `(None, None)` to use a container image to run on a CPU.\n\nLearn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).\n\n*Note*: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.",
"_____no_output_____"
]
],
[
[
"if os.getenv(\"IS_TESTING_TRAIN_GPU\"):\n TRAIN_GPU, TRAIN_NGPU = (\n aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_TRAIN_GPU\")),\n )\nelse:\n TRAIN_GPU, TRAIN_NGPU = (aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 1)\n\nif os.getenv(\"IS_TESTING_DEPLOY_GPU\"):\n DEPLOY_GPU, DEPLOY_NGPU = (\n aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_DEPLOY_GPU\")),\n )\nelse:\n DEPLOY_GPU, DEPLOY_NGPU = (None, None)",
"_____no_output_____"
]
],
[
[
"#### Set pre-built containers\n\nSet the pre-built Docker container image for training and prediction.\n\n\nFor the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).\n\n\nFor the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).",
"_____no_output_____"
]
],
[
[
"if os.getenv(\"IS_TESTING_TF\"):\n TF = os.getenv(\"IS_TESTING_TF\")\nelse:\n TF = \"2.5\".replace(\".\", \"-\")\n\nif TF[0] == \"2\":\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf2-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf2-cpu.{}\".format(TF)\nelse:\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf-cpu.{}\".format(TF)\n\nTRAIN_IMAGE = \"{}-docker.pkg.dev/vertex-ai/training/{}:latest\".format(\n REGION.split(\"-\")[0], TRAIN_VERSION\n)\nDEPLOY_IMAGE = \"{}-docker.pkg.dev/vertex-ai/prediction/{}:latest\".format(\n REGION.split(\"-\")[0], DEPLOY_VERSION\n)\n\nprint(\"Training:\", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)",
"_____no_output_____"
]
],
[
[
"#### Set machine type\n\nNext, set the machine type to use for training and prediction.\n\n- Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for for training and prediction.\n - `machine type`\n - `n1-standard`: 3.75GB of memory per vCPU.\n - `n1-highmem`: 6.5GB of memory per vCPU\n - `n1-highcpu`: 0.9 GB of memory per vCPU\n - `vCPUs`: number of \\[2, 4, 8, 16, 32, 64, 96 \\]\n\n*Note: The following is not supported for training:*\n\n - `standard`: 2 vCPUs\n - `highcpu`: 2, 4 and 8 vCPUs\n\n*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.",
"_____no_output_____"
]
],
[
[
"if os.getenv(\"IS_TESTING_TRAIN_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_TRAIN_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nTRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nif os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)",
"_____no_output_____"
]
],
[
[
"#### Location of Cloud Storage training data.\n\nNow set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.",
"_____no_output_____"
]
],
[
[
"IMPORT_FILE = (\n \"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv\"\n)",
"_____no_output_____"
]
],
[
[
"### Examine the training package\n\n#### Package layout\n\nBefore you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.\n\n- PKG-INFO\n- README.md\n- setup.cfg\n- setup.py\n- trainer\n - \\_\\_init\\_\\_.py\n - task.py\n\nThe files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.\n\nThe file `trainer/task.py` is the Python script for executing the custom training job. *Note*, when we referred to it in the worker pool specification, we replace the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`).\n\n#### Package Assembly\n\nIn the following cells, you will assemble the training package.",
"_____no_output_____"
]
],
[
[
"# Make folder for Python training script\n! rm -rf custom\n! mkdir custom\n\n# Add package information\n! touch custom/README.md\n\nsetup_cfg = \"[egg_info]\\n\\ntag_build =\\n\\ntag_date = 0\"\n! echo \"$setup_cfg\" > custom/setup.cfg\n\nsetup_py = \"import setuptools\\n\\nsetuptools.setup(\\n\\n install_requires=[\\n\\n 'tensorflow==2.5.0',\\n\\n 'tensorflow_datasets==1.3.0',\\n\\n ],\\n\\n packages=setuptools.find_packages())\"\n! echo \"$setup_py\" > custom/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\nName: Flowers image classification\\n\\nVersion: 0.0.0\\n\\nSummary: Demostration training script\\n\\nHome-page: www.google.com\\n\\nAuthor: Google\\n\\nAuthor-email: [email protected]\\n\\nLicense: Public\\n\\nDescription: Demo\\n\\nPlatform: Vertex\"\n! echo \"$pkg_info\" > custom/PKG-INFO\n\n# Make the training subfolder\n! mkdir custom/trainer\n! touch custom/trainer/__init__.py",
"_____no_output_____"
]
],
[
[
"### Create the task script for the Python training package\n\nNext, you create the `task.py` script for driving the training package. Some noteable steps include:\n\n- Command-line arguments:\n - `data-format` The format of the data. In this example, the data is exported from an `ImageDataSet` and will be in a JSONL format.\n - `train-data-dir`, `val-data-dir`, `test-data-dir`: The Cloud Storage locations of the train, validation and test data. When using Vertex AI custom training, these locations will be specified in the corresponding environment variables: `AIP_TRAINING_DATA_URI`, `AIP_VALIDATION_DATA_URI`, and `AIP_TEST_DATA_URI`.\n - `model-dir`: The location to save the trained model. When using Vertex AI custom training, the location will be specified in the environment variable: `AIP_MODEL_DIR`,\n - `distributr`: single, mirrored or distributed training strategy.\n- Data preprocessing (`get_data()`):\n - Compiles the one or more JSONL data files for a dataset, and constructs a `tf.data.Dataset()` generator for data preprocessing and model feeding.\n- Model architecture (`get_model()`):\n - Builds the corresponding model architecture.\n- Training (`train_model()`):\n - Trains the model\n- Model artifact saving\n - Saves the model artifacts where the Cloud Storage location is determined based on the type of distribution training strategy.",
"_____no_output_____"
]
],
[
[
"%%writefile custom/trainer/task.py\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport argparse\nimport os\nimport sys\nimport json\nimport logging\nimport tqdm\n\ndef parse_args():\n parser = argparse.ArgumentParser(description=\"TF.Keras Image Classification\")\n\n # data source\n parser.add_argument(\"--data-format\", default=os.getenv('AIP_DATA_FORMAT'), dest=\"data_format\", type=str, help=\"data format\")\n parser.add_argument(\"--train-data-dir\", default=os.getenv('AIP_TRAINING_DATA_URI'), dest=\"train_data_dir\", type=str, help=\"train data directory\")\n parser.add_argument(\"--val-data-dir\", default=os.getenv('AIP_VALIDATION_DATA_URI'), dest=\"val_data_dir\", type=str, help=\"validation data directory\")\n parser.add_argument(\"--test-data-dir\", default=os.getenv('AIP_TEST_DATA_URI'), dest=\"test_data_dir\", type=str, help=\"test data directory\")\n\n # data preprocessing\n parser.add_argument(\"--image-width\", dest=\"image_width\", default=32, type=int, help=\"image width\")\n parser.add_argument(\"--image-height\", dest=\"image_height\", default=32, type=int, help=\"image height\")\n\n # model artifact location\n parser.add_argument(\n \"--model-dir\",\n default=os.getenv(\"AIP_MODEL_DIR\"),\n type=str,\n help=\"model directory\",\n )\n\n # training hyperparameters\n parser.add_argument(\n \"--lr\", dest=\"lr\", default=0.01, type=float, help=\"Learning rate.\"\n )\n parser.add_argument(\"--batch-size\", default=16, type=int, help=\"mini-batch size\")\n parser.add_argument(\n \"--epochs\", default=10, type=int, help=\"number of training epochs\"\n )\n parser.add_argument(\n \"--steps\",\n dest=\"steps\",\n default=200,\n type=int,\n help=\"Number of steps per epoch.\",\n )\n parser.add_argument(\n \"--distribute\",\n dest=\"distribute\",\n type=str,\n default=\"single\",\n help=\"distributed training strategy\",\n )\n\n args = parser.parse_args()\n return args\n\n\nargs = parse_args()\n\nlogging.getLogger().setLevel(logging.DEBUG)\nlogging.info('DEVICES' + str(device_lib.list_local_devices()))\n\n# Single Machine, single compute device\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n logging.info(\"Single device training\")\n# Single Machine, multiple compute device\nelif args.distribute == 'mirrored':\n strategy = tf.distribute.MirroredStrategy()\n logging.info(\"Mirrored Strategy distributed training\")\n# Multi Machine, multiple compute device\nelif args.distribute == 'multiworker':\n strategy = tf.distribute.MultiWorkerMirroredStrategy()\n logging.info(\"Multi-worker Strategy distributed training\")\n logging.info('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\n\nlogging.info('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\nNUM_WORKERS = strategy.num_replicas_in_sync\nGLOBAL_BATCH_SIZE = args.batch_size * NUM_WORKERS\n\n\ndef _is_chief(task_type, task_id):\n ''' Check for primary if multiworker training\n '''\n return (task_type == 'chief') or (task_type == 'worker' and task_id == 0) or task_type is None\n\n\ndef get_data():\n logging.info('DATA_FORMAT ' + args.data_format)\n logging.info('TRAINING_DATA_URI ' + args.train_data_dir)\n logging.info('VALIDATION_DATA_URI ' + args.val_data_dir)\n logging.info('TEST_DATA_URI ' + args.test_data_dir)\n\n class_names = [\"daisy\", \"dandelion\", \"roses\", \"sunflowers\", \"tulips\"]\n class_indices = 
dict(zip(class_names, range(len(class_names))))\n num_classes = len(class_names)\n\n GLOBAL_BATCH_SIZE = args.batch_size * NUM_WORKERS\n\n def parse_image(filename):\n image = tf.io.read_file(filename)\n image = tf.image.decode_jpeg(image, channels=3)\n image = tf.image.resize(image, [args.image_width, args.image_height])\n return image\n\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255.0\n return image, label\n\n def extract(data_dir, batch_size=GLOBAL_BATCH_SIZE, repeat=True):\n data = []\n labels = []\n for data_uri in tqdm.tqdm(tf.io.gfile.glob(pattern=data_dir)):\n with tf.io.gfile.GFile(name=data_uri, mode=\"r\") as gfile:\n for line in gfile.readlines():\n instance = json.loads(line)\n data.append(instance[\"imageGcsUri\"])\n classification_annotation = instance[\"classificationAnnotations\"][0]\n label = classification_annotation[\"displayName\"]\n labels.append(class_indices[label])\n\n data_dataset = tf.data.Dataset.from_tensor_slices(data)\n data_dataset = data_dataset.map(\n parse_image, num_parallel_calls=tf.data.experimental.AUTOTUNE\n )\n\n label_dataset = tf.data.Dataset.from_tensor_slices(labels)\n label_dataset = label_dataset.map(lambda x: tf.one_hot(x, num_classes))\n\n dataset = tf.data.Dataset.zip((data_dataset, label_dataset)).map(scale).cache().shuffle(batch_size * 32)\n if repeat:\n dataset = dataset.repeat()\n dataset = dataset.batch(batch_size)\n\n # Add property to retain the class names\n dataset.class_names = class_names\n\n return dataset\n\n\n logging.info('Prepare training data')\n train_dataset = extract(args.train_data_dir)\n\n logging.info('Prepare validation data')\n val_dataset = extract(args.val_data_dir, batch_size=1, repeat=False)\n\n return num_classes, train_dataset, val_dataset\n\n\ndef get_model(num_classes):\n logging.info(\"Get model architecture\")\n model = tf.keras.Sequential(\n [\n tf.keras.layers.Conv2D(\n 32, 3, activation=\"relu\", input_shape=(args.image_width, args.image_height, 3)\n ),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(32, 3, activation=\"relu\"),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(num_classes, activation=\"softmax\"),\n ]\n )\n model.compile(\n loss=tf.keras.losses.categorical_crossentropy,\n optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),\n metrics=[\"accuracy\"],\n )\n return model\n\ndef train_model(model, train_dataset, val_dataset):\n logging.info(\"Start model training\")\n history = model.fit(\n x=train_dataset, epochs=args.epochs, validation_data=val_dataset, steps_per_epoch=args.steps\n )\n return history\n\nnum_classes, train_dataset, val_dataset = get_data()\nwith strategy.scope():\n model = get_model(num_classes=num_classes)\nhistory = train_model(model, train_dataset, val_dataset)\n\nlogging.info(\"Save the model to: \" + args.model_dir)\nif args.distribute == 'multiworker':\n task_type, task_id = (strategy.cluster_resolver.task_type,\n strategy.cluster_resolver.task_id)\nelse:\n task_type, task_id = None, None\n\n# single, mirrored or primary for multiworker\nif _is_chief(task_type, task_id):\n model.save(args.model_dir)\n# non-primary workers for multi-workers\nelse:\n # each worker saves their model instance to a unique temp location\n worker_dir = args.model_dir + '/workertemp_' + str(task_id)\n tf.io.gfile.makedirs(worker_dir)\n model.save(worker_dir)",
"_____no_output_____"
]
],
[
[
"#### Store training script on your Cloud Storage bucket\n\nNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.",
"_____no_output_____"
]
],
[
[
"! rm -f custom.tar custom.tar.gz\n! tar cvf custom.tar custom\n! gzip custom.tar\n! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_flowers.tar.gz",
"_____no_output_____"
],
[
"!gsutil ls gs://andy-1234-221921aip-20211201001323/pipeline_root/custom_icn_training/aiplatform-custom-training-2021-12-01-00:39:25.109/dataset-899163017009168384-image_classification_multi_label-2021-12-01T00:39:26.044880Z/",
"_____no_output_____"
]
],
[
[
"## Construct custom training pipeline\n\nIn the example below, you construct a pipeline for training a custom model using pre-built Google Cloud Pipeline Components for Vertex AI Training, as follows:\n\n\n1. Pipeline arguments, specify the locations of:\n - `import_file`: The CSV index file for the dataset.\n - `python_package`: The custom training Python package.\n - `python_module`: The entry module in the package to execute.\n\n2. Use the prebuilt component `ImageDatasetCreateOp` to create a Vertex AI Dataset resource, where:\n - The display name for the dataset is passed into the pipeline.\n - The import file for the dataset is passed into the pipeline.\n - The component returns the dataset resource as `outputs[\"dataset\"]`\n3. Use the prebuilt component `CustomPythonPackageTrainingJobRunOp` to train a custom model and upload the custom model as a Vertex AI Model resource, where:\n - The display name for the dataset is passed into the pipeline.\n - The dataset is the output from the `ImageDatasetCreateOp`.\n - The python package, command line argument are passed into the pipeline.\n - The training and serving containers are specified in the pipeline definition.\n - The component returns the model resource as `outputs[\"model\"]`.\n4. Use the prebuilt component `EndpointCreateOp` to create a Vertex AI Endpoint to deploy the trained model to, where:\n - Since the component has no dependencies on other components, by default it would be executed in parallel with the model training.\n - The `after(training_op)` is added to serialize its execution, so its only executed if the training operation completes successfully.\n - The component returns the endpoint resource as `outputs[\"endpoint\"]`.\n5. Use the prebuilt component `ModelDeployOp` to deploy the trained Vertex AI model to, where:\n - The display name for the dataset is passed into the pipeline.\n - The model is the output from the `CustomPythonPackageTrainingJobRunOp`.\n - The endpoint is the output from the `EndpointCreateOp`\n\n*Note:* Since each component is executed as a graph node in its own execution context, you pass the parameter `project` for each component op, in constrast to doing a `aip.init(project=project)` if this was a Python script calling the SDK methods directly within the same execution context.",
"_____no_output_____"
]
],
[
[
"from google_cloud_pipeline_components import aiplatform as gcc_aip\n\nPIPELINE_ROOT = \"{}/pipeline_root/custom_icn_training\".format(BUCKET_NAME)\n\n\[email protected](\n name=\"custom-icn-training\", description=\"Custom image classification training\"\n)\ndef pipeline(\n import_file: str,\n display_name: str,\n python_package: str,\n python_module: str,\n project: str = PROJECT_ID,\n region: str = REGION,\n):\n\n dataset_op = gcc_aip.ImageDatasetCreateOp(\n project=project,\n display_name=display_name,\n gcs_source=import_file,\n import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,\n )\n\n training_op = gcc_aip.CustomPythonPackageTrainingJobRunOp(\n project=project,\n display_name=display_name,\n dataset=dataset_op.outputs[\"dataset\"],\n # Training\n python_package_gcs_uri=python_package,\n python_module_name=python_module,\n container_uri=TRAIN_IMAGE,\n staging_bucket=PIPELINE_ROOT,\n annotation_schema_uri=aip.schema.dataset.annotation.image.classification,\n args=[\"--epochs\", \"50\", \"--image-width\", \"32\", \"--image-height\", \"32\"],\n replica_count=1,\n machine_type=TRAIN_COMPUTE,\n accelerator_type=TRAIN_GPU.name,\n accelerator_count=TRAIN_NGPU,\n # Serving - As part of this operation, the model is registered to Vertex AI\n model_serving_container_image_uri=DEPLOY_IMAGE,\n model_display_name=display_name,\n )\n\n endpoint_op = gcc_aip.EndpointCreateOp(\n project=project,\n location=region,\n display_name=display_name,\n ).after(training_op)\n\n deploy_op = gcc_aip.ModelDeployOp(\n model=training_op.outputs[\"model\"],\n endpoint=endpoint_op.outputs[\"endpoint\"],\n dedicated_resources_min_replica_count=1,\n dedicated_resources_max_replica_count=1,\n dedicated_resources_machine_type=\"n1-standard-4\",\n )",
"_____no_output_____"
]
],
[
[
"### Compile and execute the pipeline\n\nNext, you compile the pipeline and then exeute it. The pipeline takes the following parameters, which are passed as the dictionary `parameter_values`:\n\n- `import_file`: The Cloud Storage path to the dataset index file.\n- `display_name`: The display name for the generated Vertex AI resources.\n- `python_package`: The Python package for the custom training job.\n- `python_module`: The Python module in the package to execute.\n- `project`: The project ID.\n- `region`: The region.",
"_____no_output_____"
]
],
[
[
"compiler.Compiler().compile(\n pipeline_func=pipeline, package_path=\"custom_icn_training.json\"\n)\n\npipeline = aip.PipelineJob(\n display_name=\"custom_icn_training\",\n template_path=\"custom_icn_training.json\",\n pipeline_root=PIPELINE_ROOT,\n parameter_values={\n \"import_file\": IMPORT_FILE,\n \"display_name\": \"flowers\" + TIMESTAMP,\n \"python_package\": f\"{BUCKET_NAME}/trainer_flowers.tar.gz\",\n \"python_module\": \"trainer.task\",\n \"project\": PROJECT_ID,\n \"region\": REGION,\n },\n)\n\npipeline.run()\n\n! rm -f custom_icn_training.json",
"_____no_output_____"
]
],
[
[
"### Delete a pipeline job\n\nAfter a pipeline job is completed, you can delete the pipeline job with the method `delete()`. Prior to completion, a pipeline job can be canceled with the method `cancel()`.",
"_____no_output_____"
]
],
[
[
"pipeline.delete()",
"_____no_output_____"
]
],
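[
[
"If the pipeline job is still running, you can stop it before deleting it. The following minimal sketch (not executed in this tutorial) assumes `pipeline` is the `aip.PipelineJob` created above and that the run has not yet completed:\n\n```python\n# Hedged sketch: stop an in-progress pipeline run, then delete the job resource.\npipeline.cancel()      # request cancellation of the running pipeline job\nprint(pipeline.state)  # optionally inspect the job state after cancelling\npipeline.delete()      # clean up the pipeline job once it has stopped\n```",
"_____no_output_____"
]
],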
[
[
"# Cleaning up\n\nTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud\nproject](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.\n\nOtherwise, you can delete the individual resources you created in this tutorial:\n\n- Dataset\n- Pipeline\n- Model\n- Endpoint\n- AutoML Training Job\n- Batch Job\n- Custom Job\n- Hyperparameter Tuning Job\n- Cloud Storage Bucket",
"_____no_output_____"
]
],
[
[
"delete_all = True\n\nif delete_all:\n # Delete the dataset using the Vertex dataset object\n try:\n if \"dataset\" in globals():\n dataset.delete()\n except Exception as e:\n print(e)\n\n # Delete the model using the Vertex model object\n try:\n if \"model\" in globals():\n model.delete()\n except Exception as e:\n print(e)\n\n # Delete the endpoint using the Vertex endpoint object\n try:\n if \"endpoint\" in globals():\n endpoint.delete()\n except Exception as e:\n print(e)\n\n # Delete the AutoML or Pipeline training job\n try:\n if \"dag\" in globals():\n dag.delete()\n except Exception as e:\n print(e)\n\n # Delete the custom training job\n try:\n if \"job\" in globals():\n job.delete()\n except Exception as e:\n print(e)\n\n # Delete the batch prediction job using the Vertex batch prediction object\n try:\n if \"batch_predict_job\" in globals():\n batch_predict_job.delete()\n except Exception as e:\n print(e)\n\n # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object\n try:\n if \"hpt_job\" in globals():\n hpt_job.delete()\n except Exception as e:\n print(e)\n\n if \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0521367da3374448f62353d5738dce3fee416e6 | 21,561 | ipynb | Jupyter Notebook | notebooks/emergency_frequencies.ipynb | samurai-madrid/reinforced-learning | eba7cd6bcc2b194cd2985cba7c8399cc61623b14 | [
"MIT"
] | 1 | 2020-05-24T09:31:37.000Z | 2020-05-24T09:31:37.000Z | notebooks/emergency_frequencies.ipynb | samurai-madrid/reinforced-learning | eba7cd6bcc2b194cd2985cba7c8399cc61623b14 | [
"MIT"
] | null | null | null | notebooks/emergency_frequencies.ipynb | samurai-madrid/reinforced-learning | eba7cd6bcc2b194cd2985cba7c8399cc61623b14 | [
"MIT"
] | 2 | 2020-09-26T21:10:40.000Z | 2022-03-07T08:01:31.000Z | 30.282303 | 310 | 0.441909 | [
[
[
"# SAMUR Emergency Frequencies",
"_____no_output_____"
],
[
"This notebook explores how the frequency of different types of emergency changes with time in relation to different periods (hours of the day, days of the week, months of the year...) and locations in Madrid. This will be useful for constructing a realistic emergency generator in the city simulation.\n\nLet's start with some imports and setup, and then read the table.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport datetime\nimport matplotlib.pyplot as plt\nimport yaml\n%matplotlib inline",
"_____no_output_____"
],
[
"df = pd.read_csv(\"../data/emergency_data.csv\")\ndf.head()",
"_____no_output_____"
]
],
[
[
"The column for the time of the call is a string, so let's change that into a timestamp.",
"_____no_output_____"
]
],
[
[
"df[\"time_call\"] = pd.to_datetime(df[\"Solicitud\"])",
"_____no_output_____"
]
],
[
[
"We will also need to assign a numerical code to each district of the city in order to properly vectorize the distribution an make it easier to work along with other parts of the project.",
"_____no_output_____"
]
],
[
[
"district_codes = {\n 'Centro': 1, \n 'Arganzuela': 2, \n 'Retiro': 3, \n 'Salamanca': 4, \n 'Chamartín': 5, \n 'Tetuán': 6, \n 'Chamberí': 7, \n 'Fuencarral - El Pardo': 8, \n 'Moncloa - Aravaca': 9, \n 'Latina': 10, \n 'Carabanchel': 11, \n 'Usera': 12, \n 'Puente de Vallecas': 13, \n 'Moratalaz': 14, \n 'Ciudad Lineal': 15, \n 'Hortaleza': 16, \n 'Villaverde': 17, \n 'Villa de Vallecas': 18, \n 'Vicálvaro': 19, \n 'San Blas - Canillejas': 20, \n 'Barajas': 21,\n }\n\ndf[\"district_code\"] = df.Distrito.apply(lambda x: district_codes[x])",
"_____no_output_____"
]
],
[
[
"Each emergency has already been assigned a severity level, depending on the nature of the reported emergency.",
"_____no_output_____"
]
],
[
[
"df[\"severity\"] = df[\"Gravedad\"]",
"_____no_output_____"
]
],
[
[
"We also need the hour, weekday and month of the event in order to assign it in the various distributions.",
"_____no_output_____"
]
],
[
[
"df[\"hour\"] = df[\"time_call\"].apply(lambda x: x.hour) # From 0 to 23\ndf[\"weekday\"] = df[\"time_call\"].apply(lambda x: x.weekday()+1) # From 1 (Mon) to 7 (Sun)\ndf[\"month\"] = df[\"time_call\"].apply(lambda x: x.month)",
"_____no_output_____"
]
],
[
[
"Let's also strip down the dataset to just the columns we need right now.",
"_____no_output_____"
]
],
[
[
"df = df[[\"district_code\", \"severity\", \"time_call\", \"hour\", \"weekday\", \"month\"]]\ndf.head()",
"_____no_output_____"
]
],
[
[
"We are going to group the distributions by severity.",
"_____no_output_____"
]
],
[
[
"emergencies_per_grav = df.severity.value_counts().sort_index().rename(\"total_emergencies\")\nemergencies_per_grav",
"_____no_output_____"
]
],
[
[
"We will also need the global frequency of the emergencies:",
"_____no_output_____"
]
],
[
[
"total_seconds = (df.time_call.max()-df.time_call.min()).total_seconds()\nfrequencies_per_grav = (emergencies_per_grav / total_seconds).rename(\"emergency_frequencies\")\nfrequencies_per_grav",
"_____no_output_____"
]
],
[
[
"Each emergency will need to be assigne a district. Assuming independent distribution of emergencies by district and time, each will be assigned to a district according to a global probability based on this dataset, as follows.",
"_____no_output_____"
]
],
[
[
"prob_per_district = (df.district_code.value_counts().sort_index()/df.district_code.value_counts().sum()).rename(\"distric_weight\")\nprob_per_district",
"_____no_output_____"
]
],
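[
[
"As a quick illustration of how an emergency generator could use these weights, the sketch below (an assumption about the generator design, not part of the analysis above) draws a random district code according to `prob_per_district`:\n\n```python\nimport numpy as np\n\n# Sample one district for a new emergency, weighted by the empirical probabilities\nsampled_district = np.random.choice(prob_per_district.index, p=prob_per_district.values)\nsampled_district\n```",
"_____no_output_____"
]
],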
[
[
"In order to be able to simplify the generation of emergencies, we are going to assume that the distributions of emergencies per hour, per weekday and per month are independent, sharing no correlation. This is obiously not fully true, but it is a good approximation for the chosen time-frames.",
"_____no_output_____"
]
],
[
[
"hourly_dist = (df.hour.value_counts()/df.hour.value_counts().mean()).sort_index().rename(\"hourly_distribution\")\ndaily_dist = (df.weekday.value_counts()/df.weekday.value_counts().mean()).sort_index().rename(\"daily_distribution\")\nmonthly_dist = (df.month.value_counts()/df.month.value_counts().mean()).sort_index().rename(\"monthly_distribution\")",
"_____no_output_____"
]
],
[
[
"We will actually make one of these per severity level.",
"_____no_output_____"
],
[
"This will allow us to modify the base emergency density of a given severity as follows:",
"_____no_output_____"
]
],
[
[
"def emergency_density(gravity, hour, weekday, month):\n base_density = frequencies_per_grav[gravity]\n density = base_density * hourly_dist[hour] * daily_dist[weekday] * monthly_dist[month]\n return density",
"_____no_output_____"
],
[
"emergency_density(3, 12, 4, 5) # Emergency frequency for severity level 3, at 12 hours of a thursday in May",
"_____no_output_____"
]
],
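[
[
"A simulation could treat each severity level as a Poisson process whose rate follows this density. As a minimal sketch (again an assumption about the future generator, not something defined elsewhere in this notebook), the waiting time until the next emergency can be drawn from an exponential distribution with the current density as its rate:\n\n```python\nimport numpy as np\n\n# Density for severity 3 at 12:00 on a Thursday in May (emergencies per second)\nrate = emergency_density(3, 12, 4, 5)\n\nexpected_wait = 1 / rate                               # mean waiting time in seconds\nsampled_wait = np.random.exponential(scale=1 / rate)   # one random inter-arrival time\nexpected_wait, sampled_wait\n```",
"_____no_output_____"
]
],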
[
[
"In order for the model to read these distributions we will need to store them in a dict-like format, in this case YAML, which is easily readable by human or machine.",
"_____no_output_____"
]
],
[
[
"dists = {}\nfor severity in range(1, 6):\n sub_df = df[df[\"severity\"] == severity]\n \n frequency = float(frequencies_per_grav.round(8)[severity])\n \n hourly_dist = (sub_df.hour. value_counts()/sub_df.hour. value_counts().mean()).sort_index().round(5).to_dict()\n daily_dist = (sub_df.weekday.value_counts()/sub_df.weekday.value_counts().mean()).sort_index().round(5).to_dict()\n monthly_dist = (sub_df.month. value_counts()/sub_df.month. value_counts().mean()).sort_index().round(5).to_dict()\n \n district_prob = (sub_df.district_code.value_counts()/sub_df.district_code.value_counts().sum()).sort_index().round(5).to_dict()\n \n dists[severity] = {\"frequency\": frequency,\n \"hourly_dist\": hourly_dist,\n \"daily_dist\": daily_dist,\n \"monthly_dist\": monthly_dist,\n \"district_prob\": district_prob}\n ",
"_____no_output_____"
],
[
"f = open(\"../data/distributions.yaml\", \"w+\")\nyaml.dump(dists, f, allow_unicode=True)",
"_____no_output_____"
]
],
[
[
"We can now check that the dictionary stored in the YAML file is the same one we have created.",
"_____no_output_____"
]
],
[
[
"with open(\"../data/distributions.yaml\") as dist_file:\n yaml_dict = yaml.safe_load(dist_file)",
"_____no_output_____"
],
[
"yaml_dict == dists",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0522505be35acc7280cf434992482f4986c26de | 9,532 | ipynb | Jupyter Notebook | officialTutorial/juliaTutorial.ipynb | terasakisatoshi/juliaExer | e3c2195f39de858915a3dcd47684eccbb7ecb552 | [
"MIT"
] | 2 | 2020-05-02T01:24:20.000Z | 2020-10-04T12:03:25.000Z | officialTutorial/juliaTutorial.ipynb | terasakisatoshi/juliaExer | e3c2195f39de858915a3dcd47684eccbb7ecb552 | [
"MIT"
] | null | null | null | officialTutorial/juliaTutorial.ipynb | terasakisatoshi/juliaExer | e3c2195f39de858915a3dcd47684eccbb7ecb552 | [
"MIT"
] | null | null | null | 20.237792 | 468 | 0.484683 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0522a2fe47e653868b51b7a961b902e7cbd9714 | 16,378 | ipynb | Jupyter Notebook | add_remove_stock.ipynb | tommytse722/dash-flask-login | fe6cbb07c35b3cebb3634af90dfc80236d7f6d7e | [
"MIT"
] | null | null | null | add_remove_stock.ipynb | tommytse722/dash-flask-login | fe6cbb07c35b3cebb3634af90dfc80236d7f6d7e | [
"MIT"
] | null | null | null | add_remove_stock.ipynb | tommytse722/dash-flask-login | fe6cbb07c35b3cebb3634af90dfc80236d7f6d7e | [
"MIT"
] | null | null | null | 139.982906 | 1,645 | 0.701368 | [
[
[
"import stock_mgt as sm",
"_____no_output_____"
],
[
"sm.drop_stock_table()",
"_____no_output_____"
],
[
"sm.create_stock_table()",
"_____no_output_____"
],
[
"sm.download_stock()",
"_____no_output_____"
],
[
"sm.get_index_list(\"hsi\")",
"https://www.hsi.com.hk/static/uploads/contents/en/indexes/report/hsi/con_6Oct20.csv\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0522f7a842919d7154c304cbc7a0dcac1792f2b | 11,634 | ipynb | Jupyter Notebook | NYC_Citibike_Challenge.ipynb | ArnavAnjaria/Bikesharing | a6b5cab852e14ea73019ab703868fec5ca9bc641 | [
"Apache-2.0"
] | null | null | null | NYC_Citibike_Challenge.ipynb | ArnavAnjaria/Bikesharing | a6b5cab852e14ea73019ab703868fec5ca9bc641 | [
"Apache-2.0"
] | null | null | null | NYC_Citibike_Challenge.ipynb | ArnavAnjaria/Bikesharing | a6b5cab852e14ea73019ab703868fec5ca9bc641 | [
"Apache-2.0"
] | null | null | null | 35.577982 | 92 | 0.411982 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"# 1. Create a DataFrame for the 201908-citibike-tripdata data. \ncitibike_data = '201908-citibike-tripdata.csv'\ncitibike_df = pd.read_csv(citibike_data)",
"_____no_output_____"
],
[
"# 2. Check the datatypes of your columns. \ncitibike_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2344224 entries, 0 to 2344223\nData columns (total 15 columns):\n # Column Dtype \n--- ------ ----- \n 0 tripduration int64 \n 1 starttime object \n 2 stoptime object \n 3 start station id float64\n 4 start station name object \n 5 start station latitude float64\n 6 start station longitude float64\n 7 end station id float64\n 8 end station name object \n 9 end station latitude float64\n 10 end station longitude float64\n 11 bikeid int64 \n 12 usertype object \n 13 birth year int64 \n 14 gender int64 \ndtypes: float64(6), int64(4), object(5)\nmemory usage: 268.3+ MB\n"
],
[
"# 3. Convert the 'tripduration' column to datetime datatype.\ncitibike_df['tripduration'] = pd.to_datetime(citibike_df['tripduration'], unit='s')\ncitibike_df.head()",
"_____no_output_____"
],
[
"# 4. Check the datatypes of your columns. \ncitibike_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2344224 entries, 0 to 2344223\nData columns (total 15 columns):\n # Column Dtype \n--- ------ ----- \n 0 tripduration datetime64[ns]\n 1 starttime object \n 2 stoptime object \n 3 start station id float64 \n 4 start station name object \n 5 start station latitude float64 \n 6 start station longitude float64 \n 7 end station id float64 \n 8 end station name object \n 9 end station latitude float64 \n 10 end station longitude float64 \n 11 bikeid int64 \n 12 usertype object \n 13 birth year int64 \n 14 gender int64 \ndtypes: datetime64[ns](1), float64(6), int64(3), object(5)\nmemory usage: 268.3+ MB\n"
],
[
"# 5. Export the Dataframe as a new CSV file without the index.\ncitibike_df.to_csv('citibike_201908_updated.csv', index=False)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d052377950224eebac90f1316306556201c8ed1c | 26,080 | ipynb | Jupyter Notebook | chatbot/34.gru-birnn-seq2seq-luong-bahdanau-stack-beam.ipynb | huseinzol05/Tensorflow-NLP-Models | 0741216aa8235e1228b3de7903cc36d73f8f2b45 | [
"MIT"
] | 1,705 | 2018-11-03T17:34:22.000Z | 2022-03-29T04:30:01.000Z | chatbot/34.gru-birnn-seq2seq-luong-bahdanau-stack-beam.ipynb | eridgd/NLP-Models-Tensorflow | d46e746cd038f25e8ee2df434facbe12e31576a1 | [
"MIT"
] | 26 | 2019-03-16T17:23:00.000Z | 2021-10-08T08:06:09.000Z | chatbot/34.gru-birnn-seq2seq-luong-bahdanau-stack-beam.ipynb | eridgd/NLP-Models-Tensorflow | d46e746cd038f25e8ee2df434facbe12e31576a1 | [
"MIT"
] | 705 | 2018-11-03T17:34:25.000Z | 2022-03-24T02:29:14.000Z | 40.434109 | 141 | 0.528067 | [
[
[
"import numpy as np\nimport tensorflow as tf\nfrom sklearn.utils import shuffle\nimport re\nimport time\nimport collections\nimport os",
"_____no_output_____"
],
[
"def build_dataset(words, n_words, atleast=1):\n count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3]]\n counter = collections.Counter(words).most_common(n_words)\n counter = [i for i in counter if i[1] >= atleast]\n count.extend(counter)\n dictionary = dict()\n for word, _ in count:\n dictionary[word] = len(dictionary)\n data = list()\n unk_count = 0\n for word in words:\n index = dictionary.get(word, 0)\n if index == 0:\n unk_count += 1\n data.append(index)\n count[0][1] = unk_count\n reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))\n return data, count, dictionary, reversed_dictionary",
"_____no_output_____"
],
[
"lines = open('movie_lines.txt', encoding='utf-8', errors='ignore').read().split('\\n')\nconv_lines = open('movie_conversations.txt', encoding='utf-8', errors='ignore').read().split('\\n')\n\nid2line = {}\nfor line in lines:\n _line = line.split(' +++$+++ ')\n if len(_line) == 5:\n id2line[_line[0]] = _line[4]\n \nconvs = [ ]\nfor line in conv_lines[:-1]:\n _line = line.split(' +++$+++ ')[-1][1:-1].replace(\"'\",\"\").replace(\" \",\"\")\n convs.append(_line.split(','))\n \nquestions = []\nanswers = []\n\nfor conv in convs:\n for i in range(len(conv)-1):\n questions.append(id2line[conv[i]])\n answers.append(id2line[conv[i+1]])\n \ndef clean_text(text):\n text = text.lower()\n text = re.sub(r\"i'm\", \"i am\", text)\n text = re.sub(r\"he's\", \"he is\", text)\n text = re.sub(r\"she's\", \"she is\", text)\n text = re.sub(r\"it's\", \"it is\", text)\n text = re.sub(r\"that's\", \"that is\", text)\n text = re.sub(r\"what's\", \"that is\", text)\n text = re.sub(r\"where's\", \"where is\", text)\n text = re.sub(r\"how's\", \"how is\", text)\n text = re.sub(r\"\\'ll\", \" will\", text)\n text = re.sub(r\"\\'ve\", \" have\", text)\n text = re.sub(r\"\\'re\", \" are\", text)\n text = re.sub(r\"\\'d\", \" would\", text)\n text = re.sub(r\"\\'re\", \" are\", text)\n text = re.sub(r\"won't\", \"will not\", text)\n text = re.sub(r\"can't\", \"cannot\", text)\n text = re.sub(r\"n't\", \" not\", text)\n text = re.sub(r\"n'\", \"ng\", text)\n text = re.sub(r\"'bout\", \"about\", text)\n text = re.sub(r\"'til\", \"until\", text)\n text = re.sub(r\"[-()\\\"#/@;:<>{}`+=~|.!?,]\", \"\", text)\n return ' '.join([i.strip() for i in filter(None, text.split())])\n\nclean_questions = []\nfor question in questions:\n clean_questions.append(clean_text(question))\n \nclean_answers = [] \nfor answer in answers:\n clean_answers.append(clean_text(answer))\n \nmin_line_length = 2\nmax_line_length = 5\nshort_questions_temp = []\nshort_answers_temp = []\n\ni = 0\nfor question in clean_questions:\n if len(question.split()) >= min_line_length and len(question.split()) <= max_line_length:\n short_questions_temp.append(question)\n short_answers_temp.append(clean_answers[i])\n i += 1\n\nshort_questions = []\nshort_answers = []\n\ni = 0\nfor answer in short_answers_temp:\n if len(answer.split()) >= min_line_length and len(answer.split()) <= max_line_length:\n short_answers.append(answer)\n short_questions.append(short_questions_temp[i])\n i += 1\n\nquestion_test = short_questions[500:550]\nanswer_test = short_answers[500:550]\nshort_questions = short_questions[:500]\nshort_answers = short_answers[:500]",
"_____no_output_____"
],
[
"concat_from = ' '.join(short_questions+question_test).split()\nvocabulary_size_from = len(list(set(concat_from)))\ndata_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)\nprint('vocab from size: %d'%(vocabulary_size_from))\nprint('Most common words', count_from[4:10])\nprint('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])\nprint('filtered vocab size:',len(dictionary_from))\nprint(\"% of vocab used: {}%\".format(round(len(dictionary_from)/vocabulary_size_from,4)*100))",
"vocab from size: 657\nMost common words [('you', 132), ('is', 78), ('i', 68), ('what', 51), ('it', 50), ('that', 49)]\nSample data [7, 28, 129, 35, 61, 42, 12, 22, 82, 225] ['what', 'good', 'stuff', 'she', 'okay', 'they', 'do', 'to', 'hey', 'sweet']\nfiltered vocab size: 661\n% of vocab used: 100.61%\n"
],
[
"concat_to = ' '.join(short_answers+answer_test).split()\nvocabulary_size_to = len(list(set(concat_to)))\ndata_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)\nprint('vocab from size: %d'%(vocabulary_size_to))\nprint('Most common words', count_to[4:10])\nprint('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])\nprint('filtered vocab size:',len(dictionary_to))\nprint(\"% of vocab used: {}%\".format(round(len(dictionary_to)/vocabulary_size_to,4)*100))",
"vocab from size: 660\nMost common words [('i', 97), ('you', 91), ('is', 62), ('it', 58), ('not', 47), ('what', 39)]\nSample data [12, 216, 5, 4, 94, 25, 59, 10, 8, 79] ['the', 'real', 'you', 'i', 'hope', 'so', 'they', 'do', 'not', 'hi']\nfiltered vocab size: 664\n% of vocab used: 100.61%\n"
],
[
"GO = dictionary_from['GO']\nPAD = dictionary_from['PAD']\nEOS = dictionary_from['EOS']\nUNK = dictionary_from['UNK']",
"_____no_output_____"
],
[
"for i in range(len(short_answers)):\n short_answers[i] += ' EOS'",
"_____no_output_____"
],
[
"class Chatbot:\n def __init__(self, size_layer, num_layers, embedded_size, \n from_dict_size, to_dict_size, batch_size,\n grad_clip=5.0, beam_width=5, force_teaching_ratio=0.5):\n \n def cells(size, reuse=False):\n return tf.nn.rnn_cell.GRUCell(size, reuse=reuse)\n \n self.X = tf.placeholder(tf.int32, [None, None])\n self.Y = tf.placeholder(tf.int32, [None, None])\n self.X_seq_len = tf.count_nonzero(self.X, 1, dtype=tf.int32)\n self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype=tf.int32)\n batch_size = tf.shape(self.X)[0]\n \n encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))\n decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))\n self.encoder_out = tf.nn.embedding_lookup(encoder_embeddings, self.X)\n \n def bahdanau(size):\n attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(num_units = size, \n memory = self.encoder_out)\n return tf.contrib.seq2seq.AttentionWrapper(cell = cells(size), \n attention_mechanism = attention_mechanism,\n attention_layer_size = size)\n \n def luong(size):\n attention_mechanism = tf.contrib.seq2seq.LuongAttention(num_units = size, \n memory = self.encoder_out)\n return tf.contrib.seq2seq.AttentionWrapper(cell = cells(size), \n attention_mechanism = attention_mechanism,\n attention_layer_size = size)\n \n \n for n in range(num_layers):\n (out_fw, out_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(\n cell_fw = bahdanau(size_layer//2),\n cell_bw = luong(size_layer//2),\n inputs = self.encoder_out,\n sequence_length = self.X_seq_len,\n dtype = tf.float32,\n scope = 'bidirectional_rnn_%d'%(n))\n encoder_embedded = tf.concat((out_fw, out_bw), 2)\n \n bi_state = tf.concat((state_fw[0],state_bw[0]), -1)\n encoder_state = tuple([bi_state] * num_layers)\n dense = tf.layers.Dense(to_dict_size)\n \n with tf.variable_scope('decode'):\n attention_mechanism = tf.contrib.seq2seq.LuongAttention(\n num_units = size_layer, \n memory = self.encoder_out,\n memory_sequence_length = self.X_seq_len)\n luong_cells = tf.contrib.seq2seq.AttentionWrapper(\n cell = tf.nn.rnn_cell.MultiRNNCell([cells(size_layer) for _ in range(num_layers)]),\n attention_mechanism = attention_mechanism,\n attention_layer_size = size_layer)\n attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(\n num_units = size_layer, \n memory = self.encoder_out,\n memory_sequence_length = self.X_seq_len)\n bahdanau_cells = tf.contrib.seq2seq.AttentionWrapper(\n cell = tf.nn.rnn_cell.MultiRNNCell([cells(size_layer) for _ in range(num_layers)]),\n attention_mechanism = attention_mechanism,\n attention_layer_size = size_layer)\n decoder_cells = tf.nn.rnn_cell.MultiRNNCell([luong_cells, bahdanau_cells])\n main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])\n decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)\n training_helper = tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper(\n inputs = tf.nn.embedding_lookup(decoder_embeddings, decoder_input),\n sequence_length = self.Y_seq_len,\n embedding = decoder_embeddings,\n sampling_probability = 1 - force_teaching_ratio,\n time_major = False)\n training_decoder = tf.contrib.seq2seq.BasicDecoder(\n cell = decoder_cells,\n helper = training_helper,\n initial_state = decoder_cells.zero_state(batch_size, tf.float32),\n output_layer = tf.layers.Dense(to_dict_size))\n training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(\n decoder = training_decoder,\n impute_finished = True,\n maximum_iterations = tf.reduce_max(self.Y_seq_len))\n 
self.training_logits = training_decoder_output.rnn_output\n \n with tf.variable_scope('decode', reuse=True):\n encoder_out_tiled = tf.contrib.seq2seq.tile_batch(self.encoder_out, beam_width)\n encoder_state_tiled = tf.contrib.seq2seq.tile_batch(encoder_state, beam_width)\n X_seq_len_tiled = tf.contrib.seq2seq.tile_batch(self.X_seq_len, beam_width)\n attention_mechanism = tf.contrib.seq2seq.LuongAttention(\n num_units = size_layer, \n memory = encoder_out_tiled,\n memory_sequence_length = X_seq_len_tiled)\n luong_cells = tf.contrib.seq2seq.AttentionWrapper(\n cell = tf.nn.rnn_cell.MultiRNNCell([cells(size_layer,reuse=True) for _ in range(num_layers)]),\n attention_mechanism = attention_mechanism,\n attention_layer_size = size_layer)\n attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(\n num_units = size_layer, \n memory = encoder_out_tiled,\n memory_sequence_length = X_seq_len_tiled)\n bahdanau_cells = tf.contrib.seq2seq.AttentionWrapper(\n cell = tf.nn.rnn_cell.MultiRNNCell([cells(size_layer,reuse=True) for _ in range(num_layers)]),\n attention_mechanism = attention_mechanism,\n attention_layer_size = size_layer)\n decoder_cells = tf.nn.rnn_cell.MultiRNNCell([luong_cells, bahdanau_cells])\n predicting_decoder = tf.contrib.seq2seq.BeamSearchDecoder(\n cell = decoder_cells,\n embedding = decoder_embeddings,\n start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),\n end_token = EOS,\n initial_state = decoder_cells.zero_state(batch_size * beam_width, tf.float32),\n beam_width = beam_width,\n output_layer = tf.layers.Dense(to_dict_size, _reuse=True),\n length_penalty_weight = 0.0)\n predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(\n decoder = predicting_decoder,\n impute_finished = False,\n maximum_iterations = 2 * tf.reduce_max(self.X_seq_len))\n self.predicting_ids = predicting_decoder_output.predicted_ids[:, :, 0]\n \n masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)\n self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,\n targets = self.Y,\n weights = masks)\n self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)\n y_t = tf.argmax(self.training_logits,axis=2)\n y_t = tf.cast(y_t, tf.int32)\n self.prediction = tf.boolean_mask(y_t, masks)\n mask_label = tf.boolean_mask(self.Y, masks)\n correct_pred = tf.equal(self.prediction, mask_label)\n correct_index = tf.cast(correct_pred, tf.float32)\n self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))",
"_____no_output_____"
],
[
"size_layer = 256\nnum_layers = 2\nembedded_size = 128\nlearning_rate = 0.001\nbatch_size = 16\nepoch = 20",
"_____no_output_____"
],
[
"tf.reset_default_graph()\nsess = tf.InteractiveSession()\nmodel = Chatbot(size_layer, num_layers, embedded_size, len(dictionary_from), \n len(dictionary_to), batch_size,learning_rate)\nsess.run(tf.global_variables_initializer())",
"_____no_output_____"
],
[
"def str_idx(corpus, dic):\n X = []\n for i in corpus:\n ints = []\n for k in i.split():\n ints.append(dic.get(k,UNK))\n X.append(ints)\n return X",
"_____no_output_____"
],
[
"X = str_idx(short_questions, dictionary_from)\nY = str_idx(short_answers, dictionary_to)\nX_test = str_idx(question_test, dictionary_from)\nY_test = str_idx(answer_test, dictionary_from)",
"_____no_output_____"
],
[
"def pad_sentence_batch(sentence_batch, pad_int):\n padded_seqs = []\n seq_lens = []\n max_sentence_len = max([len(sentence) for sentence in sentence_batch])\n for sentence in sentence_batch:\n padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))\n seq_lens.append(len(sentence))\n return padded_seqs, seq_lens",
"_____no_output_____"
],
[
"for i in range(epoch):\n total_loss, total_accuracy = 0, 0\n for k in range(0, len(short_questions), batch_size):\n index = min(k+batch_size, len(short_questions))\n batch_x, seq_x = pad_sentence_batch(X[k: index], PAD)\n batch_y, seq_y = pad_sentence_batch(Y[k: index], PAD)\n predicted, accuracy,loss, _ = sess.run([model.predicting_ids, \n model.accuracy, model.cost, model.optimizer], \n feed_dict={model.X:batch_x,\n model.Y:batch_y})\n total_loss += loss\n total_accuracy += accuracy\n total_loss /= (len(short_questions) / batch_size)\n total_accuracy /= (len(short_questions) / batch_size)\n print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy))",
"epoch: 1, avg loss: 5.532531, avg accuracy: 0.212835\nepoch: 2, avg loss: 4.742440, avg accuracy: 0.249774\nepoch: 3, avg loss: 4.442560, avg accuracy: 0.268008\nepoch: 4, avg loss: 4.169395, avg accuracy: 0.275505\nepoch: 5, avg loss: 3.918481, avg accuracy: 0.281193\nepoch: 6, avg loss: 3.679547, avg accuracy: 0.288808\nepoch: 7, avg loss: 3.392268, avg accuracy: 0.305328\nepoch: 8, avg loss: 3.103028, avg accuracy: 0.335677\nepoch: 9, avg loss: 2.844912, avg accuracy: 0.365380\nepoch: 10, avg loss: 2.618098, avg accuracy: 0.397785\nepoch: 11, avg loss: 2.441453, avg accuracy: 0.427563\nepoch: 12, avg loss: 2.211901, avg accuracy: 0.450747\nepoch: 13, avg loss: 1.998827, avg accuracy: 0.492086\nepoch: 14, avg loss: 1.851775, avg accuracy: 0.513139\nepoch: 15, avg loss: 1.724460, avg accuracy: 0.543630\nepoch: 16, avg loss: 1.627682, avg accuracy: 0.567876\nepoch: 17, avg loss: 1.424056, avg accuracy: 0.631384\nepoch: 18, avg loss: 1.234232, avg accuracy: 0.674330\nepoch: 19, avg loss: 1.074105, avg accuracy: 0.706967\nepoch: 20, avg loss: 0.906385, avg accuracy: 0.756537\n"
],
[
"for i in range(len(batch_x)):\n print('row %d'%(i+1))\n print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))\n print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))\n print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\\n')",
"row 1\nQUESTION: i am a werewolf\nREAL ANSWER: a werewolf\nPREDICTED ANSWER: a werewolf \n\nrow 2\nQUESTION: i was dreaming again\nREAL ANSWER: i would think so\nPREDICTED ANSWER: i would think so \n\nrow 3\nQUESTION: the kitchen\nREAL ANSWER: very nice\nPREDICTED ANSWER: very nice \n\nrow 4\nQUESTION: the bedroom\nREAL ANSWER: there is only one bed\nPREDICTED ANSWER: there is only one bed \n\n"
],
[
"batch_x, seq_x = pad_sentence_batch(X_test[:batch_size], PAD)\nbatch_y, seq_y = pad_sentence_batch(Y_test[:batch_size], PAD)\npredicted = sess.run(model.predicting_ids, feed_dict={model.X:batch_x})\n\nfor i in range(len(batch_x)):\n print('row %d'%(i+1))\n print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))\n print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))\n print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\\n')",
"row 1\nQUESTION: but david\nREAL ANSWER: is here that\nPREDICTED ANSWER: we not not \n\nrow 2\nQUESTION: hopeless it is hopeless\nREAL ANSWER: tell ballet then back\nPREDICTED ANSWER: i is \n\nrow 3\nQUESTION: miss price\nREAL ANSWER: yes learning\nPREDICTED ANSWER: yes doctor \n\nrow 4\nQUESTION: mr kessler wake up please\nREAL ANSWER: is here are\nPREDICTED ANSWER: where you \n\nrow 5\nQUESTION: there were witnesses\nREAL ANSWER: why she out\nPREDICTED ANSWER: well you you you \n\nrow 6\nQUESTION: what about it\nREAL ANSWER: not you are\nPREDICTED ANSWER: what deal's her \n\nrow 7\nQUESTION: go on ask them\nREAL ANSWER: i just home\nPREDICTED ANSWER: you you you up \n\nrow 8\nQUESTION: beware the moon\nREAL ANSWER: seen hi is he\nPREDICTED ANSWER: what is \n\nrow 9\nQUESTION: did you hear that\nREAL ANSWER: is down what\nPREDICTED ANSWER: the sound again \n\nrow 10\nQUESTION: i heard that\nREAL ANSWER: it here not\nPREDICTED ANSWER: and am they it \n\nrow 11\nQUESTION: the hound of the baskervilles\nREAL ANSWER: heard\nPREDICTED ANSWER: you here \n\nrow 12\nQUESTION: it is moving\nREAL ANSWER: not you hear\nPREDICTED ANSWER: i is \n\nrow 13\nQUESTION: nice doggie good boy\nREAL ANSWER: bill stupid\nPREDICTED ANSWER: thank do \n\nrow 14\nQUESTION: it sounds far away\nREAL ANSWER: that pecos baby seen hi\nPREDICTED ANSWER: i i \n\nrow 15\nQUESTION: debbie klein cried a lot\nREAL ANSWER: is will srai not\nPREDICTED ANSWER: radiation at knots things \n\nrow 16\nQUESTION: what are you doing here\nREAL ANSWER: is know look i\nPREDICTED ANSWER: that small plane \n\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0527baf9a36a2ea4f3b91fba00206557a8b3d32 | 10,162 | ipynb | Jupyter Notebook | self-serve-storage/python/s3Fs Examples.ipynb | DennisH3/jupyter-notebooks | dd13b480978373c29914b650a0d03ac98d8f5dde | [
"MIT"
] | 6 | 2020-06-07T18:10:04.000Z | 2021-05-27T15:39:33.000Z | self-serve-storage/python/s3Fs Examples.ipynb | DennisH3/jupyter-notebooks | dd13b480978373c29914b650a0d03ac98d8f5dde | [
"MIT"
] | 34 | 2020-04-15T16:48:45.000Z | 2021-08-12T19:42:00.000Z | self-serve-storage/python/s3Fs Examples.ipynb | DennisH3/jupyter-notebooks | dd13b480978373c29914b650a0d03ac98d8f5dde | [
"MIT"
] | 10 | 2020-04-10T15:06:47.000Z | 2021-08-12T19:27:58.000Z | 28.951567 | 265 | 0.561208 | [
[
[
"# S3Fs Notebook Example\n\nS3Fs is a Pythonic file interface to S3. It builds on top of botocore.\n\nThe top-level class S3FileSystem holds connection information and allows typical file-system style operations like cp, mv, ls, du, glob, etc., as well as put/get of local files to/from S3.\n\nThe connection can be anonymous - in which case only publicly-available, read-only buckets are accessible - or via credentials explicitly supplied or in configuration files.\n\nAPI Version 2021.06.0\nhttps://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdfhttps://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdf",
"_____no_output_____"
],
[
"Note: If you get errors like `ModuleNotFoundError: No module named 's3fs'`, try `pip install s3fs` in a terminal and then restart your notebook:\n",
"_____no_output_____"
]
],
[
[
"import json\nimport os\nimport s3fs",
"_____no_output_____"
]
],
[
[
"Load the credentials file .json to make a connection to `S3FileSystem`",
"_____no_output_____"
]
],
[
[
"tenant=\"standard\"\nwith open(f'/vault/secrets/minio-{tenant}-tenant-1.json') as f:\n creds = json.load(f)\n",
"_____no_output_____"
]
],
[
[
"The connection can be anonymous- in which case only publicly-available, read-only buckets are accessible - or via credentials explicitly supplied or in configuration files. \n\nCalling open() on a S3FileSystem (typically using a context manager) provides an S3File for read or write access to a particular key. The object emulates the standard File protocol (read, write, tell, seek), such that functions expecting a file can access S3.",
"_____no_output_____"
]
],
[
[
"HOST = creds['MINIO_URL']\nSECURE = HOST.startswith('https')\nfs = s3fs.S3FileSystem(\n anon=False,\n use_ssl=SECURE,\n client_kwargs=\n {\n \"region_name\": \"us-east-1\",\n \"endpoint_url\": creds['MINIO_URL'],\n \"aws_access_key_id\": creds['AWS_ACCESS_KEY_ID'],\n \"aws_secret_access_key\": creds['AWS_SECRET_ACCESS_KEY']\n }\n)",
"_____no_output_____"
]
],
[
[
"## Upload a file\n\nNow that your personal bucket exists you can upload your files! We can use\n`example.txt` from the same folder as this notebook.\n\n**Note:** Bucket storage doesn't actually have real directories, so you won't\nfind any functions for creating them. But some software will show you a\ndirectory structure by looking at the slashes (`/`) in the file names. We'll use\nthis to put `example.txt` under an `/s3fs-examples` faux directory.",
"_____no_output_____"
]
],
[
[
"# Desired location in the bucket\n#NB_NAMESPACE: namespace of user e.g. rohan-katkar\nLOCAL_FILE='example.txt'\nREMOTE_FILE= os.environ['NB_NAMESPACE']+'/s3fs-examples/Happy-DAaaS-Bird.txt'\n\nfs.put(LOCAL_FILE,REMOTE_FILE)",
"_____no_output_____"
]
],
[
[
"## Check path exists in bucket",
"_____no_output_____"
]
],
[
[
"fs.exists(os.environ['NB_NAMESPACE']+'/s3fs-examples')",
"_____no_output_____"
]
],
[
[
"## List objects in bucket",
"_____no_output_____"
]
],
[
[
"fs.ls(os.environ['NB_NAMESPACE'])",
"_____no_output_____"
]
],
[
[
"## List objects in path\n",
"_____no_output_____"
]
],
[
[
"x = []\nx= fs.ls(os.environ['NB_NAMESPACE'] +'/s3fs-examples')\nfor obj in x:\n print(f'Name: {obj}')",
"Name: rohan-katkar/s3fs-examples/Happy-DAaaS-Bird.txt\n"
]
],
[
[
"## Download a file\nThere is another method `download(rpath, lpath[, recursive])`. S3Fs has issues with this method. Get is an equivalent method.",
"_____no_output_____"
]
],
[
[
"from shutil import copyfileobj\nDL_FILE='downloaded_s3fsexample.txt'\nfs.get(os.environ['NB_NAMESPACE']+'/s3fs-examples/Happy-DAaaS-Bird.txt', DL_FILE)\nwith open(DL_FILE, 'r') as file:\n print(file.read())",
" ________________\n / \\\n | Go DAaaS!!!! |\n | _______________/\n |/\n ^____, \n /` `\\ \n / ^ > \n / / , /\n «^` // /=/ %\n ««.~ «_/ %\n ««\\,___%\n ``\\ \\\n ^ ^\n\n"
]
],
[
[
"# That's it!\n\nYou've seen how to upload, list, and download files. You can do more things! For\nmore advanced usage, check out the full API documentation for the\n[S3Fs Python SDK](https://s3fs.readthedocs.io/en/latest/api.html).\n\nAnd don't forget that you can also do this all on the commandline with `mc`.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0528b9254f898b796e10e507982e80721dcfdf5 | 168,375 | ipynb | Jupyter Notebook | special_orthogonalization/svd_vs_gs_simulations.ipynb | wy-go/google-research | a0a609c6f3ca969a686927672f3c533f7344ba36 | [
"Apache-2.0"
] | 23,901 | 2018-10-04T19:48:53.000Z | 2022-03-31T21:27:42.000Z | special_orthogonalization/svd_vs_gs_simulations.ipynb | wy-go/google-research | a0a609c6f3ca969a686927672f3c533f7344ba36 | [
"Apache-2.0"
] | 891 | 2018-11-10T06:16:13.000Z | 2022-03-31T10:42:34.000Z | special_orthogonalization/svd_vs_gs_simulations.ipynb | wy-go/google-research | a0a609c6f3ca969a686927672f3c533f7344ba36 | [
"Apache-2.0"
] | 6,047 | 2018-10-12T06:31:02.000Z | 2022-03-31T13:59:28.000Z | 266.416139 | 31,158 | 0.905604 | [
[
[
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License",
"_____no_output_____"
]
],
[
[
"# Imports and Functions",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom scipy.stats import special_ortho_group\nfrom scipy.spatial.transform import Rotation\nfrom scipy.linalg import svd\nimport matplotlib.pyplot as plt\n\nplt.style.use('seaborn-whitegrid')\nFIGURE_SCALE = 1.0\nFONT_SIZE = 20\nplt.rcParams.update({\n 'figure.figsize': np.array((8, 6)) * FIGURE_SCALE,\n 'axes.labelsize': FONT_SIZE,\n 'axes.titlesize': FONT_SIZE,\n 'xtick.labelsize': FONT_SIZE,\n 'ytick.labelsize': FONT_SIZE,\n 'legend.fontsize': FONT_SIZE,\n 'lines.linewidth': 3,\n 'lines.markersize': 10,\n})",
"_____no_output_____"
],
[
"def SO3_via_svd(A):\n \"\"\"Map 3x3 matrix onto SO(3) via SVD.\"\"\"\n u, s, vt = np.linalg.svd(A)\n s_SO3 = [1, 1, np.sign(np.linalg.det(np.matmul(u, vt)))]\n return np.matmul(np.matmul(u, np.diag(s_SO3)), vt)",
"_____no_output_____"
],
[
"def SO3_via_gramschmidt(A):\n \"\"\"Map 3x3 matrix on SO(3) via GS, ignores last column.\"\"\"\n x_normalized = A[:, 0] / np.linalg.norm(A[:, 0])\n z = np.cross(x_normalized, A[:, 1])\n z_normalized = z / np.linalg.norm(z)\n y_normalized = np.cross(z_normalized, x_normalized)\n return np.stack([x_normalized, y_normalized, z_normalized], axis=1)",
"_____no_output_____"
],
[
"def rotate_from_z(v):\n \"\"\"Construct a rotation matrix R such that R * [0,0,||v||]^T = v.\n\n Input v is shape (3,), output shape is 3x3 \"\"\"\n vn = v / np.linalg.norm(v)\n theta = np.arccos(vn[2])\n phi = np.arctan2(vn[1], vn[0])\n r = Rotation.from_euler('zyz', [0, theta, phi])\n R = np.squeeze(r.as_dcm()) # Maps Z to vn\n return R\n\ndef perturb_rotation_matrix(R, kappa):\n \"\"\"Perturb a random rotation matrix with noise.\n\n Noise is random small rotation applied to each of the three\n column vectors of R. Angle of rotation is sampled from the\n von-Mises distribution on the circle (with uniform random azimuth).\n\n The von-Mises distribution is analagous to Gaussian distribution on the circle.\n Note, the concentration parameter kappa is inversely related to variance,\n so higher kappa means less variance, less noise applied. Good ranges for\n kappa are 64 (high noise) up to 512 (low noise).\n \"\"\"\n R_perturb = []\n theta = np.random.vonmises(mu=0.0, kappa=kappa, size=(3,))\n phi = np.random.uniform(low=0.0, high=np.pi*2.0, size=(3,))\n for i in range(3):\n v = R[:, i]\n R_z_to_v = rotate_from_z(v)\n r_noise_z = np.squeeze(Rotation.from_euler('zyz', [0, theta[i], phi[i]]).as_dcm())\n\n v_perturb = np.matmul(R_z_to_v, np.matmul(r_noise_z, np.array([0,0,1])))\n R_perturb.append(v_perturb)\n\n R_perturb = np.stack(R_perturb, axis=-1)\n return R_perturb\n\n\ndef sigma_to_kappa(sigma):\n return ((0.5 - sigma) * 1024) + 64\n",
"_____no_output_____"
],
[
"# We create a ground truth special orthogonal matrix and perturb it with\n# additive noise. We then see which orthogonalization process (SVD or GS) is\n# better at recovering the ground truth matrix.\n\n\ndef run_expt(sigmas, num_trials, noise_type='gaussian'):\n # Always use identity as ground truth, or pick random matrix.\n # Nothing should change if we pick random (can verify by setting to True) since\n # SVD and Gram-Schmidt are both Equivariant to rotations.\n pick_random_ground_truth=False\n\n all_errs_svd = []\n all_errs_gs = []\n all_geo_errs_svd = []\n all_geo_errs_gs = []\n all_noise_norms = []\n all_noise_sq_norms = []\n\n for sig in sigmas:\n svd_errors = np.zeros(num_trials)\n gs_errors = np.zeros(num_trials)\n svd_geo_errors = np.zeros(num_trials)\n gs_geo_errors = np.zeros(num_trials)\n noise_norms = np.zeros(num_trials)\n noise_sq_norms = np.zeros(num_trials)\n\n for t in range(num_trials):\n if pick_random_ground_truth:\n A = special_ortho_group.rvs(3) # Pick a random ground truth matrix\n else:\n A = np.eye(3) # Our ground truth matrix in SO(3)\n\n N = None\n if noise_type == 'gaussian':\n N = np.random.standard_normal(size=(3,3)) * sig\n if noise_type == 'uniform':\n N = np.random.uniform(-1, 1, (3, 3)) * sig\n if noise_type == 'rademacher':\n N = np.sign(np.random.uniform(-1, 1, (3, 3))) * sig\n if noise_type == 'rotation':\n A_perturb = perturb_rotation_matrix(A, kappa=sigma_to_kappa(sig))\n N = A_perturb - A\n if N is None:\n print ('Error: unknown noise_type: %s', noise_type)\n return\n\n AplusN = A + N # Ground-truth plus noise\n noise_norm = np.linalg.norm(N)\n noise_norm_sq = noise_norm**2\n\n # Compute SVD result and error.\n res_svd = SO3_via_svd(AplusN)\n error_svd = np.linalg.norm(res_svd - A, ord='fro')**2\n error_geodesic_svd = np.arccos(\n (np.trace(np.matmul(np.transpose(res_svd), A))-1.0)/2.0);\n\n # Compute GS result and error.\n res_gs = SO3_via_gramschmidt(AplusN)\n error_gs = np.linalg.norm(res_gs - A, ord='fro')**2\n error_geodesic_gs = np.arccos(\n (np.trace(np.matmul(np.transpose(res_gs), A))-1.0)/2.0);\n\n svd_errors[t] = error_svd\n gs_errors[t] = error_gs\n svd_geo_errors[t] = error_geodesic_svd\n gs_geo_errors[t] = error_geodesic_gs\n noise_norms[t] = noise_norm\n noise_sq_norms[t] = noise_norm_sq\n\n all_errs_svd.append(svd_errors)\n all_errs_gs.append(gs_errors)\n all_geo_errs_svd.append(svd_geo_errors)\n all_geo_errs_gs.append(gs_geo_errors)\n all_noise_norms.append(noise_norms)\n all_noise_sq_norms.append(noise_sq_norms)\n print('finished sigma = %f / kappa = %f' % (sig, sigma_to_kappa(sig)))\n\n return [np.array(x) for x in (\n all_errs_svd, all_errs_gs,\n all_geo_errs_svd, all_geo_errs_gs,\n all_noise_norms, all_noise_sq_norms)]",
"_____no_output_____"
],
[
"boxprops = dict(linewidth=2)\nmedianprops = dict(linewidth=2)\nwhiskerprops = dict(linewidth=2)\ncapprops = dict(linewidth=2)\n\ndef make_diff_plot(svd_errs, gs_errs, xvalues, title='', ytitle='', xtitle=''):\n plt.figure(figsize=(8,6))\n plt.title(title, fontsize=16)\n diff = gs_errs - svd_errs\n step_size = np.abs(xvalues[1] - xvalues[0])\n plt.boxplot(diff.T, positions=xvalues, widths=step_size/2, whis=[5, 95],\n boxprops=boxprops, medianprops=medianprops, whiskerprops=whiskerprops, capprops=capprops,\n showmeans=False, meanline=True, showfliers=False)\n plt.plot(xvalues, np.max(diff, axis=1), 'kx', markeredgewidth=2)\n plt.plot(xvalues, np.min(diff, axis=1), 'kx', markeredgewidth=2)\n xlim = [np.min(xvalues) - (step_size / 3), np.max(xvalues) + (step_size / 3)]\n plt.xlim(xlim)\n plt.plot(xlim, [0, 0], 'k--', linewidth=1)\n plt.xlabel(xtitle, fontsize=16)\n plt.ylabel(ytitle, fontsize=16)\n plt.tight_layout()",
"_____no_output_____"
]
],
[
[
"# Global Params",
"_____no_output_____"
]
],
[
[
"num_trials = 100000 # Num trials at each sigma\nsigmas = np.linspace(0.125, 0.5, 4)",
"_____no_output_____"
]
],
[
[
"# Gaussian Noise\nHere we generate a noise matrix with iid Gaussian entries drawn from\n$\\sigma N(0,1)$.\n\nThe \"Frobenius Error Diff\" shows the distributions of the error differences\n$\\|A - \\textrm{GS}(\\tilde A)\\|_F^2 - \\|A - \\textrm{SVD}(\\tilde A)\\|_F^2$ for\ndifferent values of $\\sigma$. The \"Geodesic Error Diff\" plot shows the\nanalagous data, but in terms of the geodesic error.",
"_____no_output_____"
]
],
[
[
"(all_errs_svd, all_errs_gs,\n all_geo_errs_svd, all_geo_errs_gs,\n all_noise_norms, all_noise_sq_norms\n ) = run_expt(sigmas, num_trials, noise_type='gaussian')",
"finished sigma = 0.125000 / kappa = 448.000000\nfinished sigma = 0.250000 / kappa = 320.000000\nfinished sigma = 0.375000 / kappa = 192.000000\nfinished sigma = 0.500000 / kappa = 64.000000\n"
],
[
"plt.plot(sigmas,\n 3*sigmas**2,\n '--b',\n label='3 $\\\\sigma^2$')\nplt.errorbar(sigmas,\n all_errs_svd.mean(axis=1),\n color='b',\n label='E[$\\\\|\\\\|\\\\mathrm{SVD}^+(M) - R\\\\|\\\\|_F^2]$')\n\nplt.plot(sigmas, 6*sigmas**2,\n '--r',\n label='6 $\\\\sigma^2$')\nplt.errorbar(sigmas,\n all_errs_gs.mean(axis=1),\n color='r',\n label='E[$\\\\|\\\\|\\\\mathrm{GS}^+(M) - R\\\\|\\\\|_F^2$]')\n\nplt.xlabel('$\\\\sigma$')\nplt.legend(loc='upper left')",
"_____no_output_____"
],
[
"make_diff_plot(all_errs_svd, all_errs_gs, sigmas, title='Gaussian Noise', ytitle='Frobenius Error Diff', xtitle='$\\\\sigma$')\nmake_diff_plot(all_geo_errs_svd, all_geo_errs_gs, sigmas, title='Gaussian Noise', ytitle='Geodesic Error Diff', xtitle='$\\\\sigma$')",
"_____no_output_____"
]
],
[
[
"# Uniform Noise\nHere, the noise matrix is constructed with iid entries drawn from $\\sigma \\textrm{Unif}(-1, 1)$.",
"_____no_output_____"
]
],
[
[
"(all_errs_svd, all_errs_gs,\n all_geo_errs_svd, all_geo_errs_gs,\n all_noise_norms, all_noise_sq_norms\n ) = run_expt(sigmas, num_trials, noise_type='uniform')",
"finished sigma = 0.125000 / kappa = 448.000000\nfinished sigma = 0.250000 / kappa = 320.000000\nfinished sigma = 0.375000 / kappa = 192.000000\nfinished sigma = 0.500000 / kappa = 64.000000\n"
],
[
"make_diff_plot(all_errs_svd, all_errs_gs, sigmas, title='Uniform Noise', ytitle='Frobenius Error Diff', xtitle='$\\\\phi$')\nmake_diff_plot(all_geo_errs_svd, all_geo_errs_gs, sigmas, title='Uniform Noise', ytitle='Geodesic Error Diff', xtitle='$\\\\phi$')",
"_____no_output_____"
]
],
[
[
"#Rotation Noise",
"_____no_output_____"
]
],
[
[
"(all_errs_svd, all_errs_gs,\n all_geo_errs_svd, all_geo_errs_gs,\n all_noise_norms, all_noise_sq_norms\n ) = run_expt(sigmas, num_trials, noise_type='rotation')",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:9: DeprecationWarning: `as_dcm` is deprecated!\nas_dcm is renamed to as_matrix in scipy 1.4.0 and will be removed in scipy 1.6.0\n if __name__ == '__main__':\n/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:30: DeprecationWarning: `as_dcm` is deprecated!\nas_dcm is renamed to as_matrix in scipy 1.4.0 and will be removed in scipy 1.6.0\n"
],
[
"make_diff_plot(all_errs_svd, all_errs_gs, sigma_to_kappa(sigmas), title='Rotation Noise', ytitle='Frobenius Error Diff', xtitle='$\\\\kappa$')\nmake_diff_plot(all_geo_errs_svd, all_geo_errs_gs, sigma_to_kappa(sigmas), title='Rotation Noise', ytitle='Geodesic Error Diff', xtitle='$\\\\kappa$')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0528f8f9e88f91d5a159822b049b58733e15239 | 262,279 | ipynb | Jupyter Notebook | kings/data/Untitled.ipynb | jkamiya5/flask | 5928ccf7ca4ffbae3266e30c369efde1706d922b | [
"MIT"
] | null | null | null | kings/data/Untitled.ipynb | jkamiya5/flask | 5928ccf7ca4ffbae3266e30c369efde1706d922b | [
"MIT"
] | null | null | null | kings/data/Untitled.ipynb | jkamiya5/flask | 5928ccf7ca4ffbae3266e30c369efde1706d922b | [
"MIT"
] | null | null | null | 111.51318 | 127,508 | 0.7797 | [
[
[
"import matplotlib.pyplot as plt\nfrom matplotlib.backends.backend_agg import FigureCanvasAgg\nimport random\nimport string\nimport os\nimport pandas as pd\nfrom matplotlib.backends.backend_agg import FigureCanvasAgg\nimport numpy as np",
"_____no_output_____"
],
[
"player = \"hamilton\"\ndf = pd.read_csv('Kings_Game_HIST_' + player + '.csv', encoding='shift-jis')\ndf2 = pd.read_csv('Kings_Game_HISTORY.csv', encoding='shift-jis')\nleft = pd.DataFrame(df)\nright = pd.DataFrame(df2)\noutput = pd.merge(left, right, how='left', on=['Time'])",
"_____no_output_____"
],
[
"lose = output.where(output.WinLose == 0)\nwin = output.where(output.WinLose == 1)\nwin = win.dropna(subset=['No'])\nlose = lose.dropna(subset=['No'])",
"_____no_output_____"
],
[
"win.columns",
"_____no_output_____"
],
[
"win = output.loc[:, ['WinLose', 'PTS', 'TRB', 'AST', 'ST', 'BLK', 'BSR', 'TOV', 'PF','FO']]",
"_____no_output_____"
],
[
"win = win[0:35]\nwin",
"_____no_output_____"
],
[
"ppp = pd.tools.plotting.scatter_matrix(win, diagonal=\"kde\", figsize=(10,10))\nplt.show()\nppp",
"_____no_output_____"
],
[
"win",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA",
"_____no_output_____"
],
[
"pca = PCA().fit(win)\npca",
"_____no_output_____"
],
[
"print('load: ', pca.explained_variance_ratio_)",
"load: [ 0.59874416 0.15547728 0.0727403 0.06379112 0.04330902 0.02535846\n 0.01925496 0.01475331 0.00395409 0.00261729]\n"
],
[
"print('cum load: ', pca.explained_variance_ratio_.cumsum())",
"cum load: [ 0.59874416 0.75422144 0.82696174 0.89075287 0.93406189 0.95942035\n 0.97867531 0.99342861 0.99738271 1. ]\n"
],
[
"z = pca.transform(win)",
"_____no_output_____"
],
[
"plt.scatter(z[:, 0], z[:, 1], s=100)\nplt.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()",
"_____no_output_____"
],
[
"ax.bar(np.arange(10), pca.components_[0])\nax.set_xticks(np.arange(10) + 0.4)\nax.set_xticklabels(win.columns)\nplt.show()",
"_____no_output_____"
],
[
"from sklearn import cluster\nkmeans = cluster.KMeans(4).fit(win)\nprint(kmeans.cluster_centers_)",
"[[ 0.66666667 23.66666667 8.66666667 6.66666667 0.66666667 0.\n 0.66666667 1.66666667 2.66666667 4.66666667]\n [ 0.14285714 9.64285714 5. 1.64285714 0.28571429 1.\n 0.28571429 1.85714286 2.07142857 2.5 ]\n [ 0.57142857 12.14285714 7.85714286 3.5 1.57142857\n 0.92857143 0.28571429 2.71428571 1.42857143 2.71428571]\n [ 0.75 18.5 7.25 3. 1. 1.\n 0. 1.75 2.75 2. ]]\n"
],
[
"labels = kmeans.predict(win)\nprint(labels)",
"[2 1 2 2 2 3 3 0 1 1 2 1 1 2 2 2 1 1 1 3 2 2 1 1 1 2 0 0 2 1 2 3 1 2 1]\n"
],
[
"cz = pca.transform(kmeans.cluster_centers_)\ncolors = [\"b\", \"g\", \"r\", \"c\"]\nplt.scatter(z[:, 0], z[:, 1], s=100, c=[colors[i] for i in labels])\nplt.scatter(cz[:, 0], cz[:, 1], s=1000, c=\"orange\", marker=\"*\")\nplt.xlabel(\"PC1\")\nplt.ylabel(\"PC2\")\nplt.show()",
"_____no_output_____"
],
[
"from sklearn import model_selection",
"_____no_output_____"
],
[
"win",
"_____no_output_____"
],
[
"y = win[\"WinLose\"]\ny = y.replace('0.0', 'Lose')\ny = y.replace(1, 'Win')\ny",
"_____no_output_____"
],
[
"X = win.iloc[:,1:15]",
"_____no_output_____"
],
[
"x_train, x_test, y_train, y_test = model_selection.train_test_split(X.values, y.values, test_size=0.1)",
"_____no_output_____"
],
[
"x_train",
"_____no_output_____"
],
[
"from sklearn.neighbors import KNeighborsClassifier",
"_____no_output_____"
],
[
"knn = KNeighborsClassifier(3).fit(x_train, y_train)",
"_____no_output_____"
],
[
"for y_pred, y_true in zip(knn.predict(x_test), y_test):\n print(y_pred, y_true)",
"Win Win\nWin Lose\nLose Lose\nLose Lose\n"
],
[
"print(knn.score(x_test, y_test))",
"0.75\n"
],
[
"scores = model_selection.cross_val_score(KNeighborsClassifier(3), X, y, cv=5)\nmean_score = scores.mean()\nprint(mean_score)",
"0.457142857143\n"
],
[
"for k in range(1, 11):\n score = model_selection.cross_val_score(KNeighborsClassifier(k), X, y, cv=5)\n mean_score = scores.mean()\n print(k, mean_score)",
"1 0.457142857143\n2 0.457142857143\n3 0.457142857143\n4 0.457142857143\n5 0.457142857143\n6 0.457142857143\n7 0.457142857143\n8 0.457142857143\n9 0.457142857143\n10 0.457142857143\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d05298d29fb229a9c6c7fcab61f51082d33cbde3 | 8,344 | ipynb | Jupyter Notebook | analyses/dry day comp.ipynb | akuhnregnier/wildfire-analysis | a04deada145cec864051d2fb15aec1a53a0246b9 | [
"MIT"
] | null | null | null | analyses/dry day comp.ipynb | akuhnregnier/wildfire-analysis | a04deada145cec864051d2fb15aec1a53a0246b9 | [
"MIT"
] | null | null | null | analyses/dry day comp.ipynb | akuhnregnier/wildfire-analysis | a04deada145cec864051d2fb15aec1a53a0246b9 | [
"MIT"
] | null | null | null | 22.074074 | 97 | 0.514022 | [
[
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"from wildfires.analysis import *\nfrom wildfires.data.datasets import *",
"_____no_output_____"
],
[
"new = NewERA5_DryDayPeriod()",
"_____no_output_____"
],
[
"old = ERA5_DryDayPeriod()\nold.cubes = iris.cube.CubeList([old.cube[:20]])",
"_____no_output_____"
],
[
"iris.cube.CubeList([new.cube, old.cube]).realise_data()",
"_____no_output_____"
],
[
"diff = new.cube.data - old.cube.data",
"_____no_output_____"
],
[
"rel_abs_diff = np.mean(np.abs(diff) / old.cube.data, axis=0)",
"_____no_output_____"
],
[
"rel_diff = np.mean(diff / old.cube.data, axis=0)",
"_____no_output_____"
],
[
"cube_plotting(new.cube, fig=plt.figure(figsize=(15, 7)), log=True)",
"_____no_output_____"
],
[
"cube_plotting(old.cube, fig=plt.figure(figsize=(15, 7)), log=True)",
"_____no_output_____"
],
[
"cube_plotting(rel_abs_diff, fig=plt.figure(figsize=(15, 7)))",
"_____no_output_____"
],
[
"cube_plotting(rel_diff, cmap_midpoint=0, fig=plt.figure(figsize=(15, 7)))",
"_____no_output_____"
],
[
"np.where(rel_diff == np.min(rel_diff))",
"_____no_output_____"
],
[
"new.cube.coord(\"latitude\").points[449], new.cube.coord(\"longitude\").points[837]",
"_____no_output_____"
],
[
"plt.hist(diff.flatten(), bins=1000)\nplt.yscale(\"log\")",
"_____no_output_____"
],
[
"import glob\nimport os\n\nfrom tqdm import tqdm\n\ntpdir = os.path.join(DATA_DIR, \"ERA5\", \"tp_daily\")\n\n# Sort so that time is increasing.\nfilenames = sorted(\n glob.glob(os.path.join(tpdir, \"**\", \"*_daily_mean.nc\"), recursive=True)\n)\n\nprecip_cubes = iris.cube.CubeList()\n\nprev_dry_day_period = None\nprev_end = None\n\nwith warnings.catch_warnings():\n warnings.filterwarnings(\n \"ignore\",\n message=(\n \"Collapsing a non-contiguous coordinate. Metadata may not \"\n \"be fully descriptive for 'time'.\"\n ),\n )\n for filename in tqdm(filenames[:20]):\n raw_cube = iris.load_cube(filename)\n precip_cubes.append(raw_cube)",
"_____no_output_____"
],
[
"precip_cubes = homogenise_cube_attributes(precip_cubes)",
"_____no_output_____"
],
[
"all_combined = precip_cubes.concatenate_cube()",
"_____no_output_____"
],
[
"iris.cube.CubeList([all_combined]).realise_data()",
"_____no_output_____"
],
[
"combined = all_combined.intersection(latitude=(22.25, 22.26), longitude=(29.25, 29.26))",
"_____no_output_____"
],
[
"N = 400\nplt.figure(figsize=(20, 8))\nplt.plot(combined.data.flatten()[:N], marker=\"o\", linestyle=\"\")\nplt.hlines(y=M_PER_HR_THRES, xmin=0, xmax=N)",
"_____no_output_____"
],
[
"plt.figure(figsize=(20, 8))\nplt.plot(\n old.cube.intersection(\n latitude=(22.25, 22.26), longitude=(29.25, 29.26)\n ).data.flatten()[: N // 30],\n marker=\"o\",\n linestyle=\"\",\n)",
"_____no_output_____"
],
[
"plt.figure(figsize=(20, 8))\nplt.plot(\n new.cube.intersection(\n latitude=(22.25, 22.26), longitude=(29.25, 29.26)\n ).data.flatten()[: N // 30],\n marker=\"o\",\n linestyle=\"\",\n)",
"_____no_output_____"
],
[
"np.where(rel_diff == np.max(rel_diff))",
"_____no_output_____"
],
[
"all_combined.shape, old.cube.shape, new.cube.shape",
"_____no_output_____"
],
[
"old.cube.coord(\"latitude\").points[403]",
"_____no_output_____"
],
[
"old.cube.coord(\"longitude\").points[660]",
"_____no_output_____"
],
[
"plt.figure(figsize=(20, 8))\ndata = all_combined.intersection(latitude=(10.75, 10.76), longitude=(-15, -14.9)).data\nmax_d = np.max(data)\nbelow = data < M_PER_HR_THRES\nplt.scatter(\n list(range(len(data))), data, marker=\"o\", c=[\"r\" if b else \"b\" for b in below]\n)\nplt.hlines(y=M_PER_HR_THRES, xmin=0, xmax=all_combined.shape[0])\nx = 0\nfor cube in precip_cubes:\n d = cube.shape[0]\n plt.vlines(x=[x, x + d], ymin=0, ymax=max_d, colors=\"g\")\n x += d",
"_____no_output_____"
],
[
"plt.figure(figsize=(20, 8))\nplt.plot(old.cube.data[:, 403, 660], marker=\"o\", linestyle=\"\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(20, 8))\nplt.plot(new.cube.data[:, 403, 660], marker=\"o\", linestyle=\"\")",
"_____no_output_____"
],
[
"import scipy.ndimage\n\n# Find contiguous blocks in the time dimension where dry_days is True.\nstructure = np.zeros((3,), dtype=np.int64)\nstructure[:] = 1\nlabelled = scipy.ndimage.label(below, structure=structure)\nslices = scipy.ndimage.find_objects(labelled[0])",
"_____no_output_____"
],
[
"labelled",
"_____no_output_____"
],
[
"slices",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0529b5c6cdec6a8706dbef21510deccf96fef10 | 61,720 | ipynb | Jupyter Notebook | recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb | danielbank/deep-learning-v2-pytorch | 82fffb6696a43d6d8998a596b986e468359e5c19 | [
"MIT"
] | null | null | null | recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb | danielbank/deep-learning-v2-pytorch | 82fffb6696a43d6d8998a596b986e468359e5c19 | [
"MIT"
] | null | null | null | recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb | danielbank/deep-learning-v2-pytorch | 82fffb6696a43d6d8998a596b986e468359e5c19 | [
"MIT"
] | null | null | null | 47.550077 | 564 | 0.556335 | [
[
[
"# Character-Level LSTM in PyTorch\n\nIn this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina. **This model will be able to generate new text based on the text from the book!**\n\nThis network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Below is the general architecture of the character-wise RNN.\n\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"_____no_output_____"
],
[
"First let's load in our required resources for data loading and model creation.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F",
"_____no_output_____"
]
],
[
[
"## Load in Data\n\nThen, we'll load the Anna Karenina text file and convert it into integers for our network to use. ",
"_____no_output_____"
]
],
[
[
"# open text file and read in data as `text`\nwith open('data/anna.txt', 'r') as f:\n text = f.read()",
"_____no_output_____"
]
],
[
[
"Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.",
"_____no_output_____"
]
],
[
[
"text[:100]",
"_____no_output_____"
]
],
[
[
"### Tokenization\n\nIn the cells, below, I'm creating a couple **dictionaries** to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.",
"_____no_output_____"
]
],
[
[
"# encode the text and map each character to an integer and vice versa\n\n# we create two dictionaries:\n# 1. int2char, which maps integers to characters\n# 2. char2int, which maps characters to unique integers\nchars = tuple(set(text))\nint2char = dict(enumerate(chars))\nchar2int = {ch: ii for ii, ch in int2char.items()}\n\n# encode the text\nencoded = np.array([char2int[ch] for ch in text])",
"_____no_output_____"
]
],
[
[
"And we can see those same characters from above, encoded as integers.",
"_____no_output_____"
]
],
[
[
"encoded[:100]",
"_____no_output_____"
]
],
[
[
"## Pre-processing the data\n\nAs you can see in our char-RNN image above, our LSTM expects an input that is **one-hot encoded** meaning that each character is converted into an integer (via our created dictionary) and *then* converted into a column vector where only it's corresponding integer index will have the value of 1 and the rest of the vector will be filled with 0's. Since we're one-hot encoding the data, let's make a function to do that!\n",
"_____no_output_____"
]
],
[
[
"def one_hot_encode(arr, n_labels):\n \n # Initialize the the encoded array\n one_hot = np.zeros((arr.size, n_labels), dtype=np.float32)\n \n # Fill the appropriate elements with ones\n one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.\n \n # Finally reshape it to get back to the original array\n one_hot = one_hot.reshape((*arr.shape, n_labels))\n \n return one_hot",
"_____no_output_____"
],
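[
"# Editor's aside (not part of the original notebook): a quick sanity check that one-hot\n# encoding is reversible -- np.argmax along the last axis should recover the original integers.\n# Assumes the `one_hot_encode` function defined in the cell above has been run.\ndemo_seq = np.array([[3, 5, 1]])\nassert np.array_equal(np.argmax(one_hot_encode(demo_seq, 8), axis=-1), demo_seq)\nprint('one-hot encoding round-trips correctly')",
"_____no_output_____"
],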
[
"# check that the function works as expected\ntest_seq = np.array([[3, 5, 1]])\none_hot = one_hot_encode(test_seq, 8)\n\nprint(one_hot)",
"[[[0. 0. 0. 1. 0. 0. 0. 0.]\n [0. 0. 0. 0. 0. 1. 0. 0.]\n [0. 1. 0. 0. 0. 0. 0. 0.]]]\n"
]
],
[
[
"## Making training mini-batches\n\n\nTo train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:\n\n<img src=\"assets/[email protected]\" width=500px>\n\n\n<br>\n\nIn this example, we'll take the encoded characters (passed in as the `arr` parameter) and split them into multiple sequences, given by `batch_size`. Each of our sequences will be `seq_length` long.\n\n### Creating Batches\n\n**1. The first thing we need to do is discard some of the text so we only have completely full mini-batches. **\n\nEach batch contains $N \\times M$ characters, where $N$ is the batch size (the number of sequences in a batch) and $M$ is the seq_length or number of time steps in a sequence. Then, to get the total number of batches, $K$, that we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`, $N * M * K$.\n\n**2. After that, we need to split `arr` into $N$ batches. ** \n\nYou can do this using `arr.reshape(size)` where `size` is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences in a batch, so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \\times (M * K)$.\n\n**3. Now that we have this array, we can iterate through it to get our mini-batches. **\n\nThe idea is each batch is a $N \\times M$ window on the $N \\times (M * K)$ array. For each subsequent batch, the window moves over by `seq_length`. We also want to create both the input and target arrays. Remember that the targets are just the inputs shifted over by one character. The way I like to do this window is use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of tokens in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `seq_length` wide.\n\n> **TODO:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, **type out the solution code yourself.**",
"_____no_output_____"
]
],
[
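[
"# Editor's aside (not part of the original notebook): the batch arithmetic described above,\n# worked through on a made-up array of 1000 tokens with batch_size=8 and seq_length=50.\n# Each batch holds 8 * 50 = 400 characters, so 1000 // 400 = 2 full batches fit;\n# we keep 2 * 400 = 800 characters, reshape to (8, 100), and slide a 50-column-wide window.\ndemo_tokens = np.arange(1000)\ndemo_keep = (len(demo_tokens) // (8 * 50)) * 8 * 50\ndemo_arr = demo_tokens[:demo_keep].reshape((8, -1))\nprint(demo_arr.shape) # (8, 100) -> two 50-column windows per row",
"_____no_output_____"
],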
[
"def get_batches(arr, batch_size, seq_length):\n '''Create a generator that returns batches of size\n batch_size x seq_length from arr.\n \n Arguments\n ---------\n arr: Array you want to make batches from\n batch_size: Batch size, the number of sequences per batch\n seq_length: Number of encoded chars in a sequence\n '''\n \n batch_size_total = batch_size * seq_length\n # total number of batches we can make\n n_batches = len(arr)//batch_size_total\n \n # Keep only enough characters to make full batches\n arr = arr[:n_batches * batch_size_total]\n # Reshape into batch_size rows\n arr = arr.reshape((batch_size, -1))\n \n # iterate through the array, one sequence at a time\n for n in range(0, arr.shape[1], seq_length):\n # The features\n x = arr[:, n:n+seq_length]\n # The targets, shifted by one\n y = np.zeros_like(x)\n try:\n y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n+seq_length]\n except IndexError:\n y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0]\n yield x, y",
"_____no_output_____"
]
],
[
[
"### Test Your Implementation\n\nNow I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 8 and 50 sequence steps.",
"_____no_output_____"
]
],
[
[
"batches = get_batches(encoded, 8, 50)\nx, y = next(batches)",
"_____no_output_____"
],
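[
"# Editor's aside (not part of the original notebook): each yielded batch should be\n# batch_size x seq_length, matching the batching figure above.\nprint(x.shape, y.shape) # expected: (8, 50) (8, 50)",
"_____no_output_____"
],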
[
"# printing out the first 10 items in a sequence\nprint('x\\n', x[:10, :10])\nprint('\\ny\\n', y[:10, :10])",
"x\n [[54 68 75 18 59 45 37 33 66 48]\n [25 46 26 33 59 68 75 59 33 75]\n [45 26 70 33 46 37 33 75 33 13]\n [25 33 59 68 45 33 39 68 53 45]\n [33 25 75 51 33 68 45 37 33 59]\n [39 43 25 25 53 46 26 33 75 26]\n [33 82 26 26 75 33 68 75 70 33]\n [20 27 7 46 26 25 77 81 76 33]]\n\ny\n [[68 75 18 59 45 37 33 66 48 48]\n [46 26 33 59 68 75 59 33 75 59]\n [26 70 33 46 37 33 75 33 13 46]\n [33 59 68 45 33 39 68 53 45 13]\n [25 75 51 33 68 45 37 33 59 45]\n [43 25 25 53 46 26 33 75 26 70]\n [82 26 26 75 33 68 75 70 33 25]\n [27 7 46 26 25 77 81 76 33 11]]\n"
]
],
[
[
"If you implemented `get_batches` correctly, the above output should look something like \n```\nx\n [[25 8 60 11 45 27 28 73 1 2]\n [17 7 20 73 45 8 60 45 73 60]\n [27 20 80 73 7 28 73 60 73 65]\n [17 73 45 8 27 73 66 8 46 27]\n [73 17 60 12 73 8 27 28 73 45]\n [66 64 17 17 46 7 20 73 60 20]\n [73 76 20 20 60 73 8 60 80 73]\n [47 35 43 7 20 17 24 50 37 73]]\n\ny\n [[ 8 60 11 45 27 28 73 1 2 2]\n [ 7 20 73 45 8 60 45 73 60 45]\n [20 80 73 7 28 73 60 73 65 7]\n [73 45 8 27 73 66 8 46 27 65]\n [17 60 12 73 8 27 28 73 45 27]\n [64 17 17 46 7 20 73 60 20 80]\n [76 20 20 60 73 8 60 80 73 17]\n [35 43 7 20 17 24 50 37 73 36]]\n ```\n although the exact numbers may be different. Check to make sure the data is shifted over one step for `y`.",
"_____no_output_____"
],
[
"---\n## Defining the network with PyTorch\n\nBelow is where you'll define the network.\n\n<img src=\"assets/charRNN.png\" width=500px>\n\nNext, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. You've also been given a method for predicting characters.",
"_____no_output_____"
],
[
"### Model Structure\n\nIn `__init__` the suggested structure is as follows:\n* Create and store the necessary dictionaries (this has been done for you)\n* Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size `n_hidden`, a number of layers `n_layers`, a dropout probability `drop_prob`, and a batch_first boolean (True, since we are batching)\n* Define a dropout layer with `drop_prob`\n* Define a fully-connected layer with params: input size `n_hidden` and output size (the number of characters)\n* Finally, initialize the weights (again, this has been given)\n\nNote that some parameters have been named and given in the `__init__` function, and we use them and store them by doing something like `self.drop_prob = drop_prob`.",
"_____no_output_____"
],
[
"---\n### LSTM Inputs/Outputs\n\nYou can create a basic [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) as follows\n\n```python\nself.lstm = nn.LSTM(input_size, n_hidden, n_layers, \n dropout=drop_prob, batch_first=True)\n```\n\nwhere `input_size` is the number of characters this cell expects to see as sequential input, and `n_hidden` is the number of units in the hidden layers in the cell. And we can add dropout by adding a dropout parameter with a specified probability; this will automatically add dropout to the inputs or outputs. Finally, in the `forward` function, we can stack up the LSTM cells into layers using `.view`. With this, you pass in a list of cells and it will send the output of one cell into the next cell.\n\nWe also need to create an initial hidden state of all zeros. This is done like so\n\n```python\nself.init_hidden()\n```",
"_____no_output_____"
]
],
[
[
"# check if GPU is available\ntrain_on_gpu = torch.cuda.is_available()\nif(train_on_gpu):\n print('Training on GPU!')\nelse: \n print('No GPU available, training on CPU; consider making n_epochs very small.')",
"Training on GPU!\n"
],
[
"class CharRNN(nn.Module):\n \n def __init__(self, tokens, n_hidden=256, n_layers=2,\n drop_prob=0.5, lr=0.001):\n super().__init__()\n self.drop_prob = drop_prob\n self.n_layers = n_layers\n self.n_hidden = n_hidden\n self.lr = lr\n \n # creating character dictionaries\n self.chars = tokens\n self.int2char = dict(enumerate(self.chars))\n self.char2int = {ch: ii for ii, ch in self.int2char.items()}\n \n ## TODO: define the LSTM\n self.lstm = nn.LSTM(len(self.chars), n_hidden, n_layers, \n dropout=drop_prob, batch_first=True)\n \n ## TODO: define a dropout layer\n self.dropout = nn.Dropout(drop_prob)\n \n ## TODO: define the final, fully-connected output layer\n self.fc = nn.Linear(n_hidden, len(self.chars))\n \n \n def forward(self, x, hidden):\n ''' Forward pass through the network. \n These inputs are x, and the hidden/cell state `hidden`. '''\n \n ## TODO: Get the outputs and the new hidden state from the lstm\n r_output, hidden = self.lstm(x, hidden)\n \n ## TODO: pass through a dropout layer\n out = self.dropout(r_output)\n \n # Stack up LSTM outputs using view\n # you may need to use contiguous to reshape the output\n out = out.contiguous().view(-1, self.n_hidden)\n \n ## TODO: put x through the fully-connected layer\n out = self.fc(out)\n \n # return the final output and the hidden state\n return out, hidden\n \n \n def init_hidden(self, batch_size):\n ''' Initializes hidden state '''\n # Create two new tensors with sizes n_layers x batch_size x n_hidden,\n # initialized to zero, for hidden state and cell state of LSTM\n weight = next(self.parameters()).data\n \n if (train_on_gpu):\n hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(),\n weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda())\n else:\n hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(),\n weight.new(self.n_layers, batch_size, self.n_hidden).zero_())\n \n return hidden\n ",
"_____no_output_____"
]
],
[
[
"## Time to train\n\nThe train function gives us the ability to set the number of epochs, the learning rate, and other parameters.\n\nBelow we're using an Adam optimizer and cross entropy loss since we are looking at character class scores as output. We calculate the loss and perform backpropagation, as usual!\n\nA couple of details about training: \n>* Within the batch loop, we detach the hidden state from its history; this time setting it equal to a new *tuple* variable because an LSTM has a hidden state that is a tuple of the hidden and cell states.\n* We use [`clip_grad_norm_`](https://pytorch.org/docs/stable/_modules/torch/nn/utils/clip_grad.html) to help prevent exploding gradients.",
"_____no_output_____"
]
],
[
[
"def train(net, data, epochs=10, batch_size=10, seq_length=50, lr=0.001, clip=5, val_frac=0.1, print_every=10):\n ''' Training a network \n \n Arguments\n ---------\n \n net: CharRNN network\n data: text data to train the network\n epochs: Number of epochs to train\n batch_size: Number of mini-sequences per mini-batch, aka batch size\n seq_length: Number of character steps per mini-batch\n lr: learning rate\n clip: gradient clipping\n val_frac: Fraction of data to hold out for validation\n print_every: Number of steps for printing training and validation loss\n \n '''\n net.train()\n \n opt = torch.optim.Adam(net.parameters(), lr=lr)\n criterion = nn.CrossEntropyLoss()\n \n # create training and validation data\n val_idx = int(len(data)*(1-val_frac))\n data, val_data = data[:val_idx], data[val_idx:]\n \n if(train_on_gpu):\n net.cuda()\n \n counter = 0\n n_chars = len(net.chars)\n for e in range(epochs):\n # initialize hidden state\n h = net.init_hidden(batch_size)\n \n for x, y in get_batches(data, batch_size, seq_length):\n counter += 1\n \n # One-hot encode our data and make them Torch tensors\n x = one_hot_encode(x, n_chars)\n inputs, targets = torch.from_numpy(x), torch.from_numpy(y)\n \n if(train_on_gpu):\n inputs, targets = inputs.cuda(), targets.cuda()\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n h = tuple([each.data for each in h])\n\n # zero accumulated gradients\n net.zero_grad()\n \n # get the output from the model\n output, h = net(inputs, h)\n \n # calculate the loss and perform backprop\n loss = criterion(output, targets.view(batch_size*seq_length).long())\n loss.backward()\n # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.\n nn.utils.clip_grad_norm_(net.parameters(), clip)\n opt.step()\n \n # loss stats\n if counter % print_every == 0:\n # Get validation loss\n val_h = net.init_hidden(batch_size)\n val_losses = []\n net.eval()\n for x, y in get_batches(val_data, batch_size, seq_length):\n # One-hot encode our data and make them Torch tensors\n x = one_hot_encode(x, n_chars)\n x, y = torch.from_numpy(x), torch.from_numpy(y)\n \n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n val_h = tuple([each.data for each in val_h])\n \n inputs, targets = x, y\n if(train_on_gpu):\n inputs, targets = inputs.cuda(), targets.cuda()\n\n output, val_h = net(inputs, val_h)\n val_loss = criterion(output, targets.view(batch_size*seq_length).long())\n \n val_losses.append(val_loss.item())\n \n net.train() # reset to train mode after iterationg through validation data\n \n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Step: {}...\".format(counter),\n \"Loss: {:.4f}...\".format(loss.item()),\n \"Val Loss: {:.4f}\".format(np.mean(val_losses)))",
"_____no_output_____"
]
],
[
[
"## Instantiating the model\n\nNow we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batches sizes, and start training!",
"_____no_output_____"
]
],
[
[
"# define and print the net\nn_hidden=512\nn_layers=2\n\nnet = CharRNN(chars, n_hidden, n_layers)\nprint(net)",
"CharRNN(\n (lstm): LSTM(83, 512, num_layers=2, batch_first=True, dropout=0.5)\n (dropout): Dropout(p=0.5, inplace=False)\n (fc): Linear(in_features=512, out_features=83, bias=True)\n)\n"
],
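[
"# Editor's aside (not part of the original notebook): a cheap sanity check of model size.\n# Counting trainable parameters helps when comparing n_hidden / n_layers choices.\nn_params = sum(p.numel() for p in net.parameters() if p.requires_grad)\nprint(f'{n_params:,} trainable parameters')",
"_____no_output_____"
],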
[
"batch_size = 128\nseq_length = 100\nn_epochs = 20 # start smaller if you are just testing initial behavior\n\n# train the model\ntrain(net, encoded, epochs=n_epochs, batch_size=batch_size, seq_length=seq_length, lr=0.001, print_every=10)",
"Epoch: 1/20... Step: 10... Loss: 3.2482... Val Loss: 3.2114\nEpoch: 1/20... Step: 20... Loss: 3.1410... Val Loss: 3.1354\nEpoch: 1/20... Step: 30... Loss: 3.1360... Val Loss: 3.1238\nEpoch: 1/20... Step: 40... Loss: 3.1139... Val Loss: 3.1195\nEpoch: 1/20... Step: 50... Loss: 3.1408... Val Loss: 3.1170\nEpoch: 1/20... Step: 60... Loss: 3.1161... Val Loss: 3.1144\nEpoch: 1/20... Step: 70... Loss: 3.1051... Val Loss: 3.1113\nEpoch: 1/20... Step: 80... Loss: 3.1133... Val Loss: 3.1029\nEpoch: 1/20... Step: 90... Loss: 3.1048... Val Loss: 3.0833\nEpoch: 1/20... Step: 100... Loss: 3.0508... Val Loss: 3.0351\nEpoch: 1/20... Step: 110... Loss: 2.9844... Val Loss: 2.9579\nEpoch: 1/20... Step: 120... Loss: 2.8520... Val Loss: 2.8698\nEpoch: 1/20... Step: 130... Loss: 2.7709... Val Loss: 2.7311\nEpoch: 2/20... Step: 140... Loss: 2.7156... Val Loss: 2.6316\nEpoch: 2/20... Step: 150... Loss: 2.5914... Val Loss: 2.5473\nEpoch: 2/20... Step: 160... Loss: 2.5292... Val Loss: 2.4892\nEpoch: 2/20... Step: 170... Loss: 2.4596... Val Loss: 2.4423\nEpoch: 2/20... Step: 180... Loss: 2.4390... Val Loss: 2.4093\nEpoch: 2/20... Step: 190... Loss: 2.3830... Val Loss: 2.3769\nEpoch: 2/20... Step: 200... Loss: 2.3723... Val Loss: 2.3445\nEpoch: 2/20... Step: 210... Loss: 2.3436... Val Loss: 2.3148\nEpoch: 2/20... Step: 220... Loss: 2.2939... Val Loss: 2.2818\nEpoch: 2/20... Step: 230... Loss: 2.2846... Val Loss: 2.2509\nEpoch: 2/20... Step: 240... Loss: 2.2627... Val Loss: 2.2227\nEpoch: 2/20... Step: 250... Loss: 2.1919... Val Loss: 2.1996\nEpoch: 2/20... Step: 260... Loss: 2.1661... Val Loss: 2.1744\nEpoch: 2/20... Step: 270... Loss: 2.1747... Val Loss: 2.1523\nEpoch: 3/20... Step: 280... Loss: 2.1612... Val Loss: 2.1336\nEpoch: 3/20... Step: 290... Loss: 2.1422... Val Loss: 2.1000\nEpoch: 3/20... Step: 300... Loss: 2.1086... Val Loss: 2.0798\nEpoch: 3/20... Step: 310... Loss: 2.0786... Val Loss: 2.0613\nEpoch: 3/20... Step: 320... Loss: 2.0523... Val Loss: 2.0378\nEpoch: 3/20... Step: 330... Loss: 2.0238... Val Loss: 2.0222\nEpoch: 3/20... Step: 340... Loss: 2.0444... Val Loss: 1.9995\nEpoch: 3/20... Step: 350... Loss: 2.0152... Val Loss: 1.9814\nEpoch: 3/20... Step: 360... Loss: 1.9430... Val Loss: 1.9665\nEpoch: 3/20... Step: 370... Loss: 1.9763... Val Loss: 1.9481\nEpoch: 3/20... Step: 380... Loss: 1.9566... Val Loss: 1.9345\nEpoch: 3/20... Step: 390... Loss: 1.9172... Val Loss: 1.9153\nEpoch: 3/20... Step: 400... Loss: 1.9015... Val Loss: 1.9021\nEpoch: 3/20... Step: 410... Loss: 1.9104... Val Loss: 1.8867\nEpoch: 4/20... Step: 420... Loss: 1.9027... Val Loss: 1.8719\nEpoch: 4/20... Step: 430... Loss: 1.8848... Val Loss: 1.8577\nEpoch: 4/20... Step: 440... Loss: 1.8724... Val Loss: 1.8445\nEpoch: 4/20... Step: 450... Loss: 1.8158... Val Loss: 1.8321\nEpoch: 4/20... Step: 460... Loss: 1.7973... Val Loss: 1.8220\nEpoch: 4/20... Step: 470... Loss: 1.8302... Val Loss: 1.8081\nEpoch: 4/20... Step: 480... Loss: 1.8078... Val Loss: 1.7975\nEpoch: 4/20... Step: 490... Loss: 1.8182... Val Loss: 1.7851\nEpoch: 4/20... Step: 500... Loss: 1.8034... Val Loss: 1.7736\nEpoch: 4/20... Step: 510... Loss: 1.7886... Val Loss: 1.7640\nEpoch: 4/20... Step: 520... Loss: 1.8058... Val Loss: 1.7547\nEpoch: 4/20... Step: 530... Loss: 1.7575... Val Loss: 1.7445\nEpoch: 4/20... Step: 540... Loss: 1.7292... Val Loss: 1.7370\nEpoch: 4/20... Step: 550... Loss: 1.7692... Val Loss: 1.7253\nEpoch: 5/20... Step: 560... Loss: 1.7331... Val Loss: 1.7184\nEpoch: 5/20... Step: 570... Loss: 1.7250... Val Loss: 1.7056\nEpoch: 5/20... Step: 580... 
Loss: 1.6994... Val Loss: 1.6949\nEpoch: 5/20... Step: 590... Loss: 1.6999... Val Loss: 1.6887\nEpoch: 5/20... Step: 600... Loss: 1.6930... Val Loss: 1.6822\nEpoch: 5/20... Step: 610... Loss: 1.6770... Val Loss: 1.6757\nEpoch: 5/20... Step: 620... Loss: 1.6782... Val Loss: 1.6705\nEpoch: 5/20... Step: 630... Loss: 1.7051... Val Loss: 1.6594\nEpoch: 5/20... Step: 640... Loss: 1.6475... Val Loss: 1.6531\nEpoch: 5/20... Step: 650... Loss: 1.6617... Val Loss: 1.6462\nEpoch: 5/20... Step: 660... Loss: 1.6298... Val Loss: 1.6378\nEpoch: 5/20... Step: 670... Loss: 1.6466... Val Loss: 1.6343\nEpoch: 5/20... Step: 680... Loss: 1.6483... Val Loss: 1.6273\nEpoch: 5/20... Step: 690... Loss: 1.6326... Val Loss: 1.6203\nEpoch: 6/20... Step: 700... Loss: 1.6298... Val Loss: 1.6155\nEpoch: 6/20... Step: 710... Loss: 1.6189... Val Loss: 1.6099\nEpoch: 6/20... Step: 720... Loss: 1.6038... Val Loss: 1.6019\nEpoch: 6/20... Step: 730... Loss: 1.6189... Val Loss: 1.5949\nEpoch: 6/20... Step: 740... Loss: 1.5844... Val Loss: 1.5916\nEpoch: 6/20... Step: 750... Loss: 1.5705... Val Loss: 1.5838\nEpoch: 6/20... Step: 760... Loss: 1.6029... Val Loss: 1.5829\nEpoch: 6/20... Step: 770... Loss: 1.5919... Val Loss: 1.5786\nEpoch: 6/20... Step: 780... Loss: 1.5683... Val Loss: 1.5708\nEpoch: 6/20... Step: 790... Loss: 1.5492... Val Loss: 1.5678\nEpoch: 6/20... Step: 800... Loss: 1.5784... Val Loss: 1.5631\nEpoch: 6/20... Step: 810... Loss: 1.5611... Val Loss: 1.5589\nEpoch: 6/20... Step: 820... Loss: 1.5152... Val Loss: 1.5521\nEpoch: 6/20... Step: 830... Loss: 1.5756... Val Loss: 1.5487\nEpoch: 7/20... Step: 840... Loss: 1.5236... Val Loss: 1.5427\nEpoch: 7/20... Step: 850... Loss: 1.5457... Val Loss: 1.5427\nEpoch: 7/20... Step: 860... Loss: 1.5223... Val Loss: 1.5339\nEpoch: 7/20... Step: 870... Loss: 1.5323... Val Loss: 1.5283\nEpoch: 7/20... Step: 880... Loss: 1.5344... Val Loss: 1.5250\nEpoch: 7/20... Step: 890... Loss: 1.5340... Val Loss: 1.5217\nEpoch: 7/20... Step: 900... Loss: 1.5128... Val Loss: 1.5206\nEpoch: 7/20... Step: 910... Loss: 1.4882... Val Loss: 1.5201\nEpoch: 7/20... Step: 920... Loss: 1.5208... Val Loss: 1.5138\nEpoch: 7/20... Step: 930... Loss: 1.4947... Val Loss: 1.5096\nEpoch: 7/20... Step: 940... Loss: 1.4995... Val Loss: 1.5051\nEpoch: 7/20... Step: 950... Loss: 1.5136... Val Loss: 1.5007\nEpoch: 7/20... Step: 960... Loss: 1.5143... Val Loss: 1.4966\nEpoch: 7/20... Step: 970... Loss: 1.5095... Val Loss: 1.5004\nEpoch: 8/20... Step: 980... Loss: 1.4829... Val Loss: 1.4945\nEpoch: 8/20... Step: 990... Loss: 1.4891... Val Loss: 1.4878\nEpoch: 8/20... Step: 1000... Loss: 1.4794... Val Loss: 1.4834\nEpoch: 8/20... Step: 1010... Loss: 1.5210... Val Loss: 1.4804\nEpoch: 8/20... Step: 1020... Loss: 1.4882... Val Loss: 1.4778\nEpoch: 8/20... Step: 1030... Loss: 1.4722... Val Loss: 1.4736\nEpoch: 8/20... Step: 1040... Loss: 1.4865... Val Loss: 1.4733\nEpoch: 8/20... Step: 1050... Loss: 1.4553... Val Loss: 1.4747\nEpoch: 8/20... Step: 1060... Loss: 1.4647... Val Loss: 1.4654\nEpoch: 8/20... Step: 1070... Loss: 1.4727... Val Loss: 1.4644\nEpoch: 8/20... Step: 1080... Loss: 1.4652... Val Loss: 1.4622\nEpoch: 8/20... Step: 1090... Loss: 1.4416... Val Loss: 1.4591\nEpoch: 8/20... Step: 1100... Loss: 1.4400... Val Loss: 1.4560\nEpoch: 8/20... Step: 1110... Loss: 1.4567... Val Loss: 1.4523\nEpoch: 9/20... Step: 1120... Loss: 1.4561... Val Loss: 1.4521\nEpoch: 9/20... Step: 1130... Loss: 1.4460... Val Loss: 1.4495\nEpoch: 9/20... Step: 1140... Loss: 1.4466... Val Loss: 1.4437\nEpoch: 9/20... Step: 1150... 
Loss: 1.4679... Val Loss: 1.4423\nEpoch: 9/20... Step: 1160... Loss: 1.4279... Val Loss: 1.4398\nEpoch: 9/20... Step: 1170... Loss: 1.4303... Val Loss: 1.4372\nEpoch: 9/20... Step: 1180... Loss: 1.4196... Val Loss: 1.4382\nEpoch: 9/20... Step: 1190... Loss: 1.4541... Val Loss: 1.4338\nEpoch: 9/20... Step: 1200... Loss: 1.4059... Val Loss: 1.4305\nEpoch: 9/20... Step: 1210... Loss: 1.4142... Val Loss: 1.4277\nEpoch: 9/20... Step: 1220... Loss: 1.4176... Val Loss: 1.4261\nEpoch: 9/20... Step: 1230... Loss: 1.4006... Val Loss: 1.4275\nEpoch: 9/20... Step: 1240... Loss: 1.4079... Val Loss: 1.4239\nEpoch: 9/20... Step: 1250... Loss: 1.4157... Val Loss: 1.4224\nEpoch: 10/20... Step: 1260... Loss: 1.4191... Val Loss: 1.4196\nEpoch: 10/20... Step: 1270... Loss: 1.4144... Val Loss: 1.4178\nEpoch: 10/20... Step: 1280... Loss: 1.4276... Val Loss: 1.4137\nEpoch: 10/20... Step: 1290... Loss: 1.4112... Val Loss: 1.4160\nEpoch: 10/20... Step: 1300... Loss: 1.3895... Val Loss: 1.4108\nEpoch: 10/20... Step: 1310... Loss: 1.4017... Val Loss: 1.4084\nEpoch: 10/20... Step: 1320... Loss: 1.3792... Val Loss: 1.4094\nEpoch: 10/20... Step: 1330... Loss: 1.3848... Val Loss: 1.4071\nEpoch: 10/20... Step: 1340... Loss: 1.3680... Val Loss: 1.4056\nEpoch: 10/20... Step: 1350... Loss: 1.3753... Val Loss: 1.4014\nEpoch: 10/20... Step: 1360... Loss: 1.3737... Val Loss: 1.3971\nEpoch: 10/20... Step: 1370... Loss: 1.3583... Val Loss: 1.4007\nEpoch: 10/20... Step: 1380... Loss: 1.4051... Val Loss: 1.3960\nEpoch: 10/20... Step: 1390... Loss: 1.4199... Val Loss: 1.3956\nEpoch: 11/20... Step: 1400... Loss: 1.4129... Val Loss: 1.3954\nEpoch: 11/20... Step: 1410... Loss: 1.4208... Val Loss: 1.3943\nEpoch: 11/20... Step: 1420... Loss: 1.4071... Val Loss: 1.3881\nEpoch: 11/20... Step: 1430... Loss: 1.3801... Val Loss: 1.3923\nEpoch: 11/20... Step: 1440... Loss: 1.4088... Val Loss: 1.3927\nEpoch: 11/20... Step: 1450... Loss: 1.3344... Val Loss: 1.3870\nEpoch: 11/20... Step: 1460... Loss: 1.3599... Val Loss: 1.3864\nEpoch: 11/20... Step: 1470... Loss: 1.3470... Val Loss: 1.3850\nEpoch: 11/20... Step: 1480... Loss: 1.3596... Val Loss: 1.3819\nEpoch: 11/20... Step: 1490... Loss: 1.3603... Val Loss: 1.3798\nEpoch: 11/20... Step: 1500... Loss: 1.3483... Val Loss: 1.3807\nEpoch: 11/20... Step: 1510... Loss: 1.3253... Val Loss: 1.3807\nEpoch: 11/20... Step: 1520... Loss: 1.3710... Val Loss: 1.3751\nEpoch: 12/20... Step: 1530... Loss: 1.4196... Val Loss: 1.3775\nEpoch: 12/20... Step: 1540... Loss: 1.3718... Val Loss: 1.3752\nEpoch: 12/20... Step: 1550... Loss: 1.3842... Val Loss: 1.3743\nEpoch: 12/20... Step: 1560... Loss: 1.3866... Val Loss: 1.3698\nEpoch: 12/20... Step: 1570... Loss: 1.3444... Val Loss: 1.3744\nEpoch: 12/20... Step: 1580... Loss: 1.3167... Val Loss: 1.3729\nEpoch: 12/20... Step: 1590... Loss: 1.3057... Val Loss: 1.3692\nEpoch: 12/20... Step: 1600... Loss: 1.3297... Val Loss: 1.3698\nEpoch: 12/20... Step: 1610... Loss: 1.3380... Val Loss: 1.3704\nEpoch: 12/20... Step: 1620... Loss: 1.3254... Val Loss: 1.3650\nEpoch: 12/20... Step: 1630... Loss: 1.3539... Val Loss: 1.3628\nEpoch: 12/20... Step: 1640... Loss: 1.3310... Val Loss: 1.3656\nEpoch: 12/20... Step: 1650... Loss: 1.3040... Val Loss: 1.3641\nEpoch: 12/20... Step: 1660... Loss: 1.3597... Val Loss: 1.3606\nEpoch: 13/20... Step: 1670... Loss: 1.3311... Val Loss: 1.3615\nEpoch: 13/20... Step: 1680... Loss: 1.3349... Val Loss: 1.3575\nEpoch: 13/20... Step: 1690... Loss: 1.3168... Val Loss: 1.3589\nEpoch: 13/20... Step: 1700... Loss: 1.3228... 
Val Loss: 1.3540\nEpoch: 13/20... Step: 1710... Loss: 1.2991... Val Loss: 1.3595\nEpoch: 13/20... Step: 1720... Loss: 1.3131... Val Loss: 1.3567\nEpoch: 13/20... Step: 1730... Loss: 1.3383... Val Loss: 1.3541\nEpoch: 13/20... Step: 1740... Loss: 1.3161... Val Loss: 1.3528\nEpoch: 13/20... Step: 1750... Loss: 1.2798... Val Loss: 1.3588\nEpoch: 13/20... Step: 1760... Loss: 1.3097... Val Loss: 1.3541\nEpoch: 13/20... Step: 1770... Loss: 1.3252... Val Loss: 1.3523\nEpoch: 13/20... Step: 1780... Loss: 1.3103... Val Loss: 1.3512\nEpoch: 13/20... Step: 1790... Loss: 1.2921... Val Loss: 1.3480\nEpoch: 13/20... Step: 1800... Loss: 1.3165... Val Loss: 1.3468\nEpoch: 14/20... Step: 1810... Loss: 1.3175... Val Loss: 1.3458\nEpoch: 14/20... Step: 1820... Loss: 1.3055... Val Loss: 1.3433\nEpoch: 14/20... Step: 1830... Loss: 1.3234... Val Loss: 1.3466\nEpoch: 14/20... Step: 1840... Loss: 1.2678... Val Loss: 1.3471\nEpoch: 14/20... Step: 1850... Loss: 1.2659... Val Loss: 1.3489\nEpoch: 14/20... Step: 1860... Loss: 1.3215... Val Loss: 1.3451\nEpoch: 14/20... Step: 1870... Loss: 1.3197... Val Loss: 1.3400\nEpoch: 14/20... Step: 1880... Loss: 1.3095... Val Loss: 1.3424\nEpoch: 14/20... Step: 1890... Loss: 1.3336... Val Loss: 1.3434\nEpoch: 14/20... Step: 1900... Loss: 1.3067... Val Loss: 1.3394\nEpoch: 14/20... Step: 1910... Loss: 1.3049... Val Loss: 1.3380\nEpoch: 14/20... Step: 1920... Loss: 1.3004... Val Loss: 1.3395\nEpoch: 14/20... Step: 1930... Loss: 1.2739... Val Loss: 1.3377\nEpoch: 14/20... Step: 1940... Loss: 1.3157... Val Loss: 1.3353\nEpoch: 15/20... Step: 1950... Loss: 1.2943... Val Loss: 1.3358\nEpoch: 15/20... Step: 1960... Loss: 1.2895... Val Loss: 1.3343\nEpoch: 15/20... Step: 1970... Loss: 1.2929... Val Loss: 1.3315\nEpoch: 15/20... Step: 1980... Loss: 1.2891... Val Loss: 1.3335\nEpoch: 15/20... Step: 1990... Loss: 1.2827... Val Loss: 1.3371\nEpoch: 15/20... Step: 2000... Loss: 1.2699... Val Loss: 1.3355\nEpoch: 15/20... Step: 2010... Loss: 1.2878... Val Loss: 1.3301\nEpoch: 15/20... Step: 2020... Loss: 1.3037... Val Loss: 1.3344\nEpoch: 15/20... Step: 2030... Loss: 1.2671... Val Loss: 1.3342\nEpoch: 15/20... Step: 2040... Loss: 1.2919... Val Loss: 1.3325\nEpoch: 15/20... Step: 2050... Loss: 1.2736... Val Loss: 1.3303\nEpoch: 15/20... Step: 2060... Loss: 1.2852... Val Loss: 1.3279\nEpoch: 15/20... Step: 2070... Loss: 1.2926... Val Loss: 1.3213\nEpoch: 15/20... Step: 2080... Loss: 1.2809... Val Loss: 1.3224\nEpoch: 16/20... Step: 2090... Loss: 1.2955... Val Loss: 1.3228\nEpoch: 16/20... Step: 2100... Loss: 1.2697... Val Loss: 1.3226\nEpoch: 16/20... Step: 2110... Loss: 1.2672... Val Loss: 1.3233\nEpoch: 16/20... Step: 2120... Loss: 1.2819... Val Loss: 1.3234\nEpoch: 16/20... Step: 2130... Loss: 1.2560... Val Loss: 1.3248\nEpoch: 16/20... Step: 2140... Loss: 1.2631... Val Loss: 1.3232\nEpoch: 16/20... Step: 2150... Loss: 1.2937... Val Loss: 1.3202\nEpoch: 16/20... Step: 2160... Loss: 1.2618... Val Loss: 1.3238\nEpoch: 16/20... Step: 2170... Loss: 1.2633... Val Loss: 1.3237\nEpoch: 16/20... Step: 2180... Loss: 1.2604... Val Loss: 1.3224\nEpoch: 16/20... Step: 2190... Loss: 1.2807... Val Loss: 1.3211\nEpoch: 16/20... Step: 2200... Loss: 1.2664... Val Loss: 1.3189\nEpoch: 16/20... Step: 2210... Loss: 1.2232... Val Loss: 1.3150\nEpoch: 16/20... Step: 2220... Loss: 1.2737... Val Loss: 1.3184\nEpoch: 17/20... Step: 2230... Loss: 1.2517... Val Loss: 1.3176\nEpoch: 17/20... Step: 2240... Loss: 1.2480... Val Loss: 1.3181\nEpoch: 17/20... Step: 2250... Loss: 1.2364... Val Loss: 1.3136\nEpoch: 17/20... 
Step: 2260... Loss: 1.2542... Val Loss: 1.3144\nEpoch: 17/20... Step: 2270... Loss: 1.2624... Val Loss: 1.3179\nEpoch: 17/20... Step: 2280... Loss: 1.2746... Val Loss: 1.3178\nEpoch: 17/20... Step: 2290... Loss: 1.2668... Val Loss: 1.3142\nEpoch: 17/20... Step: 2300... Loss: 1.2300... Val Loss: 1.3199\nEpoch: 17/20... Step: 2310... Loss: 1.2596... Val Loss: 1.3183\nEpoch: 17/20... Step: 2320... Loss: 1.2488... Val Loss: 1.3139\nEpoch: 17/20... Step: 2330... Loss: 1.2533... Val Loss: 1.3163\nEpoch: 17/20... Step: 2340... Loss: 1.2689... Val Loss: 1.3139\nEpoch: 17/20... Step: 2350... Loss: 1.2705... Val Loss: 1.3107\nEpoch: 17/20... Step: 2360... Loss: 1.2696... Val Loss: 1.3130\nEpoch: 18/20... Step: 2370... Loss: 1.2372... Val Loss: 1.3079\nEpoch: 18/20... Step: 2380... Loss: 1.2402... Val Loss: 1.3094\nEpoch: 18/20... Step: 2390... Loss: 1.2515... Val Loss: 1.3089\nEpoch: 18/20... Step: 2400... Loss: 1.2753... Val Loss: 1.3081\nEpoch: 18/20... Step: 2410... Loss: 1.2641... Val Loss: 1.3094\nEpoch: 18/20... Step: 2420... Loss: 1.2459... Val Loss: 1.3057\nEpoch: 18/20... Step: 2430... Loss: 1.2597... Val Loss: 1.3067\nEpoch: 18/20... Step: 2440... Loss: 1.2370... Val Loss: 1.3081\nEpoch: 18/20... Step: 2450... Loss: 1.2314... Val Loss: 1.3043\nEpoch: 18/20... Step: 2460... Loss: 1.2521... Val Loss: 1.3043\nEpoch: 18/20... Step: 2470... Loss: 1.2417... Val Loss: 1.3069\nEpoch: 18/20... Step: 2480... Loss: 1.2324... Val Loss: 1.3054\nEpoch: 18/20... Step: 2490... Loss: 1.2297... Val Loss: 1.3019\nEpoch: 18/20... Step: 2500... Loss: 1.2282... Val Loss: 1.3038\nEpoch: 19/20... Step: 2510... Loss: 1.2360... Val Loss: 1.3056\nEpoch: 19/20... Step: 2520... Loss: 1.2464... Val Loss: 1.3032\nEpoch: 19/20... Step: 2530... Loss: 1.2523... Val Loss: 1.2980\nEpoch: 19/20... Step: 2540... Loss: 1.2623... Val Loss: 1.3016\nEpoch: 19/20... Step: 2550... Loss: 1.2296... Val Loss: 1.3026\nEpoch: 19/20... Step: 2560... Loss: 1.2345... Val Loss: 1.2996\nEpoch: 19/20... Step: 2570... Loss: 1.2265... Val Loss: 1.2992\nEpoch: 19/20... Step: 2580... Loss: 1.2649... Val Loss: 1.2984\nEpoch: 19/20... Step: 2590... Loss: 1.2177... Val Loss: 1.2993\nEpoch: 19/20... Step: 2600... Loss: 1.2174... Val Loss: 1.2952\nEpoch: 19/20... Step: 2610... Loss: 1.2284... Val Loss: 1.2975\nEpoch: 19/20... Step: 2620... Loss: 1.2137... Val Loss: 1.2962\nEpoch: 19/20... Step: 2630... Loss: 1.2231... Val Loss: 1.2972\nEpoch: 19/20... Step: 2640... Loss: 1.2337... Val Loss: 1.2998\nEpoch: 20/20... Step: 2650... Loss: 1.2263... Val Loss: 1.2995\nEpoch: 20/20... Step: 2660... Loss: 1.2451... Val Loss: 1.2973\nEpoch: 20/20... Step: 2670... Loss: 1.2533... Val Loss: 1.2932\nEpoch: 20/20... Step: 2680... Loss: 1.2300... Val Loss: 1.2944\nEpoch: 20/20... Step: 2690... Loss: 1.2325... Val Loss: 1.2981\nEpoch: 20/20... Step: 2700... Loss: 1.2327... Val Loss: 1.2951\nEpoch: 20/20... Step: 2710... Loss: 1.2025... Val Loss: 1.2988\nEpoch: 20/20... Step: 2720... Loss: 1.2114... Val Loss: 1.2968\nEpoch: 20/20... Step: 2730... Loss: 1.2085... Val Loss: 1.2936\nEpoch: 20/20... Step: 2740... Loss: 1.2006... Val Loss: 1.2926\nEpoch: 20/20... Step: 2750... Loss: 1.2099... Val Loss: 1.2921\nEpoch: 20/20... Step: 2760... Loss: 1.2045... Val Loss: 1.2917\nEpoch: 20/20... Step: 2770... Loss: 1.2393... Val Loss: 1.2932\nEpoch: 20/20... Step: 2780... Loss: 1.2661... Val Loss: 1.2952\n"
]
],
[
[
"## Getting the best model\n\nTo set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting so you can increase the size of the network.",
"_____no_output_____"
],
[
"## Hyperparameters\n\nHere are the hyperparameters for the network.\n\nIn defining the model:\n* `n_hidden` - The number of units in the hidden layers.\n* `n_layers` - Number of hidden LSTM layers to use.\n\nWe assume that dropout probability and learning rate will be kept at the default, in this example.\n\nAnd in training:\n* `batch_size` - Number of sequences running through the network in one pass.\n* `seq_length` - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.\n* `lr` - Learning rate for training\n\nHere's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).\n\n> ## Tips and Tricks\n\n>### Monitoring Validation Loss vs. Training Loss\n>If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n\n> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\n> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n\n> ### Approximate number of parameters\n\n> The two most important parameters that control the model are `n_hidden` and `n_layers`. I would advise that you always use `n_layers` of either 2/3. The `n_hidden` can be adjusted based on how much data you have. The two important quantities to keep track of here are:\n\n> - The number of parameters in your model. This is printed when you start training.\n> - The size of your dataset. 1MB file is approximately 1 million characters.\n\n>These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n\n> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `n_hidden` larger.\n> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n\n> ### Best models strategy\n\n>The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). 
Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\n\n>It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\n\n>By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.",
"_____no_output_____"
],
[
"## Checkpoint\n\nAfter training, we'll save the model so we can load it again later if we need too. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters.",
"_____no_output_____"
]
],
[
[
"# change the name, for saving multiple files\nmodel_name = 'rnn_20_epoch.net'\n\ncheckpoint = {'n_hidden': net.n_hidden,\n 'n_layers': net.n_layers,\n 'state_dict': net.state_dict(),\n 'tokens': net.chars}\n\nwith open(model_name, 'wb') as f:\n torch.save(checkpoint, f)",
"_____no_output_____"
]
],
[
[
"---\n## Making Predictions\n\nNow that the model is trained, we'll want to sample from it and make predictions about next characters! To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text!\n\n### A note on the `predict` function\n\nThe output of our RNN is from a fully-connected layer and it outputs a **distribution of next-character scores**.\n\n> To actually get the next character, we apply a softmax function, which gives us a *probability* distribution that we can then sample to predict the next character.\n\n### Top K sampling\n\nOur predictions come from a categorical probability distribution over all the possible characters. We can make the sample text and make it more reasonable to handle (with less variables) by only considering some $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text. Read more about [topk, here](https://pytorch.org/docs/stable/torch.html#torch.topk).\n",
"_____no_output_____"
]
],
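[
[
"Below is a small, optional illustration of the top-$K$ idea on a made-up distribution. It is only a sketch (the probabilities and the choice of $K=3$ are invented for the example and are not part of the original tutorial): `topk` returns the $K$ largest probabilities and their indices, which we then renormalize and sample from - the same steps the `predict` function performs next.",
"_____no_output_____"
]
],
[
[
"# Optional sketch: top-k sampling over a made-up next-character distribution\nimport torch\nimport numpy as np\n\np = torch.tensor([0.05, 0.40, 0.10, 0.30, 0.15])  # dummy probabilities over 5 characters\ntop_p, top_ch = p.topk(3)                         # keep the 3 most probable characters\ntop_p, top_ch = top_p.numpy(), top_ch.numpy()\nsampled = np.random.choice(top_ch, p=top_p/top_p.sum())  # renormalize and sample one index\nprint(top_ch, sampled)",
"_____no_output_____"
]
],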
[
[
"def predict(net, char, h=None, top_k=None):\n ''' Given a character, predict the next character.\n Returns the predicted character and the hidden state.\n '''\n \n # tensor inputs\n x = np.array([[net.char2int[char]]])\n x = one_hot_encode(x, len(net.chars))\n inputs = torch.from_numpy(x)\n \n if(train_on_gpu):\n inputs = inputs.cuda()\n \n # detach hidden state from history\n h = tuple([each.data for each in h])\n # get the output of the model\n out, h = net(inputs, h)\n\n # get the character probabilities\n p = F.softmax(out, dim=1).data\n if(train_on_gpu):\n p = p.cpu() # move to cpu\n \n # get top characters\n if top_k is None:\n top_ch = np.arange(len(net.chars))\n else:\n p, top_ch = p.topk(top_k)\n top_ch = top_ch.numpy().squeeze()\n \n # select the likely next character with some element of randomness\n p = p.numpy().squeeze()\n char = np.random.choice(top_ch, p=p/p.sum())\n \n # return the encoded value of the predicted char and the hidden state\n return net.int2char[char], h",
"_____no_output_____"
]
],
[
[
"### Priming and generating text \n\nTypically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from.",
"_____no_output_____"
]
],
[
[
"def sample(net, size, prime='The', top_k=None):\n \n if(train_on_gpu):\n net.cuda()\n else:\n net.cpu()\n \n net.eval() # eval mode\n \n # First off, run through the prime characters\n chars = [ch for ch in prime]\n h = net.init_hidden(1)\n for ch in prime:\n char, h = predict(net, ch, h, top_k=top_k)\n\n chars.append(char)\n \n # Now pass in the previous character and get a new one\n for ii in range(size):\n char, h = predict(net, chars[-1], h, top_k=top_k)\n chars.append(char)\n\n return ''.join(chars)",
"_____no_output_____"
],
[
"print(sample(net, 1000, prime='Anna', top_k=5))",
"Anna had so that an enter strength to be says off and he cared to be an unmarrely sister.\n\nThe children are saying in a place. A smile of their secretary and the sense of a condition. He saw that the princess was the same, the peaciting of his\nbriderous country second still. That she had seen him a little as it was the simminest that he had not been\nthe simple of\nthe passion to see his finger, and\nhis brother and the points he heard this place which\nhe was not\nsense. All had sent him that he could he concealed the steps and that he was to be patied,\nso much at hands, at the servants who had said something with the\nchair.\n \"This is a solitat matter?\"\n\n\"It's not thinking in the more the point is and that he's talking of the drinking of the\ncrain. If I was a memory. Have you\nseen my to thousard more\ncharacteribries, and this, and would be the framing of the most careful towards me, to the country too that they did nothind when she could not see him. What is\nit you want a conviluated more to mo\n"
]
],
[
[
"## Loading a checkpoint",
"_____no_output_____"
]
],
[
[
"# Here we have loaded in a model that trained over 20 epochs `rnn_20_epoch.net`\nwith open('rnn_20_epoch.net', 'rb') as f:\n checkpoint = torch.load(f)\n \nloaded = CharRNN(checkpoint['tokens'], n_hidden=checkpoint['n_hidden'], n_layers=checkpoint['n_layers'])\nloaded.load_state_dict(checkpoint['state_dict'])",
"_____no_output_____"
],
[
"# Sample using a loaded model\nprint(sample(loaded, 2000, top_k=5, prime=\"And Levin said\"))",
"And Levin said those second portryit on the contrast.\n\n\"What is it?\" said Stepan Arkadyevitch,\nletting up his\nshirt and talking to her face. And he had\nnot speak to Levin that his head on the round stop and\ntrouble\nto be faint, as he\nwas not a man who was said, she was the setter times that had been before so much talking in the steps of the door, his force to think of their sense of the sendence, both always bowing about in the country and the same time of her character and all at him with his face, and went out of her hand, sitting down beside\nthe clothes, and\nthe\nsame\nsingle mind and when they seemed to a strange of his\nbrother's.\n\nAnd he\nwas so meched the paints was so standing the man had been a love was the man, and stopped at once in the first step. But he was\na change to\ndo. The sound of the partice say a construnting his\nsteps and telling a single camp of the\nready and three significance of the same forest.\n\n\"Yes, but you see it.\" He carried his face and the condition in their carriage to her, and to go, she\nsaid that had been talking of his forest, a strange world, when Levin came the conversation as sense of her son, and he could not see him to hive answer, which had been saking when at tomere within the\ncounting her face that he was serenely from her she took a counting, there\nwas the since he\nhad too wearted and seemed to her,\" said the member of the cannors in the steps to his\nword.\n\nThe moss of the convincing it had been drawing up the people that there was nothing without this way or a single wife as he did not hear\nhim or that he was not seeing that she would be a court of the sound of some sound of the position, and to spartly she\ncould\nsee her and a sundroup times there was nothing this\nfather and as she stoop serious in the sound, was a steps of the master, a few sistersily play of his husband. The crowd had no carreated herself, and truets, and shaking up, the pases, and the moment that he was not at the marshal, and the starling the secret were stopping to be\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0529d646162d08b07fabeebb9625f4fde66275c | 52,958 | ipynb | Jupyter Notebook | 1 - Sequence to Sequence Learning with Neural Networks.ipynb | RCXD/pytorch-seq2seq | d37d9a2c28488063d8d34713c28eb69b72705176 | [
"MIT"
] | 3,856 | 2018-07-17T13:35:41.000Z | 2022-03-31T15:53:32.000Z | 1 - Sequence to Sequence Learning with Neural Networks.ipynb | RCXD/pytorch-seq2seq | d37d9a2c28488063d8d34713c28eb69b72705176 | [
"MIT"
] | 177 | 2018-11-01T21:51:38.000Z | 2022-03-30T10:48:21.000Z | 1 - Sequence to Sequence Learning with Neural Networks.ipynb | RCXD/pytorch-seq2seq | d37d9a2c28488063d8d34713c28eb69b72705176 | [
"MIT"
] | 1,066 | 2018-10-08T09:02:58.000Z | 2022-03-31T15:54:32.000Z | 48.319343 | 915 | 0.609124 | [
[
[
"# 1 - Sequence to Sequence Learning with Neural Networks\n\nIn this series we'll be building a machine learning model to go from once sequence to another, using PyTorch and torchtext. This will be done on German to English translations, but the models can be applied to any problem that involves going from one sequence to another, such as summarization, i.e. going from a sequence to a shorter sequence in the same language.\n\nIn this first notebook, we'll start simple to understand the general concepts by implementing the model from the [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) paper. \n\n## Introduction\n\nThe most common sequence-to-sequence (seq2seq) models are *encoder-decoder* models, which commonly use a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single vector. In this notebook, we'll refer to this single vector as a *context vector*. We can think of the context vector as being an abstract representation of the entire input sentence. This vector is then *decoded* by a second RNN which learns to output the target (output) sentence by generating it one word at a time.\n\n\n\nThe above image shows an example translation. The input/source sentence, \"guten morgen\", is passed through the embedding layer (yellow) and then input into the encoder (green). We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of sentence, respectively. At each time-step, the input to the encoder RNN is both the embedding, $e$, of the current word, $e(x_t)$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder RNN outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. The RNN can be represented as a function of both of $e(x_t)$ and $h_{t-1}$:\n\n$$h_t = \\text{EncoderRNN}(e(x_t), h_{t-1})$$\n\nWe're using the term RNN generally here, it could be any recurrent architecture, such as an *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit). \n\nHere, we have $X = \\{x_1, x_2, ..., x_T\\}$, where $x_1 = \\text{<sos>}, x_2 = \\text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.\n\nOnce the final word, $x_T$, has been passed into the RNN via the embedding layer, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.\n\nNow we have our context vector, $z$, we can start decoding it to get the output/target sentence, \"good morning\". Again, we append start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder RNN (blue) is the embedding, $d$, of current word, $d(y_t)$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. 
Thus, similar to the encoder, we can represent the decoder as:\n\n$$s_t = \\text{DecoderRNN}(d(y_t), s_{t-1})$$\n\nAlthough the input/source embedding layer, $e$, and the output/target embedding layer, $d$, are both shown in yellow in the diagram they are two different embedding layers with their own parameters.\n\nIn the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\\hat{y}_t$. \n\n$$\\hat{y}_t = f(s_t)$$\n\nThe words in the decoder are always generated one after another, with one per time-step. We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$ and sometimes use the word predicted by our decoder, $\\hat{y}_{t-1}$. This is called *teacher forcing*, see a bit more info about it [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/). \n\nWhen training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference it is common to keep generating words until the model outputs an `<eos>` token or after a certain amount of words have been generated.\n\nOnce we have our predicted target sentence, $\\hat{Y} = \\{ \\hat{y}_1, \\hat{y}_2, ..., \\hat{y}_T \\}$, we compare it against our actual target sentence, $Y = \\{ y_1, y_2, ..., y_T \\}$, to calculate our loss. We then use this loss to update all of the parameters in our model.\n\n## Preparing Data\n\nWe'll be coding up the models in PyTorch and using torchtext to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data.",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\nimport torch.optim as optim\n\nfrom torchtext.legacy.datasets import Multi30k\nfrom torchtext.legacy.data import Field, BucketIterator\n\nimport spacy\nimport numpy as np\n\nimport random\nimport math\nimport time",
"_____no_output_____"
]
],
[
[
"We'll set the random seeds for deterministic results.",
"_____no_output_____"
]
],
[
[
"SEED = 1234\n\nrandom.seed(SEED)\nnp.random.seed(SEED)\ntorch.manual_seed(SEED)\ntorch.cuda.manual_seed(SEED)\ntorch.backends.cudnn.deterministic = True",
"_____no_output_____"
]
],
[
[
"Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. \"good morning!\" becomes [\"good\", \"morning\", \"!\"]. We'll start talking about the sentences being a sequence of tokens from now, instead of saying they're a sequence of words. What's the difference? Well, \"good\" and \"morning\" are both words and tokens, but \"!\" is a token, not a word. \n\nspaCy has model for each language (\"de_core_news_sm\" for German and \"en_core_web_sm\" for English) which need to be loaded so we can access the tokenizer of each model. \n\n**Note**: the models must first be downloaded using the following on the command line: \n```\npython -m spacy download en_core_web_sm\npython -m spacy download de_core_news_sm\n```\n\nWe load the models as such:",
"_____no_output_____"
]
],
[
[
"spacy_de = spacy.load('de_core_news_sm')\nspacy_en = spacy.load('en_core_web_sm')",
"_____no_output_____"
]
],
[
[
"Next, we create the tokenizer functions. These can be passed to torchtext and will take in the sentence as a string and return the sentence as a list of tokens.\n\nIn the paper we are implementing, they find it beneficial to reverse the order of the input which they believe \"introduces many short term dependencies in the data that make the optimization problem much easier\". We copy this by reversing the German sentence after it has been transformed into a list of tokens.",
"_____no_output_____"
]
],
[
[
"def tokenize_de(text):\n \"\"\"\n Tokenizes German text from a string into a list of strings (tokens) and reverses it\n \"\"\"\n return [tok.text for tok in spacy_de.tokenizer(text)][::-1]\n\ndef tokenize_en(text):\n \"\"\"\n Tokenizes English text from a string into a list of strings (tokens)\n \"\"\"\n return [tok.text for tok in spacy_en.tokenizer(text)]",
"_____no_output_____"
]
],
[
[
"torchtext's `Field`s handle how data should be processed. All of the possible arguments are detailed [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61). \n\nWe set the `tokenize` argument to the correct tokenization function for each, with German being the `SRC` (source) field and English being the `TRG` (target) field. The field also appends the \"start of sequence\" and \"end of sequence\" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase.",
"_____no_output_____"
]
],
[
[
"SRC = Field(tokenize = tokenize_de, \n init_token = '<sos>', \n eos_token = '<eos>', \n lower = True)\n\nTRG = Field(tokenize = tokenize_en, \n init_token = '<sos>', \n eos_token = '<eos>', \n lower = True)",
"/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/field.py:150: UserWarning: Field class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.\n warnings.warn('{} class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.'.format(self.__class__.__name__), UserWarning)\n"
]
],
[
[
"Next, we download and load the train, validation and test data. \n\nThe dataset we'll be using is the [Multi30k dataset](https://github.com/multi30k/dataset). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence. \n\n`exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target.",
"_____no_output_____"
]
],
[
[
"train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'), \n fields = (SRC, TRG))",
"/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/example.py:78: UserWarning: Example class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.\n warnings.warn('Example class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.', UserWarning)\n"
]
],
[
[
"We can double check that we've loaded the right number of examples:",
"_____no_output_____"
]
],
[
[
"print(f\"Number of training examples: {len(train_data.examples)}\")\nprint(f\"Number of validation examples: {len(valid_data.examples)}\")\nprint(f\"Number of testing examples: {len(test_data.examples)}\")",
"Number of training examples: 29000\nNumber of validation examples: 1014\nNumber of testing examples: 1000\n"
]
],
[
[
"We can also print out an example, making sure the source sentence is reversed:",
"_____no_output_____"
]
],
[
[
"print(vars(train_data.examples[0]))",
"{'src': ['.', 'büsche', 'vieler', 'nähe', 'der', 'in', 'freien', 'im', 'sind', 'männer', 'weiße', 'junge', 'zwei'], 'trg': ['two', 'young', ',', 'white', 'males', 'are', 'outside', 'near', 'many', 'bushes', '.']}\n"
]
],
[
[
"The period is at the beginning of the German (src) sentence, so it looks like the sentence has been correctly reversed.\n\nNext, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer). The vocabularies of the source and target languages are distinct.\n\nUsing the `min_freq` argument, we only allow tokens that appear at least 2 times to appear in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token.\n\nIt is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents \"information leakage\" into our model, giving us artifically inflated validation/test scores.",
"_____no_output_____"
]
],
[
[
"SRC.build_vocab(train_data, min_freq = 2)\nTRG.build_vocab(train_data, min_freq = 2)",
"_____no_output_____"
],
[
"print(f\"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}\")\nprint(f\"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}\")",
"Unique tokens in source (de) vocabulary: 7853\nUnique tokens in target (en) vocabulary: 5893\n"
]
],
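[
[
"As an optional sanity check (a sketch, not part of the original tutorial): `itos` maps an integer index back to a token and `stoi` maps a token to its index, which is exactly the \"numericalization\" the iterators below rely on. The concrete indices depend on the dataset, so treat the printed values purely as illustrative.",
"_____no_output_____"
]
],
[
[
"# Optional sketch: how the vocabularies map between tokens and integer indices\nprint(SRC.vocab.itos[:10])        # a few source tokens, including the special tokens\nprint(TRG.vocab.stoi['<pad>'])    # the integer index used for target-side padding\nprint(TRG.vocab.stoi['two'])      # an ordinary target word mapped to its index",
"_____no_output_____"
]
],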
[
[
"The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary. \n\nWe also need to define a `torch.device`. This is used to tell torchText to put the tensors on the GPU or not. We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator.\n\nWhen we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, torchText iterators handle this for us! \n\nWe use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences. ",
"_____no_output_____"
]
],
[
[
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')",
"_____no_output_____"
],
[
"BATCH_SIZE = 128\n\ntrain_iterator, valid_iterator, test_iterator = BucketIterator.splits(\n (train_data, valid_data, test_data), \n batch_size = BATCH_SIZE, \n device = device)",
"/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/iterator.py:48: UserWarning: BucketIterator class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.\n warnings.warn('{} class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.'.format(self.__class__.__name__), UserWarning)\n"
]
],
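[
[
"Before defining the models, it can help to peek at a single batch from the iterator to see the tensor layout the models below assume: the sequence-length dimension comes first and the batch dimension second. This is an optional check and not part of the original tutorial.",
"_____no_output_____"
]
],
[
[
"# Optional sketch: inspect the shape of one batch from the training iterator\nbatch = next(iter(train_iterator))\nprint(batch.src.shape)  # [src len, batch size]\nprint(batch.trg.shape)  # [trg len, batch size]",
"_____no_output_____"
]
],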
[
[
"## Building the Seq2Seq Model\n\nWe'll be building our model in three parts. The encoder, the decoder and a seq2seq model that encapsulates the encoder and decoder and will provide a way to interface with each.\n\n### Encoder\n\nFirst, the encoder, a 2 layer LSTM. The paper we are implementing uses a 4-layer LSTM, but in the interest of training time we cut this down to 2-layers. The concept of multi-layer RNNs is easy to expand from 2 to 4 layers. \n\nFor a multi-layer RNN, the input sentence, $X$, after being embedded goes into the first (bottom) layer of the RNN and hidden states, $H=\\{h_1, h_2, ..., h_T\\}$, output by this layer are used as inputs to the RNN in the layer above. Thus, representing each layer with a superscript, the hidden states in the first layer are given by:\n\n$$h_t^1 = \\text{EncoderRNN}^1(e(x_t), h_{t-1}^1)$$\n\nThe hidden states in the second layer are given by:\n\n$$h_t^2 = \\text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$$\n\nUsing a multi-layer RNN also means we'll also need an initial hidden state as input per layer, $h_0^l$, and we will also output a context vector per layer, $z^l$.\n\nWithout going into too much detail about LSTMs (see [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) blog post to learn more about them), all we need to know is that they're a type of RNN which instead of just taking in a hidden state and returning a new hidden state per time-step, also take in and return a *cell state*, $c_t$, per time-step.\n\n$$\\begin{align*}\nh_t &= \\text{RNN}(e(x_t), h_{t-1})\\\\\n(h_t, c_t) &= \\text{LSTM}(e(x_t), h_{t-1}, c_{t-1})\n\\end{align*}$$\n\nWe can just think of $c_t$ as another type of hidden state. Similar to $h_0^l$, $c_0^l$ will be initialized to a tensor of all zeros. Also, our context vector will now be both the final hidden state and the final cell state, i.e. $z^l = (h_T^l, c_T^l)$.\n\nExtending our multi-layer equations to LSTMs, we get:\n\n$$\\begin{align*}\n(h_t^1, c_t^1) &= \\text{EncoderLSTM}^1(e(x_t), (h_{t-1}^1, c_{t-1}^1))\\\\\n(h_t^2, c_t^2) &= \\text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2))\n\\end{align*}$$\n\nNote how only our hidden state from the first layer is passed as input to the second layer, and not the cell state.\n\nSo our encoder looks something like this: \n\n\n\nWe create this in code by making an `Encoder` module, which requires we inherit from `torch.nn.Module` and use the `super().__init__()` as some boilerplate code. The encoder takes the following arguments:\n- `input_dim` is the size/dimensionality of the one-hot vectors that will be input to the encoder. This is equal to the input (source) vocabulary size.\n- `emb_dim` is the dimensionality of the embedding layer. This layer converts the one-hot vectors into dense vectors with `emb_dim` dimensions. \n- `hid_dim` is the dimensionality of the hidden and cell states.\n- `n_layers` is the number of layers in the RNN.\n- `dropout` is the amount of dropout to use. This is a regularization parameter to prevent overfitting. Check out [this](https://www.coursera.org/lecture/deep-neural-network/understanding-dropout-YaGbR) for more details about dropout.\n\nWe aren't going to discuss the embedding layer in detail during these tutorials. All we need to know is that there is a step before the words - technically, the indexes of the words - are passed into the RNN, where the words are transformed into vectors. 
To read more about word embeddings, check these articles: [1](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/), [2](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html), [3](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), [4](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/). \n\nThe embedding layer is created using `nn.Embedding`, the LSTM with `nn.LSTM` and a dropout layer with `nn.Dropout`. Check the PyTorch [documentation](https://pytorch.org/docs/stable/nn.html) for more about these.\n\nOne thing to note is that the `dropout` argument to the LSTM is how much dropout to apply between the layers of a multi-layer RNN, i.e. between the hidden states output from layer $l$ and those same hidden states being used for the input of layer $l+1$.\n\nIn the `forward` method, we pass in the source sentence, $X$, which is converted into dense vectors using the `embedding` layer, and then dropout is applied. These embeddings are then passed into the RNN. As we pass a whole sequence to the RNN, it will automatically do the recurrent calculation of the hidden states over the whole sequence for us! Notice that we do not pass an initial hidden or cell state to the RNN. This is because, as noted in the [documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM), that if no hidden/cell state is passed to the RNN, it will automatically create an initial hidden/cell state as a tensor of all zeros. \n\nThe RNN returns: `outputs` (the top-layer hidden state for each time-step), `hidden` (the final hidden state for each layer, $h_T$, stacked on top of each other) and `cell` (the final cell state for each layer, $c_T$, stacked on top of each other).\n\nAs we only need the final hidden and cell states (to make our context vector), `forward` only returns `hidden` and `cell`. \n\nThe sizes of each of the tensors is left as comments in the code. In this implementation `n_directions` will always be 1, however note that bidirectional RNNs (covered in tutorial 3) will have `n_directions` as 2.",
"_____no_output_____"
]
],
[
[
"class Encoder(nn.Module):\n def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):\n super().__init__()\n \n self.hid_dim = hid_dim\n self.n_layers = n_layers\n \n self.embedding = nn.Embedding(input_dim, emb_dim)\n \n self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)\n \n self.dropout = nn.Dropout(dropout)\n \n def forward(self, src):\n \n #src = [src len, batch size]\n \n embedded = self.dropout(self.embedding(src))\n \n #embedded = [src len, batch size, emb dim]\n \n outputs, (hidden, cell) = self.rnn(embedded)\n \n #outputs = [src len, batch size, hid dim * n directions]\n #hidden = [n layers * n directions, batch size, hid dim]\n #cell = [n layers * n directions, batch size, hid dim]\n \n #outputs are always from the top hidden layer\n \n return hidden, cell",
"_____no_output_____"
]
],
[
[
"### Decoder\n\nNext, we'll build our decoder, which will also be a 2-layer (4 in the paper) LSTM.\n\n\n\nThe `Decoder` class does a single step of decoding, i.e. it ouputs single token per time-step. The first layer will receive a hidden and cell state from the previous time-step, $(s_{t-1}^1, c_{t-1}^1)$, and feeds it through the LSTM with the current embedded token, $y_t$, to produce a new hidden and cell state, $(s_t^1, c_t^1)$. The subsequent layers will use the hidden state from the layer below, $s_t^{l-1}$, and the previous hidden and cell states from their layer, $(s_{t-1}^l, c_{t-1}^l)$. This provides equations very similar to those in the encoder.\n\n$$\\begin{align*}\n(s_t^1, c_t^1) = \\text{DecoderLSTM}^1(d(y_t), (s_{t-1}^1, c_{t-1}^1))\\\\\n(s_t^2, c_t^2) = \\text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))\n\\end{align*}$$\n\nRemember that the initial hidden and cell states to our decoder are our context vectors, which are the final hidden and cell states of our encoder from the same layer, i.e. $(s_0^l,c_0^l)=z^l=(h_T^l,c_T^l)$.\n\nWe then pass the hidden state from the top layer of the RNN, $s_t^L$, through a linear layer, $f$, to make a prediction of what the next token in the target (output) sequence should be, $\\hat{y}_{t+1}$. \n\n$$\\hat{y}_{t+1} = f(s_t^L)$$\n\nThe arguments and initialization are similar to the `Encoder` class, except we now have an `output_dim` which is the size of the vocabulary for the output/target. There is also the addition of the `Linear` layer, used to make the predictions from the top layer hidden state.\n\nWithin the `forward` method, we accept a batch of input tokens, previous hidden states and previous cell states. As we are only decoding one token at a time, the input tokens will always have a sequence length of 1. We `unsqueeze` the input tokens to add a sentence length dimension of 1. Then, similar to the encoder, we pass through an embedding layer and apply dropout. This batch of embedded tokens is then passed into the RNN with the previous hidden and cell states. This produces an `output` (hidden state from the top layer of the RNN), a new `hidden` state (one for each layer, stacked on top of each other) and a new `cell` state (also one per layer, stacked on top of each other). We then pass the `output` (after getting rid of the sentence length dimension) through the linear layer to receive our `prediction`. We then return the `prediction`, the new `hidden` state and the new `cell` state.\n\n**Note**: as we always have a sequence length of 1, we could use `nn.LSTMCell`, instead of `nn.LSTM`, as it is designed to handle a batch of inputs that aren't necessarily in a sequence. `nn.LSTMCell` is just a single cell and `nn.LSTM` is a wrapper around potentially multiple cells. Using the `nn.LSTMCell` in this case would mean we don't have to `unsqueeze` to add a fake sequence length dimension, but we would need one `nn.LSTMCell` per layer in the decoder and to ensure each `nn.LSTMCell` receives the correct initial hidden state from the encoder. All of this makes the code less concise - hence the decision to stick with the regular `nn.LSTM`.",
"_____no_output_____"
]
],
[
[
"class Decoder(nn.Module):\n def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):\n super().__init__()\n \n self.output_dim = output_dim\n self.hid_dim = hid_dim\n self.n_layers = n_layers\n \n self.embedding = nn.Embedding(output_dim, emb_dim)\n \n self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)\n \n self.fc_out = nn.Linear(hid_dim, output_dim)\n \n self.dropout = nn.Dropout(dropout)\n \n def forward(self, input, hidden, cell):\n \n #input = [batch size]\n #hidden = [n layers * n directions, batch size, hid dim]\n #cell = [n layers * n directions, batch size, hid dim]\n \n #n directions in the decoder will both always be 1, therefore:\n #hidden = [n layers, batch size, hid dim]\n #context = [n layers, batch size, hid dim]\n \n input = input.unsqueeze(0)\n \n #input = [1, batch size]\n \n embedded = self.dropout(self.embedding(input))\n \n #embedded = [1, batch size, emb dim]\n \n output, (hidden, cell) = self.rnn(embedded, (hidden, cell))\n \n #output = [seq len, batch size, hid dim * n directions]\n #hidden = [n layers * n directions, batch size, hid dim]\n #cell = [n layers * n directions, batch size, hid dim]\n \n #seq len and n directions will always be 1 in the decoder, therefore:\n #output = [1, batch size, hid dim]\n #hidden = [n layers, batch size, hid dim]\n #cell = [n layers, batch size, hid dim]\n \n prediction = self.fc_out(output.squeeze(0))\n \n #prediction = [batch size, output dim]\n \n return prediction, hidden, cell",
"_____no_output_____"
]
],
[
[
"### Seq2Seq\n\nFor the final part of the implemenetation, we'll implement the seq2seq model. This will handle: \n- receiving the input/source sentence\n- using the encoder to produce the context vectors \n- using the decoder to produce the predicted output/target sentence\n\nOur full model will look like this:\n\n\n\nThe `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists).\n\nFor this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case, we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we did something like having a different number of layers then we would need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the decoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? Etc.\n\nOur `forward` method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded, $\\hat{y}_{t+1}=f(s_t^L)$. With probability equal to the teaching forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence. \n\nThe first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\\hat{Y}$.\n\nWe then feed the input/source sentence, `src`, into the encoder and receive out final hidden and cell states.\n\nThe first input to the decoder is the start of sequence (`<sos>`) token. As our `trg` tensor already has the `<sos>` token appended (all the way back when we defined the `init_token` in our `TRG` field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`max_len`), so we loop that many times. The last token input into the decoder is the one **before** the `<eos>` token - the `<eos>` token is never input into the decoder. \n\nDuring each iteration of the loop, we:\n- pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder\n- receive a prediction, next hidden state and next cell state ($\\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder\n- place our prediction, $\\hat{y}_{t+1}$/`output` in our tensor of predictions, $\\hat{Y}$/`outputs`\n- decide if we are going to \"teacher force\" or not\n - if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]`\n - if we don't, the next `input` is the predicted next token in the sequence, $\\hat{y}_{t+1}$/`top1`, which we get by doing an `argmax` over the output tensor\n \nOnce we've made all of our predictions, we return our tensor full of predictions, $\\hat{Y}$/`outputs`.\n\n**Note**: our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. 
So our `trg` and `outputs` look something like:\n\n$$\\begin{align*}\n\\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [0, &\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$\n\nLater on when we calculate the loss, we cut off the first element of each tensor to get:\n\n$$\\begin{align*}\n\\text{trg} = [&y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [&\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$",
"_____no_output_____"
]
],
[
[
"class Seq2Seq(nn.Module):\n def __init__(self, encoder, decoder, device):\n super().__init__()\n \n self.encoder = encoder\n self.decoder = decoder\n self.device = device\n \n assert encoder.hid_dim == decoder.hid_dim, \\\n \"Hidden dimensions of encoder and decoder must be equal!\"\n assert encoder.n_layers == decoder.n_layers, \\\n \"Encoder and decoder must have equal number of layers!\"\n \n def forward(self, src, trg, teacher_forcing_ratio = 0.5):\n \n #src = [src len, batch size]\n #trg = [trg len, batch size]\n #teacher_forcing_ratio is probability to use teacher forcing\n #e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time\n \n batch_size = trg.shape[1]\n trg_len = trg.shape[0]\n trg_vocab_size = self.decoder.output_dim\n \n #tensor to store decoder outputs\n outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)\n \n #last hidden state of the encoder is used as the initial hidden state of the decoder\n hidden, cell = self.encoder(src)\n \n #first input to the decoder is the <sos> tokens\n input = trg[0,:]\n \n for t in range(1, trg_len):\n \n #insert input token embedding, previous hidden and previous cell states\n #receive output tensor (predictions) and new hidden and cell states\n output, hidden, cell = self.decoder(input, hidden, cell)\n \n #place predictions in a tensor holding predictions for each token\n outputs[t] = output\n \n #decide if we are going to use teacher forcing or not\n teacher_force = random.random() < teacher_forcing_ratio\n \n #get the highest predicted token from our predictions\n top1 = output.argmax(1) \n \n #if teacher forcing, use actual next token as next input\n #if not, use predicted token\n input = trg[t] if teacher_force else top1\n \n return outputs",
"_____no_output_____"
]
],
[
[
"# Training the Seq2Seq Model\n\nNow we have our model implemented, we can begin training it. \n\nFirst, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. The embedding dimesions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same. \n\nWe then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`.",
"_____no_output_____"
]
],
[
[
"INPUT_DIM = len(SRC.vocab)\nOUTPUT_DIM = len(TRG.vocab)\nENC_EMB_DIM = 256\nDEC_EMB_DIM = 256\nHID_DIM = 512\nN_LAYERS = 2\nENC_DROPOUT = 0.5\nDEC_DROPOUT = 0.5\n\nenc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)\ndec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)\n\nmodel = Seq2Seq(enc, dec, device).to(device)",
"_____no_output_____"
]
],
[
[
"Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\\mathcal{U}(-0.08, 0.08)$.\n\nWe initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`.",
"_____no_output_____"
]
],
[
[
"def init_weights(m):\n for name, param in m.named_parameters():\n nn.init.uniform_(param.data, -0.08, 0.08)\n \nmodel.apply(init_weights)",
"_____no_output_____"
]
],
[
[
"We also define a function that will calculate the number of trainable parameters in the model.",
"_____no_output_____"
]
],
[
[
"def count_parameters(model):\n return sum(p.numel() for p in model.parameters() if p.requires_grad)\n\nprint(f'The model has {count_parameters(model):,} trainable parameters')",
"The model has 13,898,501 trainable parameters\n"
]
],
[
[
"We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam.",
"_____no_output_____"
]
],
[
[
"optimizer = optim.Adam(model.parameters())",
"_____no_output_____"
]
],
[
[
"Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions. \n\nOur loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token. ",
"_____no_output_____"
]
],
[
[
"TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]\n\ncriterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)",
"_____no_output_____"
]
],
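[
[
"As a small, self-contained illustration of `ignore_index` (an optional sketch, not part of the original tutorial): positions whose target equals the ignored index contribute nothing to the loss, so padded positions do not affect the average. The padding index used below is made up just for this demo; the real one comes from `TRG.vocab.stoi[TRG.pad_token]` above.",
"_____no_output_____"
]
],
[
[
"# Optional sketch: targets equal to ignore_index are skipped when averaging the loss\nimport torch\nimport torch.nn as nn\n\npad_idx = 1                                       # hypothetical padding index, just for this demo\ndemo_criterion = nn.CrossEntropyLoss(ignore_index = pad_idx)\nlogits = torch.randn(4, 10)                       # 4 target positions, vocabulary of size 10\ntargets = torch.tensor([3, 7, pad_idx, pad_idx])  # the last two positions are padding\nprint(demo_criterion(logits, targets))            # averaged over the two non-padded positions only",
"_____no_output_____"
]
],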
[
[
"Next, we'll define our training loop. \n\nFirst, we'll set the model into \"training mode\" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator.\n\nAs stated before, our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:\n\n$$\\begin{align*}\n\\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [0, &\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$\n\nHere, when we calculate the loss, we cut off the first element of each tensor to get:\n\n$$\\begin{align*}\n\\text{trg} = [&y_1, y_2, y_3, <eos>]\\\\\n\\text{outputs} = [&\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align*}$$\n\nAt each iteration:\n- get the source and target sentences from the batch, $X$ and $Y$\n- zero the gradients calculated from the last batch\n- feed the source and target into the model to get the output, $\\hat{Y}$\n- as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view`\n - we slice off the first column of the output and target tensors as mentioned above\n- calculate the gradients with `loss.backward()`\n- clip the gradients to prevent them from exploding (a common issue in RNNs)\n- update the parameters of our model by doing an optimizer step\n- sum the loss value to a running total\n\nFinally, we return the loss that is averaged over all batches.",
"_____no_output_____"
]
],
[
[
"def train(model, iterator, optimizer, criterion, clip):\n \n model.train()\n \n epoch_loss = 0\n \n for i, batch in enumerate(iterator):\n \n src = batch.src\n trg = batch.trg\n \n optimizer.zero_grad()\n \n output = model(src, trg)\n \n #trg = [trg len, batch size]\n #output = [trg len, batch size, output dim]\n \n output_dim = output.shape[-1]\n \n output = output[1:].view(-1, output_dim)\n trg = trg[1:].view(-1)\n \n #trg = [(trg len - 1) * batch size]\n #output = [(trg len - 1) * batch size, output dim]\n \n loss = criterion(output, trg)\n \n loss.backward()\n \n torch.nn.utils.clip_grad_norm_(model.parameters(), clip)\n \n optimizer.step()\n \n epoch_loss += loss.item()\n \n return epoch_loss / len(iterator)",
"_____no_output_____"
]
],
[
[
"Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.\n\nWe must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used).\n\nWe use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up. \n\nThe iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use it's own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment.",
"_____no_output_____"
]
],
[
[
"def evaluate(model, iterator, criterion):\n \n model.eval()\n \n epoch_loss = 0\n \n with torch.no_grad():\n \n for i, batch in enumerate(iterator):\n\n src = batch.src\n trg = batch.trg\n\n output = model(src, trg, 0) #turn off teacher forcing\n\n #trg = [trg len, batch size]\n #output = [trg len, batch size, output dim]\n\n output_dim = output.shape[-1]\n \n output = output[1:].view(-1, output_dim)\n trg = trg[1:].view(-1)\n\n #trg = [(trg len - 1) * batch size]\n #output = [(trg len - 1) * batch size, output dim]\n\n loss = criterion(output, trg)\n \n epoch_loss += loss.item()\n \n return epoch_loss / len(iterator)",
"_____no_output_____"
]
],
[
[
"Next, we'll create a function that we'll use to tell us how long an epoch takes.",
"_____no_output_____"
]
],
[
[
"def epoch_time(start_time, end_time):\n elapsed_time = end_time - start_time\n elapsed_mins = int(elapsed_time / 60)\n elapsed_secs = int(elapsed_time - (elapsed_mins * 60))\n return elapsed_mins, elapsed_secs",
"_____no_output_____"
]
],
[
[
"We can finally start training our model!\n\nAt each epoch, we'll be checking if our model has achieved the best validation loss so far. If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters used to achieve the best validation loss. \n\nWe'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.",
"_____no_output_____"
]
],
[
[
"N_EPOCHS = 10\nCLIP = 1\n\nbest_valid_loss = float('inf')\n\nfor epoch in range(N_EPOCHS):\n \n start_time = time.time()\n \n train_loss = train(model, train_iterator, optimizer, criterion, CLIP)\n valid_loss = evaluate(model, valid_iterator, criterion)\n \n end_time = time.time()\n \n epoch_mins, epoch_secs = epoch_time(start_time, end_time)\n \n if valid_loss < best_valid_loss:\n best_valid_loss = valid_loss\n torch.save(model.state_dict(), 'tut1-model.pt')\n \n print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')\n print(f'\\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')\n print(f'\\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')",
"/home/ben/miniconda3/envs/pytorch17/lib/python3.8/site-packages/torchtext-0.9.0a0+c38fd42-py3.8-linux-x86_64.egg/torchtext/data/batch.py:23: UserWarning: Batch class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.\n warnings.warn('{} class will be retired soon and moved to torchtext.legacy. Please see the most recent release notes for further information.'.format(self.__class__.__name__), UserWarning)\n"
]
],
[
[
"We'll load the parameters (`state_dict`) that gave our model the best validation loss and run it the model on the test set.",
"_____no_output_____"
]
],
[
[
"model.load_state_dict(torch.load('tut1-model.pt'))\n\ntest_loss = evaluate(model, test_iterator, criterion)\n\nprint(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')",
"| Test Loss: 3.951 | Test PPL: 52.001 |\n"
]
],
[
[
"In the following notebook we'll implement a model that achieves improved test perplexity, but only uses a single layer in the encoder and the decoder.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d052d9a0fe20d2786e257528a524ddb257f39c14 | 238,778 | ipynb | Jupyter Notebook | content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb | scapape/ibm-quantum-challenge-fall-2021 | fe6099e3af18ef2f5598ac4b835751874c9960f3 | [
"Apache-2.0"
] | null | null | null | content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb | scapape/ibm-quantum-challenge-fall-2021 | fe6099e3af18ef2f5598ac4b835751874c9960f3 | [
"Apache-2.0"
] | null | null | null | content/challenge-3/.ipynb_checkpoints/challenge-3-checkpoint.ipynb | scapape/ibm-quantum-challenge-fall-2021 | fe6099e3af18ef2f5598ac4b835751874c9960f3 | [
"Apache-2.0"
] | null | null | null | 153.554984 | 66,892 | 0.87111 | [
[
[
"## IBM Quantum Challenge Fall 2021\n\n# Challenge 3: Classify images with quantum machine learning\n\n<div class=\"alert alert-block alert-info\">\n \nWe recommend that you switch to **light** workspace theme under the Account menu in the upper right corner for optimal experience.",
"_____no_output_____"
],
[
"## Introduction\n\nMachine learning is a technology that has attracted a great deal of attention due to its high performance and versatility. In fact, it has been put to practical use in many industries with the recent development of algorithms and the increase of computational resources. A typical example is computer vision, where machine learning is now able to classify images with the same or better accuracy than humans. For example, the ability to automatically classify clothing images has made online shopping for clothes more convenient.\n\nThe application of quantum computation to machine learning has recently been shown to have the potential for even greater capabilities. Various algorithms have been proposed for quantum machine learning, such as the quantum support vector machine (QSVM) and quantum generative adversarial networks (QGANs). In this challenge, you will use QSVM to tackle the clothing image classification task.\n\nQSVM is a quantum version of the support vector machine (SVM), a classical machine learning algorithm. There are various approaches to QSVM, some aim to accelerate computation assuming fault-tolerant quantum computers, while others aim to achieve higher expressive power assuming noisy, near-term devices. In this challenge, we will focus on the latter, and the details will be explained later.\n\nFor this implementation of QSVM, you will be able to make choices on how you want to compose your quantum model, in particular focusing on the quantum feature map. This is motivated by the tradeoff that a more complex feature map would have greater representation power but be more susceptible to noise, which could be especially critical when using noisy, near-term devices.\n\nMany of the concepts that appear in this challenge are explained in the 2021 Qiskit Global Summer School (QGSS). The materials and lecture videos are available, and it is recommended that you study them as well. Refer to the links in each part for the corresponding lectures.\n\n<center><img src=\"./resources/ecommerce.jpg\" width=\"640\" /></center>",
"_____no_output_____"
],
[
"## Challenge\n<div class=\"alert alert-block alert-success\">\n\n**Goal**\n\nImplement a QSVM model for multiclass classification and predict labels accurately. \n \n**Plan**\n\nFirst, you will learn how to construct QSVM for binary classification of a simple dataset. Then apply what you have learned to a more complex problem, 3-class classification of a different dataset.\n\n**1. Tutorial - QSVM for binary classification of MNIST:** familiarize yourself with a typical workflow for QSVM and find the best combination of dimentions/feature maps.\n\n**2. Challenge - QSVM for 3-class classification of Fashion-MNIST:** implement a 3-class classifier using binary QSVM classifers. Perform similar investigation as in the first part to find the best combination of dimentions/feature maps. Achieve better accuracy with smaller feature map circuits.\n\n</div>\n\n<div class=\"alert alert-block alert-info\">\n\nBefore you begin, we recommend watching the [**Qiskit Machine Learning Demo Session with Anton Dekusar**](https://youtu.be/claoY57eVIc?t=1814) and check out the corresponding [**demo notebook**](https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/tree/main/qiskit-machine-learning) to learn how to do classifications using QSVM.\n\n</div>",
"_____no_output_____"
]
],
[
[
"# General imports\nimport os\nimport gzip\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom pylab import cm\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\n\n# scikit-learn imports\nfrom sklearn import datasets\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\nfrom sklearn.decomposition import PCA\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import accuracy_score\n\n# Qiskit imports\nfrom qiskit import Aer, execute\nfrom qiskit.circuit import QuantumCircuit, Parameter, ParameterVector\nfrom qiskit.circuit.library import PauliFeatureMap, ZFeatureMap, ZZFeatureMap\nfrom qiskit.circuit.library import TwoLocal, NLocal, RealAmplitudes, EfficientSU2\nfrom qiskit.circuit.library import HGate, RXGate, RYGate, RZGate, CXGate, CRXGate, CRZGate\nfrom qiskit_machine_learning.kernels import QuantumKernel",
"_____no_output_____"
]
],
[
[
"## Part 1: Tutorial - QSVM for binary classification of MNIST\n\nIn this part, you will apply QSVM to the binary classification of handwritten numbers 4 and 9. Through this tutorial, you will learn the workflow of applying QSVM to binary classification. Find better combinations and achieve higher accuracy.\n\nRelated QGSS material:\n- [**Lab 3**](https://www.youtube.com/watch?v=GVhCOTzAkCM&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=17)\n\n### 1. Data preparation\n\nThe data we are going to work with at the beginning is a small subset of the well known handwritten digits dataset, which is available publicly. We will be aiming to differentiate between '4' and '9'. \n\nThere are a total of 100 data in the dataset. Of these, eighty are labeled training data, and the remaining twenty are unlabeled test data. Each data is a 28x28 image of a digit, collapsed into an array, where each element is an integer between 0 (white) and 255 (black). To use the dataset for quantum classification, we need to scale the range to between -1 and 1, and reduce the dimensionality to the number of qubits we want to use (here N_DIM=5).",
"_____no_output_____"
]
],
[
[
"# Load MNIST dataset\nDATA_PATH = './resources/ch3_part1.npz'\ndata = np.load(DATA_PATH)\n\nsample_train = data['sample_train']\nlabels_train = data['labels_train']\nsample_test = data['sample_test']\n\n# Split train data\nsample_train, sample_val, labels_train, labels_val = train_test_split(\n sample_train, labels_train, test_size=0.2, random_state=42)\n\n# Visualize samples\nfig = plt.figure()\n\nLABELS = [4, 9]\nnum_labels = len(LABELS)\nfor i in range(num_labels):\n ax = fig.add_subplot(1, num_labels, i+1)\n img = sample_train[labels_train==LABELS[i]][0].reshape((28, 28))\n ax.imshow(img, cmap=\"Greys\")",
"_____no_output_____"
],
[
"# Standardize\nss = StandardScaler()\nsample_train = ss.fit_transform(sample_train)\nsample_val = ss.transform(sample_val)\nsample_test = ss.transform(sample_test)\n\n# Reduce dimensions\nN_DIM = 5\npca = PCA(n_components=N_DIM)\nsample_train = pca.fit_transform(sample_train)\nsample_val = pca.transform(sample_val)\nsample_test = pca.transform(sample_test)\n\n# Normalize\nmms = MinMaxScaler((-1, 1))\nsample_train = mms.fit_transform(sample_train)\nsample_val = mms.transform(sample_val)\nsample_test = mms.transform(sample_test)",
"_____no_output_____"
]
],
[
[
"### 2. Data Encoding\n\nWe will take the classical data and encode it to the quantum state space using a quantum feature map. The choice of which feature map to use is important and may depend on the given dataset we want to classify. Here we'll look at the feature maps available in Qiskit, before selecting and customising one to encode our data.\n\n### 2.1 Quantum Feature Maps\nAs the name suggests, a quantum feature map $\\phi(\\mathbf{x})$ is a map from the classical feature vector $\\mathbf{x}$ to the quantum state $|\\Phi(\\mathbf{x})\\rangle\\langle\\Phi(\\mathbf{x})|$. This is facilitated by applying the unitary operation $\\mathcal{U}_{\\Phi(\\mathbf{x})}$ on the initial state $|0\\rangle^{n}$ where _n_ is the number of qubits being used for encoding.\n\nThe following feature maps currently available in Qiskit are those introduced in [**_Havlicek et al_. Nature **567**, 209-212 (2019)**](https://www.nature.com/articles/s41586-019-0980-2), in particular the `ZZFeatureMap` is conjectured to be hard to simulate classically and can be implemented as short-depth circuits on near-term quantum devices.\n\n- [**`PauliFeatureMap`**](https://qiskit.org/documentation/stubs/qiskit.circuit.library.PauliFeatureMap.html)\n- [**`ZZFeatureMap`**](https://qiskit.org/documentation/stubs/qiskit.circuit.library.ZFeatureMap.html)\n- [**`ZFeatureMap`**](https://qiskit.org/documentation/stubs/qiskit.circuit.library.ZZFeatureMap.html)\n\nThe `PauliFeatureMap` is defined as:\n\n```python\nPauliFeatureMap(feature_dimension=None, reps=2, \n entanglement='full', paulis=None, \n data_map_func=None, parameter_prefix='x',\n insert_barriers=False)\n```\n\nand describes the unitary operator of depth $d$:\n\n$$ \\mathcal{U}_{\\Phi(\\mathbf{x})}=\\prod_d U_{\\Phi(\\mathbf{x})}H^{\\otimes n},\\ U_{\\Phi(\\mathbf{x})}=\\exp\\left(i\\sum_{S\\subseteq[n]}\\phi_S(\\mathbf{x})\\prod_{k\\in S} P_i\\right), $$\n\nwhich contains layers of Hadamard gates interleaved with entangling blocks, $U_{\\Phi(\\mathbf{x})}$, encoding the classical data as shown in circuit diagram below for $d=2$.\n\n<center><img src=\"./resources/featuremap.png\" width=\"1000\" /></center>\n\nWithin the entangling blocks, $U_{\\Phi(\\mathbf{x})}$: $P_i \\in \\{ I, X, Y, Z \\}$ denotes the Pauli matrices, the index $S$ describes connectivities between different qubits or datapoints: $S \\in \\{\\binom{n}{k}\\ combinations,\\ k = 1,... n \\}$, and by default the data mapping function $\\phi_S(\\mathbf{x})$ is \n$$\\phi_S:\\mathbf{x}\\mapsto \\Bigg\\{\\begin{array}{ll}\n x_i & \\mbox{if}\\ S=\\{i\\} \\\\\n (\\pi-x_i)(\\pi-x_j) & \\mbox{if}\\ S=\\{i,j\\}\n \\end{array}$$\n\nwhen $k = 1, P_0 = Z$, this is the `ZFeatureMap`: \n$$\\mathcal{U}_{\\Phi(\\mathbf{x})} = \\left( \\exp\\left(i\\sum_j \\phi_{\\{j\\}}(\\mathbf{x}) \\, Z_j\\right) \\, H^{\\otimes n} \\right)^d.$$\n\nwhich is defined as:\n```python\nZFeatureMap(feature_dimension, reps=2, \n data_map_func=None, insert_barriers=False)\n```",
"_____no_output_____"
]
],
[
[
"# 3 features, depth 2\nmap_z = ZFeatureMap(feature_dimension=3, reps=2)\nmap_z.decompose().draw('mpl')",
"/Users/scapape/miniconda3/envs/qiskit_env/lib/python3.8/site-packages/sympy/core/expr.py:2451: SymPyDeprecationWarning: \n\nexpr_free_symbols method has been deprecated since SymPy 1.9. See\nhttps://github.com/sympy/sympy/issues/21494 for more info.\n\n SymPyDeprecationWarning(feature=\"expr_free_symbols method\",\n"
]
],
[
[
"Note the lack of entanglement in this feature map, this means that this feature map is simple to simulate classically and will not provide quantum advantage.\n\nand when $k = 2, P_0 = Z, P_1 = ZZ$, this is the `ZZFeatureMap`: \n$$\\mathcal{U}_{\\Phi(\\mathbf{x})} = \\left( \\exp\\left(i\\sum_{jk} \\phi_{\\{j,k\\}}(\\mathbf{x}) \\, Z_j \\otimes Z_k\\right) \\, \\exp\\left(i\\sum_j \\phi_{\\{j\\}}(\\mathbf{x}) \\, Z_j\\right) \\, H^{\\otimes n} \\right)^d.$$ \n\nwhich is defined as:\n```python\nZZFeatureMap(feature_dimension, reps=2, \n entanglement='full', data_map_func=None, \n insert_barriers=False)\n```",
"_____no_output_____"
]
],
[
[
"# 3 features, depth 1, linear entanglement\nmap_zz = ZZFeatureMap(feature_dimension=3, reps=1, entanglement='linear')\nmap_zz.decompose().draw('mpl')",
"_____no_output_____"
]
],
[
[
"Note that there is entanglement in the feature map, we can define the entanglement map:",
"_____no_output_____"
]
],
[
[
"# 3 features, depth 1, circular entanglement\nmap_zz = ZZFeatureMap(feature_dimension=3, reps=1, entanglement='circular')\nmap_zz.decompose().draw('mpl')",
"_____no_output_____"
]
],
[
[
"We can customise the Pauli gates in the feature map, for example, $P_0 = X, P_1 = Y, P_2 = ZZ$:\n$$\\mathcal{U}_{\\Phi(\\mathbf{x})} = \\left( \\exp\\left(i\\sum_{jk} \\phi_{\\{j,k\\}}(\\mathbf{x}) \\, Z_j \\otimes Z_k\\right) \\, \\exp\\left(i\\sum_{j} \\phi_{\\{j\\}}(\\mathbf{x}) \\, Y_j\\right) \\, \\exp\\left(i\\sum_j \\phi_{\\{j\\}}(\\mathbf{x}) \\, X_j\\right) \\, H^{\\otimes n} \\right)^d.$$ ",
"_____no_output_____"
]
],
[
[
"# 3 features, depth 1\nmap_pauli = PauliFeatureMap(feature_dimension=3, reps=1, paulis = ['X', 'Y', 'ZZ'])\nmap_pauli.decompose().draw('mpl')",
"_____no_output_____"
]
],
[
[
"The [`NLocal`](https://qiskit.org/documentation/stubs/qiskit.circuit.library.NLocal.html) and [`TwoLocal`](https://qiskit.org/documentation/stubs/qiskit.circuit.library.TwoLocal.html) functions in Qiskit's circuit library can also be used to create parameterised quantum circuits as feature maps. \n\n```python\nTwoLocal(num_qubits=None, reps=3, rotation_blocks=None, \n entanglement_blocks=None, entanglement='full', \n skip_unentangled_qubits=False, \n skip_final_rotation_layer=False, \n parameter_prefix='θ', insert_barriers=False, \n initial_state=None)\n```\n\n```python\nNLocal(num_qubits=None, reps=1, rotation_blocks=None, \n entanglement_blocks=None, entanglement=None, \n skip_unentangled_qubits=False, \n skip_final_rotation_layer=False, \n overwrite_block_parameters=True, \n parameter_prefix='θ', insert_barriers=False, \n initial_state=None, name='nlocal')\n```\n\nBoth functions create parameterised circuits of alternating rotation and entanglement layers. In both layers, parameterised circuit-blocks act on the circuit in a defined way. In the rotation layer, the blocks are applied stacked on top of each other, while in the entanglement layer according to the entanglement strategy. Each layer is repeated a number of times, and by default a final rotation layer is appended.\n\nIn `NLocal`, the circuit blocks can have arbitrary sizes (smaller equal to the number of qubits in the circuit), while in `TwoLocal`, the rotation layers are single qubit gates applied on all qubits and the entanglement layer uses two-qubit gates.\n\nFor example, here is a `TwoLocal` circuit, with $R_y$ and $R_Z$ gates in the rotation layer and $CX$ gates in the entangling layer with circular entanglement:",
"_____no_output_____"
]
],
[
[
"twolocal = TwoLocal(num_qubits=3, reps=2, rotation_blocks=['ry','rz'], \n entanglement_blocks='cx', entanglement='circular', insert_barriers=True)\ntwolocal.decompose().draw('mpl')",
"_____no_output_____"
]
],
[
[
"and the equivalent `NLocal` circuit:",
"_____no_output_____"
]
],
[
[
"twolocaln = NLocal(num_qubits=3, reps=2,\n rotation_blocks=[RYGate(Parameter('a')), RZGate(Parameter('a'))], \n entanglement_blocks=CXGate(), \n entanglement='circular', insert_barriers=True)\ntwolocaln.decompose().draw('mpl')",
"_____no_output_____"
]
],
[
[
"Let's encode the first training sample using the `PauliFeatureMap`:",
"_____no_output_____"
]
],
[
[
"print(f'First training data: {sample_train[0]}')",
"First training data: [-0.47556122 -0.42255871 0.30059177 0.00509783 -0.89471914]\n"
],
[
"encode_map = PauliFeatureMap(feature_dimension=N_DIM, reps=1, paulis = ['X', 'Y', 'ZZ'])\nencode_circuit = encode_map.bind_parameters(sample_train[0])\nencode_circuit.decompose().draw(output='mpl')",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-block alert-success\">\n\n**Challenge 3a**\n\nConstruct a feature map to encode a 5-dimensionally embedded data, using 'ZZFeatureMap' with 3 repetitions, 'circular' entanglement and the rest as default.\n \n</div>\n\nSubmission format:\n```python\nex3a_fmap = ZZFeatureMap(...)\n```",
"_____no_output_____"
]
],
[
[
"##############################\n# Provide your code here\n\n\nex3a_fmap = ZZFeatureMap(feature_dimension=N_DIM,\n reps=3, \n entanglement='circular',\n data_map_func=None, \n insert_barriers=False)\n\n\n##############################",
"_____no_output_____"
],
[
"# Check your answer and submit using the following code\nfrom qc_grader import grade_ex3a\ngrade_ex3a(ex3a_fmap)",
"_____no_output_____"
]
],
[
[
"### 2.2 Quantum Kernel Estimation\n\nA quantum feature map, $\\phi(\\mathbf{x})$, naturally gives rise to a quantum kernel, $k(\\mathbf{x}_i,\\mathbf{x}_j)= \\phi(\\mathbf{x}_j)^\\dagger\\phi(\\mathbf{x}_i)$, which can be seen as a measure of similarity: $k(\\mathbf{x}_i,\\mathbf{x}_j)$ is large when $\\mathbf{x}_i$ and $\\mathbf{x}_j$ are close. \n\nWhen considering finite data, we can represent the quantum kernel as a matrix: \n$K_{ij} = \\left| \\langle \\phi^\\dagger(\\mathbf{x}_j)| \\phi(\\mathbf{x}_i) \\rangle \\right|^{2}$. We can calculate each element of this kernel matrix on a quantum computer by calculating the transition amplitude:\n$$\n\\left| \\langle \\phi^\\dagger(\\mathbf{x}_j)| \\phi(\\mathbf{x}_i) \\rangle \\right|^{2} = \n\\left| \\langle 0^{\\otimes n} | \\mathbf{U_\\phi^\\dagger}(\\mathbf{x}_j) \\mathbf{U_\\phi}(\\mathbf{x_i}) | 0^{\\otimes n} \\rangle \\right|^{2}\n$$\nassuming the feature map is a parameterized quantum circuit, which can be described as a unitary transformation $\\mathbf{U_\\phi}(\\mathbf{x})$ on $n$ qubits. \n\nThis provides us with an estimate of the quantum kernel matrix, which we can then use in a kernel machine learning algorithm, such as support vector classification.\n\nAs discussed in [***Havlicek et al*. Nature 567, 209-212 (2019)**](https://www.nature.com/articles/s41586-019-0980-2), quantum kernel machine algorithms only have the potential of quantum advantage over classical approaches if the corresponding quantum kernel is hard to estimate classically. \n\nAs we will see later, the hardness of estimating the kernel with classical resources is of course only a necessary and not always sufficient condition to obtain a quantum advantage. \n\nHowever, it was proven recently in [***Liu et al.* arXiv:2010.02174 (2020)**](https://arxiv.org/abs/2010.02174) that learning problems exist for which learners with access to quantum kernel methods have a quantum advantage over all classical learners.\n\nWith our training and testing datasets ready, we set up the `QuantumKernel` class with the PauliFeatureMap, and use the `BasicAer` `statevector_simulator` to estimate the training and testing kernel matrices.",
"_____no_output_____"
]
],
[
[
"pauli_map = PauliFeatureMap(feature_dimension=N_DIM, reps=1, paulis = ['X', 'Y', 'ZZ'])\npauli_kernel = QuantumKernel(feature_map=pauli_map, quantum_instance=Aer.get_backend('statevector_simulator'))",
"_____no_output_____"
]
],
[
[
"Let's calculate the transition amplitude between the first and second training data samples, one of the entries in the training kernel matrix.",
"_____no_output_____"
]
],
[
[
"print(f'First training data : {sample_train[0]}')\nprint(f'Second training data: {sample_train[1]}')",
"_____no_output_____"
]
],
[
[
"First we create and draw the circuit:",
"_____no_output_____"
]
],
[
[
"pauli_circuit = pauli_kernel.construct_circuit(sample_train[0], sample_train[1])\npauli_circuit.decompose().decompose().draw(output='mpl')",
"_____no_output_____"
]
],
[
[
"The parameters in the gates are a little difficult to read, but notice how the circuit is symmetrical, with one half encoding one of the data samples, the other half encoding the other. \n\nWe then simulate the circuit. We will use the `qasm_simulator` since the circuit contains measurements, but increase the number of shots to reduce the effect of sampling noise. ",
"_____no_output_____"
]
],
[
[
"backend = Aer.get_backend('qasm_simulator')\njob = execute(pauli_circuit, backend, shots=8192, \n seed_simulator=1024, seed_transpiler=1024)\ncounts = job.result().get_counts(pauli_circuit)",
"_____no_output_____"
],
[
"counts['0'*N_DIM]\ncounts",
"_____no_output_____"
]
],
[
[
"The transition amplitude is the proportion of counts in the zero state:",
"_____no_output_____"
]
],
[
[
"print(f\"Transition amplitude: {counts['0'*N_DIM]/sum(counts.values())}\")",
"_____no_output_____"
]
],
[
[
"This process is then repeated for each pair of training data samples to fill in the training kernel matrix, and between each training and testing data sample to fill in the testing kernel matrix. Note that each matrix is symmetric, so to reduce computation time, only half the entries are calculated explicitly. \n\nHere we compute and plot the training and testing kernel matrices:",
"_____no_output_____"
]
],
[
[
"matrix_train = pauli_kernel.evaluate(x_vec=sample_train)\nmatrix_val = pauli_kernel.evaluate(x_vec=sample_val, y_vec=sample_train)\n\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(np.asmatrix(matrix_train),\n interpolation='nearest', origin='upper', cmap='Blues')\naxs[0].set_title(\"training kernel matrix\")\naxs[1].imshow(np.asmatrix(matrix_val),\n interpolation='nearest', origin='upper', cmap='Reds')\naxs[1].set_title(\"validation kernel matrix\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"</div>\n \n<div class=\"alert alert-block alert-success\">\n\n**Challenge 3b**\n\nCalculate the transition amplitude between $x = (-0.5, -0.4, 0.3, 0, -0.9)$ and $y = (0, -0.7, -0.3, 0, -0.4)$ using the 'ZZFeatureMap' with 3 repetitions, 'circular' entanglement and the rest as default. Use the 'qasm_simulator' with 'shots=8192', 'seed_simulator=1024' and 'seed_transpiler=1024'.\n \n</div>",
"_____no_output_____"
]
],
[
[
"sample_train[0]\nnp.array([-0.5,-0.4,0.3,0,-0.9])",
"_____no_output_____"
],
[
"x = [-0.5, -0.4, 0.3, 0, -0.9]\ny = [0, -0.7, -0.3, 0, -0.4]\n\n##############################\n# Provide your code here\n\npauli_map = ZZFeatureMap(feature_dimension=N_DIM,\n reps=3, \n entanglement='circular',\n data_map_func=None, \n insert_barriers=False)\npauli_kernel = QuantumKernel(feature_map=pauli_map, quantum_instance=Aer.get_backend('statevector_simulator'))\npauli_circuit = pauli_kernel.construct_circuit(x, y)\nbackend = Aer.get_backend('qasm_simulator')\njob = execute(pauli_circuit, backend, shots=8192, \n seed_simulator=1024, seed_transpiler=1024)\ncounts = job.result().get_counts(pauli_circuit)\n\nex3b_amp = counts['0'*N_DIM]/sum(counts.values())\n\n\n##############################",
"_____no_output_____"
],
[
"# Check your answer and submit using the following code\nfrom qc_grader import grade_ex3b\ngrade_ex3b(ex3b_amp)",
"_____no_output_____"
]
],
[
[
"Related QGSS materials:\n- [**Kernel Trick (Lecture 6.1)**](https://www.youtube.com/watch?v=m6EzmYsEOiI&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=14)\n- [**Kernel Trick (Lecture 6.2)**](https://www.youtube.com/watch?v=zw3JYUrS-v8&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=15)",
"_____no_output_____"
],
[
"### 2.3 Quantum Support Vector Machine (QSVM)\n\nIntroduced in [***Havlicek et al*. Nature 567, 209-212 (2019)**](https://www.nature.com/articles/s41586-019-0980-2), the quantum kernel support vector classification algorithm consists of these steps:\n\n\n<center><img src=\"./resources/qsvc.png\" width=\"1000\"></center> \n\n1. Build the train and test quantum kernel matrices.\n 1. For each pair of datapoints in the training dataset $\\mathbf{x}_{i},\\mathbf{x}_j$, apply the feature map and measure the transition probability: $ K_{ij} = \\left| \\langle 0 | \\mathbf{U}^\\dagger_{\\Phi(\\mathbf{x_j})} \\mathbf{U}_{\\Phi(\\mathbf{x_i})} | 0 \\rangle \\right|^2 $.\n 2. For each training datapoint $\\mathbf{x_i}$ and testing point $\\mathbf{y_j}$, apply the feature map and measure the transition probability: $ K_{ij} = \\left| \\langle 0 | \\mathbf{U}^\\dagger_{\\Phi(\\mathbf{y_j})} \\mathbf{U}_{\\Phi(\\mathbf{x_i})} | 0 \\rangle \\right|^2 $.\n2. Use the train and test quantum kernel matrices in a classical support vector machine classification algorithm.\n\nThe `scikit-learn` `svc` algorithm allows us to [**define a custom kernel**](https://scikit-learn.org/stable/modules/svm.html#custom-kernels) in two ways: by providing the kernel as a callable function or by precomputing the kernel matrix. We can do either of these using the `QuantumKernel` class in Qiskit.\n\nThe following code takes the training and testing kernel matrices we calculated earlier and provides them to the `scikit-learn` `svc` algorithm:",
"_____no_output_____"
]
],
[
[
"pauli_svc = SVC(kernel='precomputed')\npauli_svc.fit(matrix_train, labels_train)\npauli_score = pauli_svc.score(matrix_val, labels_val)\n\nprint(f'Precomputed kernel classification test score: {pauli_score*100}%')",
"_____no_output_____"
]
],
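[
[
"As mentioned above, `scikit-learn`'s `SVC` can also be given the kernel as a callable instead of a precomputed matrix. Below is a minimal sketch of that alternative; the names `callable_map`, `callable_kernel` and `callable_svc` are purely illustrative, and the snippet assumes the part 1 variables (`N_DIM`, `sample_train`, `labels_train`, `sample_val`, `labels_val`) defined earlier. `SVC` will then call `QuantumKernel.evaluate` internally whenever it needs kernel entries.\n\n```python\n# Build a kernel (same construction as the precomputed example) and hand its\n# evaluate method to SVC as a callable kernel.\ncallable_map = PauliFeatureMap(feature_dimension=N_DIM, reps=1, paulis=['X', 'Y', 'ZZ'])\ncallable_kernel = QuantumKernel(feature_map=callable_map, quantum_instance=Aer.get_backend('statevector_simulator'))\n\ncallable_svc = SVC(kernel=callable_kernel.evaluate)\ncallable_svc.fit(sample_train, labels_train)\nprint(f'Callable kernel classification test score: {callable_svc.score(sample_val, labels_val)*100}%')\n```",
"_____no_output_____"
]
],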
[
[
"Related QGSS materials:\n- [**Classical SVM (Lecture 4.2)**](https://www.youtube.com/watch?v=lpPij21jnZ4&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=9)\n- [**Quantum Classifier (Lecture 5.1)**](https://www.youtube.com/watch?v=-sxlXNz7ZxU&list=PLOFEBzvs-VvqJwybFxkTiDzhf5E11p8BI&index=11)",
"_____no_output_____"
],
[
"## Part 2: Challenge - QSVM for 3-class classification of Fashion-MNIST\n\nIn this part, you will use what your have learned so far to implement 3-class classification of clothing images and work on improving its accuracy. \n \n<div class=\"alert alert-block alert-success\">\n\n**Challenge 3c**\n\n**Goal**: Implement a 3-class classifier using QSVM and achieve 70% accuracy on clothing image dataset with smaller feature map circuits.\n\n**Dataset**: Fashion-MNIST clothing image dataset. There are following three dataset in this challnge. \n- Train: Both images and labels are given.\n- Public test: Images are given and labels are hidden.\n- Private test: Both images and labels are hidden.\n \nGrading will be performed on both public test and private test data. The purpose of this is to make sure that quantum methods are used, so that cheating is not possible.\n \n</div>\n\n### How to implement a multi-class classifier using binary classifiers\n\nSo far, you have learned how to implement binary classification with QSVM. Now, how can you scale it up to multi-class classification? There are two approaches to do so. One is the One-vs-Rest approach, and the other is the One-vs-One approach.\n\n1. One-vs-Rest: In this approach, multi-class classification is achieved by combining classifiers for each class that classifies the class as positive and the others as negative. Since one classifier is required for each class, the total number of classifiers required for N-class classification is N. The advantage is that fewer classifiers are needed, and the disadvantage is that the labels are likely to be imbalanced in each classification.\n2. One-vs-One: In this approach, multi-class classification is achieved by combining classifiers for each pair of two classes, where one is positive and the other is negative. Since one classifier is required for each label pair, the total number of classifiers required for N-class classification is N(N-1)/2. The advantage is that labels are less likely to be imbalanced in each classification, and the disadvantage is that the number of classifiers required is larger.\n\nBoth approaches can be used to solve this problem, but here you will be given hints based on the One-vs-Rest approach. Please follow the hints to solve it.\n\n<center><img src=\"./resources/onevsrest.png\" width=\"800\"></center>\n\nFigure via [cc.gatech.edu](https://www.cc.gatech.edu/classes/AY2016/cs4476_fall/results/proj4/html/jnanda3/index.html)\n\n### 1. Data preparation\nThe data we are working with here is a small subset of clothing image dataset called Fashion-MNIST, which is a variant of the MNIST dataset. We aim to classify the following labels.\n- label 0: T-shirt/top\n- label 2: pullover\n- label 3: dress\n\nFirst, let's load the dataset and display one image for each class.",
"_____no_output_____"
]
],
[
[
"# Load MNIST dataset\nDATA_PATH = './resources/ch3_part2.npz'\ndata = np.load(DATA_PATH)\n\nsample_train = data['sample_train']\nlabels_train = data['labels_train']\nsample_test = data['sample_test']\n\n# Split train data\nsample_train, sample_val, labels_train, labels_val = train_test_split(\n sample_train, labels_train, test_size=0.2, random_state=42)\n\n# Visualize samples\nfig = plt.figure()\n\nLABELS = [0, 2, 3]\nnum_labels = len(LABELS)\nfor i in range(num_labels):\n ax = fig.add_subplot(1, num_labels, i+1)\n img = sample_train[labels_train==LABELS[i]][0].reshape((28, 28))\n ax.imshow(img, cmap=\"Greys\")",
"_____no_output_____"
]
],
[
[
"Then, preprocess the dataset in the same way as before.\n- Standardization\n- PCA\n- Normalization\n\nNote that you can change the number of features here by changing N_DIM.",
"_____no_output_____"
]
],
[
[
"# Standardize\nstandard_scaler = StandardScaler()\nsample_train = standard_scaler.fit_transform(sample_train)\nsample_val = standard_scaler.transform(sample_val)\nsample_test = standard_scaler.transform(sample_test)\n\n# Reduce dimensions\nN_DIM = 5\npca = PCA(n_components=N_DIM)\nsample_train = pca.fit_transform(sample_train)\nsample_val = pca.transform(sample_val)\nsample_test = pca.transform(sample_test)\n\n# Normalize\nmin_max_scaler = MinMaxScaler((-1, 1))\nsample_train = min_max_scaler.fit_transform(sample_train)\nsample_val = min_max_scaler.transform(sample_val)\nsample_test = min_max_scaler.transform(sample_test)",
"_____no_output_____"
]
],
[
[
"### 2. Modeling\nBased on the One-vs-Rest approach, you need to create the following three QSVM binary classifiers\n- the label 0 and the rest\n- the label 2 and the rest\n- the label 3 and the rest\n\nHere is the first one as a hint.\n\n### 2.1: Label 0 vs Rest\nCreate new labels with label 0 as positive(1) and the rest as negative(0) as follows.",
"_____no_output_____"
]
],
[
[
"labels_train_0 = np.where(labels_train==0, 1, 0)\nlabels_val_0 = np.where(labels_val==0, 1, 0)\n\nprint(f'Original validation labels: {labels_val}')\nprint(f'Validation labels for 0 vs Rest: {labels_val_0}')",
"Original validation labels: [3 3 2 0 3 0 3 2 3 2 2 3 2 2 2 3 0 2 3 3]\nValidation labels for 0 vs Rest: [0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0]\n"
]
],
[
[
"See only places where the original label was 0 are set to 1. \n\nNext, construct a binary classifier using QSVM as before. \nNote that PauliFeatureMap is used in this hint but you can use a different feature map.",
"_____no_output_____"
]
],
[
[
"pauli_map_0 = PauliFeatureMap(feature_dimension=N_DIM, reps=2, paulis = ['X', 'Y', 'ZZ'])\npauli_kernel_0 = QuantumKernel(feature_map=pauli_map_0, quantum_instance=Aer.get_backend('statevector_simulator'))\n\npauli_svc_0 = SVC(kernel='precomputed', probability=True)\n\nmatrix_train_0 = pauli_kernel_0.evaluate(x_vec=sample_train)\npauli_svc_0.fit(matrix_train_0, labels_train_0)\n\nmatrix_val_0 = pauli_kernel_0.evaluate(x_vec=sample_val, y_vec=sample_train)\npauli_score_0 = pauli_svc_0.score(matrix_val_0, labels_val_0)\nprint(f'Accuracy of discriminating between label 0 and others: {pauli_score_0*100}%')",
"Accuracy of discriminating between label 0 and others: 75.0%\n"
],
[
"# Var 1\nmap_0 = ZZFeatureMap(feature_dimension=N_DIM, reps=1, entanglement='linear')\nkernel_0 = QuantumKernel(feature_map=map_0, quantum_instance=Aer.get_backend('statevector_simulator'))\n\nsvc_0 = SVC(kernel='precomputed', probability=True)\n\nmatrix_train_0 = kernel_0.evaluate(x_vec=sample_train)\nsvc_0.fit(matrix_train_0, labels_train_0)\n\nmatrix_val_0 = pauli_kernel_0.evaluate(x_vec=sample_val, y_vec=sample_train)\npauli_score_0 = svc_0.score(matrix_val_0, labels_val_0)\nprint(f'Accuracy of discriminating between label 0 and others: {pauli_score_0*100}%')",
"Accuracy of discriminating between label 0 and others: 75.0%\n"
]
],
[
[
"You can see that the QSVM binary classifier is able to distinguish between label 0 and the rest with a reasonable probability.\n\nFinally, for each of the test data, calculate the probability that it has label 0. It can be obtained by ```predict_proba``` method.",
"_____no_output_____"
]
],
[
[
"matrix_test_0 = pauli_kernel_0.evaluate(x_vec=sample_test, y_vec=sample_train)\npred_0 = pauli_svc_0.predict_proba(matrix_test_0)[:, 1]\nprint(f'Probability of label 0: {np.round(pred_0, 2)}')",
"Probability of label 0: [0.31 0.32 0.25 0.46 0.21 0.3 0.24 0.23 0.34 0.51 0.38 0.3 0.22 0.26\n 0.41 0.49 0.38 0.47 0.33 0.22]\n"
]
],
[
[
"These probabilities are important clues for multiclass classification. \nObtain the probabilities for the remaining two labels in the same way.\n\n### 2.2: Label 2 vs Rest\nBuild a binary classifier using QSVM and get the probability of label 2 for test dataset.",
"_____no_output_____"
]
],
[
[
"labels_train_2 = np.where(labels_train==2, 1, 0)\nlabels_val_2 = np.where(labels_val==2, 1, 0)\n\nprint(f'Original validation labels: {labels_val}')\nprint(f'Validation labels for 2 vs Rest: {labels_val_2}')",
"Original validation labels: [3 3 2 0 3 0 3 2 3 2 2 3 2 2 2 3 0 2 3 3]\nValidation labels for 2 vs Rest: [0 0 1 0 0 0 0 1 0 1 1 0 1 1 1 0 0 1 0 0]\n"
],
[
"pauli_map_2 = PauliFeatureMap(feature_dimension=N_DIM, reps=2, paulis = ['X', 'Y', 'ZZ'])\npauli_kernel_2 = QuantumKernel(feature_map=pauli_map_2, quantum_instance=Aer.get_backend('statevector_simulator'))\n\npauli_svc_2 = SVC(kernel='precomputed', probability=True)\n\nmatrix_train_2 = pauli_kernel_2.evaluate(x_vec=sample_train)\npauli_svc_2.fit(matrix_train_2, labels_train_2)\n\nmatrix_val_2 = pauli_kernel_2.evaluate(x_vec=sample_val, y_vec=sample_train)\npauli_score_2 = pauli_svc_2.score(matrix_val_2, labels_val_2)\nprint(f'Accuracy of discriminating between label 2 and others: {pauli_score_2*100}%')",
"_____no_output_____"
],
[
"# Var 2\nmap_2 = ZZFeatureMap(feature_dimension=N_DIM, reps=1, entanglement='linear')\nkernel_2 = QuantumKernel(feature_map=map_2, quantum_instance=Aer.get_backend('statevector_simulator'))\n\nsvc_2 = SVC(kernel='precomputed', probability=True)\n\nmatrix_train_2 = kernel_2.evaluate(x_vec=sample_train)\nsvc_2.fit(matrix_train_2, labels_train_2)\n\nmatrix_val_2 = pauli_kernel_2.evaluate(x_vec=sample_val, y_vec=sample_train)\npauli_score_2 = svc_2.score(matrix_val_2, labels_val_2)\nprint(f'Accuracy of discriminating between label 2 and others: {pauli_score_2*100}%')",
"_____no_output_____"
],
[
"##############################\n# Provide your code here\n\n\nmatrix_test_2 = pauli_kernel_2.evaluate(x_vec=sample_test, y_vec=sample_train)\npred_2 = pauli_svc_2.predict_proba(matrix_test_2)[:, 1]\n\n\n##############################",
"_____no_output_____"
]
],
[
[
"### 2.3 Label 3 vs Rest\nBuild a binary classifier using QSVM and get the probability of label 3 for test dataset.",
"_____no_output_____"
]
],
[
[
"labels_train_3 = np.where(labels_train==3, 1, 0)\nlabels_val_3 = np.where(labels_val==3, 1, 0)\n\nprint(f'Original validation labels: {labels_val}')\nprint(f'Validation labels for 3 vs Rest: {labels_val_3}')",
"_____no_output_____"
],
[
"pauli_map_3 = PauliFeatureMap(feature_dimension=N_DIM, reps=2, paulis = ['X', 'Y', 'ZZ'])\npauli_kernel_3 = QuantumKernel(feature_map=pauli_map_3, quantum_instance=Aer.get_backend('statevector_simulator'))\n\npauli_svc_3 = SVC(kernel='precomputed', probability=True)\n\nmatrix_train_3 = pauli_kernel_3.evaluate(x_vec=sample_train)\npauli_svc_3.fit(matrix_train_3, labels_train_3)\n\nmatrix_val_3 = pauli_kernel_3.evaluate(x_vec=sample_val, y_vec=sample_train)\npauli_score_3 = pauli_svc_3.score(matrix_val_3, labels_val_3)\nprint(f'Accuracy of discriminating between label 3 and others: {pauli_score_3*100}%')",
"_____no_output_____"
],
[
"# Var 3\nmap_3 = ZZFeatureMap(feature_dimension=N_DIM, reps=1, entanglement='linear')\nkernel_3 = QuantumKernel(feature_map=map_3, quantum_instance=Aer.get_backend('statevector_simulator'))\n\nsvc_3 = SVC(kernel='precomputed', probability=True)\n\nmatrix_train_3 = kernel_3.evaluate(x_vec=sample_train)\nsvc_3.fit(matrix_train_3, labels_train_3)\n\nmatrix_val_3 = pauli_kernel_3.evaluate(x_vec=sample_val, y_vec=sample_train)\npauli_score_3 = svc_3.score(matrix_val_3, labels_val_3)\nprint(f'Accuracy of discriminating between label 3 and others: {pauli_score_3*100}%')",
"_____no_output_____"
],
[
"##############################\n# Provide your code here\n\n\nmatrix_test_3 = pauli_kernel_3.evaluate(x_vec=sample_test, y_vec=sample_train)\npred_3 = pauli_svc_3.predict_proba(matrix_test_3)[:, 1]\n\n\n##############################",
"_____no_output_____"
],
[
"print(f'Probability of label 0: {np.round(pred_0, 2)}')\nprint(f'Probability of label 2: {np.round(pred_2, 2)}')\nprint(f'Probability of label 3: {np.round(pred_3, 2)}')",
"_____no_output_____"
]
],
[
[
"### 3. Prediction\nLastly, make a final prediction based on the probability of each label. \nThe prediction you submit should be in the following format.",
"_____no_output_____"
]
],
[
[
"sample_pred = np.load('./resources/ch3_part2_sub.npy')\nprint(f'Sample prediction: {sample_pred}')",
"_____no_output_____"
]
],
[
[
"In order to understand the method to make predictions for multiclass classification, let's begin with the case of making predictions for just two labels, label 2 and label 3.\n\nIf probabilities are as follows for a certain data, label 2 should be considered the most plausible.\n- probability of label 2: 0.7\n- probability of label 3: 0.2\n\nYou can implement this with ```np.where``` function. (Of course, you can use different methods.)",
"_____no_output_____"
]
],
[
[
"pred_2_ex = np.array([0.7])\npred_3_ex = np.array([0.2])\n\npred_test_ex = np.where((pred_2_ex > pred_3_ex), 2, 3)\nprint(f'Prediction: {pred_test_ex}')",
"_____no_output_____"
]
],
[
[
"You can apply this method as is to multiple data.\n\nIf second data has probabilities for each label as follows, it should be classified as label 3.\n- probability of label 2: 0.1\n- probability of label 3: 0.6",
"_____no_output_____"
]
],
[
[
"pred_2_ex = np.array([0.7, 0.1])\npred_3_ex = np.array([0.2, 0.6])\n\npred_test_ex = np.where((pred_2_ex > pred_3_ex), 2, 3)\nprint(f'Prediction: {pred_test_ex}')",
"_____no_output_____"
]
],
[
[
"This method can be extended to make predictions for 3-class classification.\n\nImplement such an extended method and make the final 3-class predictions.",
"_____no_output_____"
]
],
[
[
"##############################\n# Provide your code here\n\npred_test = np.array([0 if ((pred_0[i] > pred_2[i]) & (pred_0[i] > pred_3[i]))\n else 2 if ((pred_2[i] > pred_0[i]) & (pred_2[i] > pred_3[i]))\n else 3 if ((pred_3[i] > pred_0[i]) & (pred_3[i] > pred_2[i]))\n else -1 for i in range(len(pred_0))])\n\n##############################",
"_____no_output_____"
],
[
"print(f'Original validation labels: {labels_val}')\nprint(f'Prediction: {pred_test}')",
"_____no_output_____"
]
],
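[
[
"An equivalent, and arguably more compact, way to combine the three One-vs-Rest probability vectors is to stack them and take the most confident class per sample with `np.argmax`. This is only an illustrative alternative to the explicit comparisons above (which map ties to -1); it assumes the `pred_0`, `pred_2` and `pred_3` arrays computed earlier, and `pred_test_argmax` is just an illustrative name.\n\n```python\n# Stack the per-class probabilities into shape (3, n_samples) and pick,\n# for each sample, the class whose one-vs-rest classifier is most confident.\nclass_labels = np.array([0, 2, 3])\nprobs = np.vstack([pred_0, pred_2, pred_3])\npred_test_argmax = class_labels[np.argmax(probs, axis=0)]\nprint(f'Prediction (argmax): {pred_test_argmax}')\n```",
"_____no_output_____"
]
],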
[
[
"### 4. Submission\n \n<div class=\"alert alert-block alert-success\">\n\n**Challenge 3c**\n\n**Submission**: Submit the following 11 items.\n- **pred_test**: prediction for the public test dataset\n- **sample_train**: train data used to obtain kernels\n- **standard_scaler**: the one used to standardize data\n- **pca**: the one used to reduce dimention\n- **min_max_scaler**: the one used to normalize data\n- **kernel_0**: the kernel for the \"label 0 vs rest\" classifier\n- **kernel_2**: the kernel for the \"label 2 vs rest\" classifier\n- **kernel_3**: the kernel for the \"label 3 vs rest\" classifier\n- **svc_0**: the SVC trained to classify \"label 0 vs rest\"\n- **svc_2**: the SVC trained to classify \"label 2 vs rest\"\n- **svc_3**: the SVC trained to classify \"label 3 vs rest\"\n\n**Criteria**: Accuracy of 70% or better on both public and private test data.\n\n**Score**: Solutions that pass the criteria will be scored as follows. The smaller this final score is, the better.\n1. Each feature map gets transpiled with:\n - basis_gates=['u1', 'u2', 'u3', 'cx']\n - optimization_level=0\n2. Calculate the cost for each transpiled circuit: \n cost = 10 * #cx + (#u1 + #u2 + #u3)\n3. The sum of the costs will be the final score.\n\n</div>\n\nAgain, the prediction you submit should be in the following format.\n- prediction for the public test data (**sample_test**)\n- type: numpy.ndarray\n- shape: (20,)",
"_____no_output_____"
]
],
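[
[
"To get a rough feel for the scoring formula before submitting, one can transpile a candidate feature map to the grader's basis and count the gates. The sketch below is only an approximation written for illustration, not the official grading code, and it assumes a feature map such as `map_0` defined above.\n\n```python\nfrom qiskit import transpile\n\n# Transpile one candidate feature map to the grader's basis and count gates.\ntranspiled = transpile(map_0.decompose(), basis_gates=['u1', 'u2', 'u3', 'cx'], optimization_level=0)\nops = transpiled.count_ops()\ncost = 10 * ops.get('cx', 0) + ops.get('u1', 0) + ops.get('u2', 0) + ops.get('u3', 0)\nprint(f'Approximate cost of map_0: {cost}')\n```",
"_____no_output_____"
]
],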
[
[
"print(f'Sample prediction: {sample_pred}')",
"Sample prediction: [0 0 0 0 0 0 2 2 2 2 2 2 3 3 3 3 3 3 3 3]\n"
],
[
"# Check your answer and submit using the following code\nfrom qc_grader import grade_ex3c\ngrade_ex3c(pred_test, sample_train, \n standard_scaler, pca, min_max_scaler,\n kernel_0, kernel_2, kernel_3,\n svc_0, svc_2, svc_3)",
"_____no_output_____"
]
],
[
[
"## Additional information\n\n**Created by:** Shota Nakasuji, Anna Phan\n\n**Version:** 1.0.0",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d052e02b278411bd3597ce9b80ed4a103ef29039 | 65,182 | ipynb | Jupyter Notebook | session-2/session-2.ipynb | takitsuba/kadenze_cadl | 7d413965fecd7be2d482147831faeea321d929ac | [
"Apache-2.0"
] | null | null | null | session-2/session-2.ipynb | takitsuba/kadenze_cadl | 7d413965fecd7be2d482147831faeea321d929ac | [
"Apache-2.0"
] | null | null | null | session-2/session-2.ipynb | takitsuba/kadenze_cadl | 7d413965fecd7be2d482147831faeea321d929ac | [
"Apache-2.0"
] | null | null | null | 44.860289 | 1,298 | 0.602175 | [
[
[
"# Session 2 - Training a Network w/ Tensorflow\n<p class=\"lead\">\nAssignment: Teach a Deep Neural Network to Paint\n</p>\n\n<p class=\"lead\">\nParag K. Mital<br />\n<a href=\"https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\">Creative Applications of Deep Learning w/ Tensorflow</a><br />\n<a href=\"https://www.kadenze.com/partners/kadenze-academy\">Kadenze Academy</a><br />\n<a href=\"https://twitter.com/hashtag/CADL\">#CADL</a>\n</p>\n\nThis work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.\n\n# Learning Goals\n\n* Learn how to create a Neural Network\n* Learn to use a neural network to paint an image\n* Apply creative thinking to the inputs, outputs, and definition of a network\n\n# Outline\n\n<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->\n\n- [Assignment Synopsis](#assignment-synopsis)\n- [Part One - Fully Connected Network](#part-one---fully-connected-network)\n - [Instructions](#instructions)\n - [Code](#code)\n - [Variable Scopes](#variable-scopes)\n- [Part Two - Image Painting Network](#part-two---image-painting-network)\n - [Instructions](#instructions-1)\n - [Preparing the Data](#preparing-the-data)\n - [Cost Function](#cost-function)\n - [Explore](#explore)\n - [A Note on Crossvalidation](#a-note-on-crossvalidation)\n- [Part Three - Learning More than One Image](#part-three---learning-more-than-one-image)\n - [Instructions](#instructions-2)\n - [Code](#code-1)\n- [Part Four - Open Exploration \\(Extra Credit\\)](#part-four---open-exploration-extra-credit)\n- [Assignment Submission](#assignment-submission)\n\n<!-- /MarkdownTOC -->\n\nThis next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you \"run\" it (use \"shift+enter\")!",
"_____no_output_____"
]
],
[
[
"# First check the Python version\nimport sys\nif sys.version_info < (3,4):\n print('You are running an older version of Python!\\n\\n' \\\n 'You should consider updating to Python 3.4.0 or ' \\\n 'higher as the libraries built for this course ' \\\n 'have only been tested in Python 3.4 and higher.\\n')\n print('Try installing the Python 3.5 version of anaconda '\n 'and then restart `jupyter notebook`:\\n' \\\n 'https://www.continuum.io/downloads\\n\\n')\n\n# Now get necessary libraries\ntry:\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\n from skimage import data\n from scipy.misc import imresize\nexcept ImportError:\n print('You are missing some packages! ' \\\n 'We will try installing them before continuing!')\n !pip install \"numpy>=1.11.0\" \"matplotlib>=1.5.1\" \"scikit-image>=0.11.3\" \"scikit-learn>=0.17\" \"scipy>=0.17.0\"\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\n from skimage import data\n from scipy.misc import imresize\n print('Done!')\n\n# Import Tensorflow\ntry:\n import tensorflow as tf\nexcept ImportError:\n print(\"You do not have tensorflow installed!\")\n print(\"Follow the instructions on the following link\")\n print(\"to install tensorflow before continuing:\")\n print(\"\")\n print(\"https://github.com/pkmital/CADL#installation-preliminaries\")\n\n# This cell includes the provided libraries from the zip file\n# and a library for displaying images from ipython, which\n# we will use to display the gif\ntry:\n from libs import utils, gif\n import IPython.display as ipyd\nexcept ImportError:\n print(\"Make sure you have started notebook in the same directory\" +\n \" as the provided zip file which includes the 'libs' folder\" +\n \" and the file 'utils.py' inside of it. You will NOT be able\"\n \" to complete this assignment unless you restart jupyter\"\n \" notebook inside the directory created by extracting\"\n \" the zip file or cloning the github repo.\")\n\n# We'll tell matplotlib to inline any drawn figures like so:\n%matplotlib inline\nplt.style.use('ggplot')",
"_____no_output_____"
],
[
"# Bit of formatting because I don't like the default inline code style:\nfrom IPython.core.display import HTML\nHTML(\"\"\"<style> .rendered_html code { \n padding: 2px 4px;\n color: #c7254e;\n background-color: #f9f2f4;\n border-radius: 4px;\n} </style>\"\"\")",
"_____no_output_____"
]
],
[
[
"<a name=\"assignment-synopsis\"></a>\n# Assignment Synopsis\n\nIn this assignment, we're going to create our first neural network capable of taking any two continuous values as inputs. Those two values will go through a series of multiplications, additions, and nonlinearities, coming out of the network as 3 outputs. Remember from the last homework, we used convolution to filter an image so that the representations in the image were accentuated. We're not going to be using convolution w/ Neural Networks until the next session, but we're effectively doing the same thing here: using multiplications to accentuate the representations in our data, in order to minimize whatever our cost function is. To find out what those multiplications need to be, we're going to use Gradient Descent and Backpropagation, which will take our cost, and find the appropriate updates to all the parameters in our network to best optimize the cost. In the next session, we'll explore much bigger networks and convolution. This \"toy\" network is really to help us get up and running with neural networks, and aid our exploration of the different components that make up a neural network. You will be expected to explore manipulations of the neural networks in this notebook as much as possible to help aid your understanding of how they effect the final result.\n\nWe're going to build our first neural network to understand what color \"to paint\" given a location in an image, or the row, col of the image. So in goes a row/col, and out goes a R/G/B. In the next lesson, we'll learn what this network is really doing is performing regression. For now, we'll focus on the creative applications of such a network to help us get a better understanding of the different components that make up the neural network. You'll be asked to explore many of the different components of a neural network, including changing the inputs/outputs (i.e. the dataset), the number of layers, their activation functions, the cost functions, learning rate, and batch size. You'll also explore a modification to this same network which takes a 3rd input: an index for an image. This will let us try to learn multiple images at once, though with limited success.\n\nWe'll now dive right into creating deep neural networks, and I'm going to show you the math along the way. Don't worry if a lot of it doesn't make sense, and it really takes a bit of practice before it starts to come together.",
"_____no_output_____"
],
[
"<a name=\"part-one---fully-connected-network\"></a>\n# Part One - Fully Connected Network\n\n<a name=\"instructions\"></a>\n## Instructions\nCreate the operations necessary for connecting an input to a network, defined by a `tf.Placeholder`, to a series of fully connected, or linear, layers, using the formula: \n\n$$\\textbf{H} = \\phi(\\textbf{X}\\textbf{W} + \\textbf{b})$$\n\nwhere $\\textbf{H}$ is an output layer representing the \"hidden\" activations of a network, $\\phi$ represents some nonlinearity, $\\textbf{X}$ represents an input to that layer, $\\textbf{W}$ is that layer's weight matrix, and $\\textbf{b}$ is that layer's bias. \n\nIf you're thinking, what is going on? Where did all that math come from? Don't be afraid of it. Once you learn how to \"speak\" the symbolic representation of the equation, it starts to get easier. And once we put it into practice with some code, it should start to feel like there is some association with what is written in the equation, and what we've written in code. Practice trying to say the equation in a meaningful way: \"The output of a hidden layer is equal to some input multiplied by another matrix, adding some bias, and applying a non-linearity\". Or perhaps: \"The hidden layer is equal to a nonlinearity applied to an input multiplied by a matrix and adding some bias\". Explore your own interpretations of the equation, or ways of describing it, and it starts to become much, much easier to apply the equation.\n\nThe first thing that happens in this equation is the input matrix $\\textbf{X}$ is multiplied by another matrix, $\\textbf{W}$. This is the most complicated part of the equation. It's performing matrix multiplication, as we've seen from last session, and is effectively scaling and rotating our input. The bias $\\textbf{b}$ allows for a global shift in the resulting values. Finally, the nonlinearity of $\\phi$ allows the input space to be nonlinearly warped, allowing it to express a lot more interesting distributions of data. Have a look below at some common nonlinearities. If you're unfamiliar with looking at graphs like this, it is common to read the horizontal axis as X, as the input, and the vertical axis as Y, as the output.",
"_____no_output_____"
]
],
[
[
"xs = np.linspace(-6, 6, 100)\nplt.plot(xs, np.maximum(xs, 0), label='relu')\nplt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')\nplt.plot(xs, np.tanh(xs), label='tanh')\nplt.xlabel('Input')\nplt.xlim([-6, 6])\nplt.ylabel('Output')\nplt.ylim([-1.5, 1.5])\nplt.title('Common Activation Functions/Nonlinearities')\nplt.legend(loc='lower right')",
"_____no_output_____"
]
],
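[
[
"To make the equation above concrete before building it in Tensorflow, here is a tiny numpy sketch of a single fully connected layer, H = activation(X W + b), using `relu` as the nonlinearity. The `_demo` names and the sizes (3 observations with 2 features mapped to 4 neurons) are arbitrary illustration values, not part of the assignment, which still asks you to build the layer with Tensorflow below.\n\n```python\n# Purely illustrative numpy version of a single fully connected layer.\nX_demo = np.random.rand(3, 2) # 3 observations, 2 input features\nW_demo = np.random.randn(2, 4) # weight matrix mapping 2 inputs to 4 neurons\nb_demo = np.zeros(4) # one bias per output neuron\nH_demo = np.maximum(np.dot(X_demo, W_demo) + b_demo, 0) # relu nonlinearity\nprint(H_demo.shape) # (3, 4): one 4-neuron activation per observation\n```",
"_____no_output_____"
]
],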
[
[
"Remember, having series of linear followed by nonlinear operations is what makes neural networks expressive. By stacking a lot of \"linear\" + \"nonlinear\" operations in a series, we can create a deep neural network! Have a look at the output ranges of the above nonlinearity when considering which nonlinearity seems most appropriate. For instance, the `relu` is always above 0, but does not saturate at any value above 0, meaning it can be anything above 0. That's unlike the `sigmoid` which does saturate at both 0 and 1, meaning its values for a single output neuron will always be between 0 and 1. Similarly, the `tanh` saturates at -1 and 1.\n\nChoosing between these is often a matter of trial and error. Though you can make some insights depending on your normalization scheme. For instance, if your output is expected to be in the range of 0 to 1, you may not want to use a `tanh` function, which ranges from -1 to 1, but likely would want to use a `sigmoid`. Keep the ranges of these activation functions in mind when designing your network, especially the final output layer of your network.",
"_____no_output_____"
],
[
"<a name=\"code\"></a>\n## Code\n\nIn this section, we're going to work out how to represent a fully connected neural network with code. First, create a 2D `tf.placeholder` called $\\textbf{X}$ with `None` for the batch size and 2 features. Make its `dtype` `tf.float32`. Recall that we use the dimension of `None` for the batch size dimension to say that this dimension can be any number. Here is the docstring for the `tf.placeholder` function, have a look at what args it takes:\n\nHelp on function placeholder in module `tensorflow.python.ops.array_ops`:\n\n```python\nplaceholder(dtype, shape=None, name=None)\n```\n\n Inserts a placeholder for a tensor that will be always fed.\n\n **Important**: This tensor will produce an error if evaluated. Its value must\n be fed using the `feed_dict` optional argument to `Session.run()`,\n `Tensor.eval()`, or `Operation.run()`.\n\n For example:\n\n```python\nx = tf.placeholder(tf.float32, shape=(1024, 1024))\ny = tf.matmul(x, x)\n\nwith tf.Session() as sess:\n print(sess.run(y)) # ERROR: will fail because x was not fed.\n\n rand_array = np.random.rand(1024, 1024)\n print(sess.run(y, feed_dict={x: rand_array})) # Will succeed.\n```\n\n Args:\n dtype: The type of elements in the tensor to be fed.\n shape: The shape of the tensor to be fed (optional). If the shape is not\n specified, you can feed a tensor of any shape.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` that may be used as a handle for feeding a value, but not\n evaluated directly.",
"_____no_output_____"
],
[
"<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# Create a placeholder with None x 2 dimensions of dtype tf.float32, and name it \"X\":\nX = ...",
"_____no_output_____"
]
],
[
[
"Now multiply the tensor using a new variable, $\\textbf{W}$, which has 2 rows and 20 columns, so that when it is left mutiplied by $\\textbf{X}$, the output of the multiplication is None x 20, giving you 20 output neurons. Recall that the `tf.matmul` function takes two arguments, the left hand ($\\textbf{X}$) and right hand side ($\\textbf{W}$) of a matrix multiplication.\n\nTo create $\\textbf{W}$, you will use `tf.get_variable` to create a matrix which is `2 x 20` in dimension. Look up the docstrings of functions `tf.get_variable` and `tf.random_normal_initializer` to get familiar with these functions. There are many options we will ignore for now. Just be sure to set the `name`, `shape` (this is the one that has to be [2, 20]), `dtype` (i.e. tf.float32), and `initializer` (the `tf.random_normal_intializer` you should create) when creating your $\\textbf{W}$ variable with `tf.get_variable(...)`.\n\nFor the random normal initializer, often the mean is set to 0, and the standard deviation is set based on the number of neurons. But that really depends on the input and outputs of your network, how you've \"normalized\" your dataset, what your nonlinearity/activation function is, and what your expected range of inputs/outputs are. Don't worry about the values for the initializer for now, as this part will take a bit more experimentation to understand better!\n\nThis part is to encourage you to learn how to look up the documentation on Tensorflow, ideally using `tf.get_variable?` in the notebook. If you are really stuck, just scroll down a bit and I've shown you how to use it. \n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"W = tf.get_variable(...\nh = tf.matmul(...",
"_____no_output_____"
]
],
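[
[
"One possible completion of the cell above, following the docstrings just described. The mean and standard deviation passed to the initializer here are only a reasonable starting point, not the only valid choice:\n\n```python\nW = tf.get_variable(\n    name='W',\n    shape=[2, 20],\n    dtype=tf.float32,\n    initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))\n\n# [None, 2] multiplied by [2, 20] gives [None, 20]: 20 output neurons\nh = tf.matmul(X, W)\n```",
"_____no_output_____"
]
],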
[
[
"And add to this result another new variable, $\\textbf{b}$, which has [20] dimensions. These values will be added to every output neuron after the multiplication above. Instead of the `tf.random_normal_initializer` that you used for creating $\\textbf{W}$, now use the `tf.constant_initializer`. Often for bias, you'll set the constant bias initialization to 0 or 1.\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"b = tf.get_variable(...\nh = tf.nn.bias_add(...",
"_____no_output_____"
]
],
[
[
"So far we have done:\n$$\\textbf{X}\\textbf{W} + \\textbf{b}$$\n\nFinally, apply a nonlinear activation to this output, such as `tf.nn.relu`, to complete the equation:\n\n$$\\textbf{H} = \\phi(\\textbf{X}\\textbf{W} + \\textbf{b})$$\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"h = ...",
"_____no_output_____"
]
],
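[
[
"One possible completion of the two cells above, using a constant initialization of 0 for the bias and `tf.nn.relu` as the nonlinearity (any of the activations discussed earlier would also work):\n\n```python\nb = tf.get_variable(\n    name='b',\n    shape=[20],\n    dtype=tf.float32,\n    initializer=tf.constant_initializer(0.0))\n\n# add the bias to every output neuron of the matrix multiplication...\nh = tf.nn.bias_add(value=tf.matmul(X, W), bias=b)\n\n# ...and then apply the nonlinearity\nh = tf.nn.relu(h)\n```",
"_____no_output_____"
]
],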
[
[
"Now that we've done all of this work, let's stick it inside a function. I've already done this for you and placed it inside the `utils` module under the function name `linear`. We've already imported the `utils` module so we can call it like so, `utils.linear(...)`. The docstring is copied below, and the code itself. Note that this function is slightly different to the one in the lecture. It does not require you to specify `n_input`, and the input `scope` is called `name`. It also has a few more extras in there including automatically converting a 4-d input tensor to a 2-d tensor so that you can fully connect the layer with a matrix multiply (don't worry about what this means if it doesn't make sense!).\n\n```python\nutils.linear??\n```\n\n```python\ndef linear(x, n_output, name=None, activation=None, reuse=None):\n \"\"\"Fully connected layer\n\n Parameters\n ----------\n x : tf.Tensor\n Input tensor to connect\n n_output : int\n Number of output neurons\n name : None, optional\n Scope to apply\n\n Returns\n -------\n op : tf.Tensor\n Output of fully connected layer.\n \"\"\"\n if len(x.get_shape()) != 2:\n x = flatten(x, reuse=reuse)\n\n n_input = x.get_shape().as_list()[1]\n\n with tf.variable_scope(name or \"fc\", reuse=reuse):\n W = tf.get_variable(\n name='W',\n shape=[n_input, n_output],\n dtype=tf.float32,\n initializer=tf.contrib.layers.xavier_initializer())\n\n b = tf.get_variable(\n name='b',\n shape=[n_output],\n dtype=tf.float32,\n initializer=tf.constant_initializer(0.0))\n\n h = tf.nn.bias_add(\n name='h',\n value=tf.matmul(x, W),\n bias=b)\n\n if activation:\n h = activation(h)\n\n return h, W\n```",
"_____no_output_____"
],
[
"<a name=\"variable-scopes\"></a>\n## Variable Scopes\n\nNote that since we are using `variable_scope` and explicitly telling the scope which name we would like, if there is *already* a variable created with the same name, then Tensorflow will raise an exception! If this happens, you should consider one of three possible solutions:\n\n1. If this happens while you are interactively editing a graph, you may need to reset the current graph:\n```python\n tf.reset_default_graph()\n```\nYou should really only have to use this if you are in an interactive console! If you are creating Python scripts to run via command line, you should really be using solution 3 listed below, and be explicit with your graph contexts! \n2. If this happens and you were not expecting any name conflicts, then perhaps you had a typo and created another layer with the same name! That's a good reason to keep useful names for everything in your graph!\n3. More likely, you should be using context managers when creating your graphs and running sessions. This works like so:\n\n ```python\n g = tf.Graph()\n with tf.Session(graph=g) as sess:\n Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu)\n ```\n\n or:\n\n ```python\n g = tf.Graph()\n with tf.Session(graph=g) as sess, g.as_default():\n Y_pred, W = linear(X, 2, 3, activation=tf.nn.relu)\n ```",
"_____no_output_____"
],
[
"You can now write the same process as the above steps by simply calling:",
"_____no_output_____"
]
],
[
[
"h, W = utils.linear(\n x=X, n_output=20, name='linear', activation=tf.nn.relu)",
"_____no_output_____"
]
],
[
[
"<a name=\"part-two---image-painting-network\"></a>\n# Part Two - Image Painting Network\n\n<a name=\"instructions-1\"></a>\n## Instructions\n\nFollow along the steps below, first setting up input and output data of the network, $\\textbf{X}$ and $\\textbf{Y}$. Then work through building the neural network which will try to compress the information in $\\textbf{X}$ through a series of linear and non-linear functions so that whatever it is given as input, it minimized the error of its prediction, $\\hat{\\textbf{Y}}$, and the true output $\\textbf{Y}$ through its training process. You'll also create an animated GIF of the training which you'll need to submit for the homework!\n\nThrough this, we'll explore our first creative application: painting an image. This network is just meant to demonstrate how easily networks can be scaled to more complicated tasks without much modification. It is also meant to get you thinking about neural networks as building blocks that can be reconfigured, replaced, reorganized, and get you thinking about how the inputs and outputs can be anything you can imagine.",
"_____no_output_____"
],
[
"<a name=\"preparing-the-data\"></a>\n## Preparing the Data\n\nWe'll follow an example that Andrej Karpathy has done in his online demonstration of \"image inpainting\". What we're going to do is teach the network to go from the location on an image frame to a particular color. So given any position in an image, the network will need to learn what color to paint. Let's first get an image that we'll try to teach a neural network to paint.\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# First load an image\nimg = ...\n\n# Be careful with the size of your image.\n# Try a fairly small image to begin with,\n# then come back here and try larger sizes.\nimg = imresize(img, (100, 100))\nplt.figure(figsize=(5, 5))\nplt.imshow(img)\n\n# Make sure you save this image as \"reference.png\"\n# and include it in your zipped submission file\n# so we can tell what image you are trying to paint!\nplt.imsave(fname='reference.png', arr=img)",
"_____no_output_____"
]
],
[
[
"In the lecture, I showed how to aggregate the pixel locations and their colors using a loop over every pixel position. I put that code into a function `split_image` below. Feel free to experiment with other features for `xs` or `ys`.",
"_____no_output_____"
]
],
[
[
"def split_image(img):\n # We'll first collect all the positions in the image in our list, xs\n xs = []\n\n # And the corresponding colors for each of these positions\n ys = []\n\n # Now loop over the image\n for row_i in range(img.shape[0]):\n for col_i in range(img.shape[1]):\n # And store the inputs\n xs.append([row_i, col_i])\n # And outputs that the network needs to learn to predict\n ys.append(img[row_i, col_i])\n\n # we'll convert our lists to arrays\n xs = np.array(xs)\n ys = np.array(ys)\n return xs, ys",
"_____no_output_____"
]
],
[
[
"Let's use this function to create the inputs (xs) and outputs (ys) to our network as the pixel locations (xs) and their colors (ys):",
"_____no_output_____"
]
],
[
[
"xs, ys = split_image(img)\n\n# and print the shapes\nxs.shape, ys.shape",
"_____no_output_____"
]
],
[
[
"Also remember, we should normalize our input values!\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# Normalize the input (xs) using its mean and standard deviation\nxs = ...\n\n# Just to make sure you have normalized it correctly:\nprint(np.min(xs), np.max(xs))\nassert(np.min(xs) > -3.0 and np.max(xs) < 3.0)",
"_____no_output_____"
]
],
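[
[
"One way to do this normalization (often called z-scoring) is to subtract the per-column mean and divide by the per-column standard deviation:\n\n```python\n# center each input feature at 0 and scale it to unit standard deviation\nxs = (xs - np.mean(xs, axis=0)) / np.std(xs, axis=0)\n```",
"_____no_output_____"
]
],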
[
[
"Similarly for the output:",
"_____no_output_____"
]
],
[
[
"print(np.min(ys), np.max(ys))",
"_____no_output_____"
]
],
[
[
"We'll normalize the output using a simpler normalization method, since we know the values range from 0-255:",
"_____no_output_____"
]
],
[
[
"ys = ys / 255.0\nprint(np.min(ys), np.max(ys))",
"_____no_output_____"
]
],
[
[
"Scaling the image values like this has the advantage that it is still interpretable as an image, unlike if we have negative values.\n\nWhat we're going to do is use regression to predict the value of a pixel given its (row, col) position. So the input to our network is `X = (row, col)` value. And the output of the network is `Y = (r, g, b)`.\n\nWe can get our original image back by reshaping the colors back into the original image shape. This works because the `ys` are still in order:",
"_____no_output_____"
]
],
[
[
"plt.imshow(ys.reshape(img.shape))",
"_____no_output_____"
]
],
[
[
"But when we give inputs of (row, col) to our network, it won't know what order they are, because we will randomize them. So it will have to *learn* what color value should be output for any given (row, col).\n\nCreate 2 placeholders of `dtype` `tf.float32`: one for the input of the network, a `None x 2` dimension placeholder called $\\textbf{X}$, and another for the true output of the network, a `None x 3` dimension placeholder called $\\textbf{Y}$.\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# Let's reset the graph:\ntf.reset_default_graph()\n\n# Create a placeholder of None x 2 dimensions and dtype tf.float32\n# This will be the input to the network which takes the row/col\nX = tf.placeholder(...\n\n# Create the placeholder, Y, with 3 output dimensions instead of 2.\n# This will be the output of the network, the R, G, B values.\nY = tf.placeholder(...",
"_____no_output_____"
]
],
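[
[
"For reference, one possible completion of the cell above; the names are optional but make the graph easier to inspect:\n\n```python\n# row/col input positions\nX = tf.placeholder(name='X', shape=[None, 2], dtype=tf.float32)\n# R, G, B target colors\nY = tf.placeholder(name='Y', shape=[None, 3], dtype=tf.float32)\n```",
"_____no_output_____"
]
],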
[
[
"Now create a deep neural network that takes your network input $\\textbf{X}$ of 2 neurons, multiplies it by a linear and non-linear transformation which makes its shape [None, 20], meaning it will have 20 output neurons. Then repeat the same process again to give you 20 neurons again, and then again and again until you've done 6 layers of 20 neurons. Then finally one last layer which will output 3 neurons, your predicted output, which I've been denoting mathematically as $\\hat{\\textbf{Y}}$, for a total of 6 hidden layers, or 8 layers total including the input and output layers. Mathematically, we'll be creating a deep neural network that looks just like the previous fully connected layer we've created, but with a few more connections. So recall the first layer's connection is:\n\n\\begin{align}\n\\textbf{H}_1=\\phi(\\textbf{X}\\textbf{W}_1 + \\textbf{b}_1) \\\\\n\\end{align}\n\nSo the next layer will take that output, and connect it up again:\n\n\\begin{align}\n\\textbf{H}_2=\\phi(\\textbf{H}_1\\textbf{W}_2 + \\textbf{b}_2) \\\\\n\\end{align}\n\nAnd same for every other layer:\n\n\\begin{align}\n\\textbf{H}_3=\\phi(\\textbf{H}_2\\textbf{W}_3 + \\textbf{b}_3) \\\\\n\\textbf{H}_4=\\phi(\\textbf{H}_3\\textbf{W}_4 + \\textbf{b}_4) \\\\\n\\textbf{H}_5=\\phi(\\textbf{H}_4\\textbf{W}_5 + \\textbf{b}_5) \\\\\n\\textbf{H}_6=\\phi(\\textbf{H}_5\\textbf{W}_6 + \\textbf{b}_6) \\\\\n\\end{align}\n\nIncluding the very last layer, which will be the prediction of the network:\n\n\\begin{align}\n\\hat{\\textbf{Y}}=\\phi(\\textbf{H}_6\\textbf{W}_7 + \\textbf{b}_7)\n\\end{align}\n\nRemember if you run into issues with variable scopes/names, that you cannot recreate a variable with the same name! Revisit the section on <a href='#Variable-Scopes'>Variable Scopes</a> if you get stuck with name issues.\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# We'll create 6 hidden layers. Let's create a variable\n# to say how many neurons we want for each of the layers\n# (try 20 to begin with, then explore other values)\nn_neurons = ...\n\n# Create the first linear + nonlinear layer which will\n# take the 2 input neurons and fully connects it to 20 neurons.\n# Use the `utils.linear` function to do this just like before,\n# but also remember to give names for each layer, such as\n# \"1\", \"2\", ... \"5\", or \"layer1\", \"layer2\", ... \"layer6\".\nh1, W1 = ...\n\n# Create another one:\nh2, W2 = ...\n\n# and four more (or replace all of this with a loop if you can!):\nh3, W3 = ...\nh4, W4 = ...\nh5, W5 = ...\nh6, W6 = ...\n\n# Now, make one last layer to make sure your network has 3 outputs:\nY_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')",
"_____no_output_____"
],
[
"assert(X.get_shape().as_list() == [None, 2])\nassert(Y_pred.get_shape().as_list() == [None, 3])\nassert(Y.get_shape().as_list() == [None, 3])",
"_____no_output_____"
]
],
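[
[
"Here is one way the network above could be written with a loop instead of six separate calls. With 20 neurons per layer and `tf.nn.relu` as the activation it satisfies the assertions above; the layer names are arbitrary, as long as each one is unique:\n\n```python\nn_neurons = 20\n\ncurrent_input = X\nfor layer_i in range(6):\n    current_input, _ = utils.linear(\n        x=current_input, n_output=n_neurons,\n        name='layer{}'.format(layer_i + 1),\n        activation=tf.nn.relu)\nh6 = current_input\n\n# the final layer maps the 20 neurons down to the 3 color channels\nY_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')\n```",
"_____no_output_____"
]
],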
[
[
"<a name=\"cost-function\"></a>\n## Cost Function\n\nNow we're going to work on creating a `cost` function. The cost should represent how much `error` there is in the network, and provide the optimizer this value to help it train the network's parameters using gradient descent and backpropagation.\n\nLet's say our error is `E`, then the cost will be:\n\n$$cost(\\textbf{Y}, \\hat{\\textbf{Y}}) = \\frac{1}{\\text{B}} \\displaystyle\\sum\\limits_{b=0}^{\\text{B}} \\textbf{E}_b\n$$\n\nwhere the error is measured as, e.g.:\n\n$$\\textbf{E} = \\displaystyle\\sum\\limits_{c=0}^{\\text{C}} (\\textbf{Y}_{c} - \\hat{\\textbf{Y}}_{c})^2$$\n\nDon't worry if this scares you. This is mathematically expressing the same concept as: \"the cost of an actual $\\textbf{Y}$, and a predicted $\\hat{\\textbf{Y}}$ is equal to the mean across batches, of which there are $\\text{B}$ total batches, of the sum of distances across $\\text{C}$ color channels of every predicted output and true output\". Basically, we're trying to see on average, or at least within a single minibatches average, how wrong was our prediction? We create a measure of error for every output feature by squaring the predicted output and the actual output it should have, i.e. the actual color value it should have output for a given input pixel position. By squaring it, we penalize large distances, but not so much small distances.\n\nConsider how the square function (i.e., $f(x) = x^2$) changes for a given error. If our color values range between 0-255, then a typical amount of error would be between $0$ and $128^2$. For example if my prediction was (120, 50, 167), and the color should have been (0, 100, 120), then the error for the Red channel is (120 - 0) or 120. And the Green channel is (50 - 100) or -50, and for the Blue channel, (167 - 120) = 47. When I square this result, I get: (120)^2, (-50)^2, and (47)^2. I then add all of these and that is my error, $\\textbf{E}$, for this one observation. But I will have a few observations per minibatch. So I add all the error in my batch together, then divide by the number of observations in the batch, essentially finding the mean error of my batch. \n\nLet's try to see what the square in our measure of error is doing graphically.",
"_____no_output_____"
]
],
[
[
"error = np.linspace(0.0, 128.0**2, 100)\nloss = error**2.0\nplt.plot(error, loss)\nplt.xlabel('error')\nplt.ylabel('loss')",
"_____no_output_____"
]
],
[
[
"This is known as the $l_2$ (pronounced el-two) loss. It doesn't penalize small errors as much as it does large errors. This is easier to see when we compare it with another common loss, the $l_1$ (el-one) loss. It is linear in error, by taking the absolute value of the error. We'll compare the $l_1$ loss with normalized values from $0$ to $1$. So instead of having $0$ to $255$ for our RGB values, we'd have $0$ to $1$, simply by dividing our color values by $255.0$.",
"_____no_output_____"
]
],
[
[
"error = np.linspace(0.0, 1.0, 100)\nplt.plot(error, error**2, label='l_2 loss')\nplt.plot(error, np.abs(error), label='l_1 loss')\nplt.xlabel('error')\nplt.ylabel('loss')\nplt.legend(loc='lower right')",
"_____no_output_____"
]
],
[
[
"So unlike the $l_2$ loss, the $l_1$ loss is really quickly upset if there is *any* error at all: as soon as error moves away from $0.0$, to $0.1$, the $l_1$ loss is $0.1$. But the $l_2$ loss is $0.1^2 = 0.01$. Having a stronger penalty on smaller errors often leads to what the literature calls \"sparse\" solutions, since it favors activations that try to explain as much of the data as possible, rather than a lot of activations that do a sort of good job, but when put together, do a great job of explaining the data. Don't worry about what this means if you are more unfamiliar with Machine Learning. There is a lot of literature surrounding each of these loss functions that we won't have time to get into, but look them up if they interest you.\n\nDuring the lecture, we've seen how to create a cost function using Tensorflow. To create a $l_2$ loss function, you can for instance use tensorflow's `tf.squared_difference` or for an $l_1$ loss function, `tf.abs`. You'll need to refer to the `Y` and `Y_pred` variables only, and your resulting cost should be a single value. Try creating the $l_1$ loss to begin with, and come back here after you have trained your network, to compare the performance with a $l_2$ loss.\n\nThe equation for computing cost I mentioned above is more succintly written as, for $l_2$ norm:\n\n$$cost(\\textbf{Y}, \\hat{\\textbf{Y}}) = \\frac{1}{\\text{B}} \\displaystyle\\sum\\limits_{b=0}^{\\text{B}} \\displaystyle\\sum\\limits_{c=0}^{\\text{C}} (\\textbf{Y}_{c} - \\hat{\\textbf{Y}}_{c})^2$$\n\nFor $l_1$ norm, we'd have:\n\n$$cost(\\textbf{Y}, \\hat{\\textbf{Y}}) = \\frac{1}{\\text{B}} \\displaystyle\\sum\\limits_{b=0}^{\\text{B}} \\displaystyle\\sum\\limits_{c=0}^{\\text{C}} \\text{abs}(\\textbf{Y}_{c} - \\hat{\\textbf{Y}}_{c})$$\n\nRemember, to understand this equation, try to say it out loud: the $cost$ given two variables, $\\textbf{Y}$, the actual output we want the network to have, and $\\hat{\\textbf{Y}}$ the predicted output from the network, is equal to the mean across $\\text{B}$ batches, of the sum of $\\textbf{C}$ color channels distance between the actual and predicted outputs. If you're still unsure, refer to the lecture where I've computed this, or scroll down a bit to where I've included the answer.\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# first compute the error, the inner part of the summation.\n# This should be the l1-norm or l2-norm of the distance\n# between each color channel.\nerror = ...\nassert(error.get_shape().as_list() == [None, 3])",
"_____no_output_____"
]
],
[
[
"<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# Now sum the error for each feature in Y. \n# If Y is [Batch, Features], the sum should be [Batch]:\nsum_error = ...\nassert(sum_error.get_shape().as_list() == [None])",
"_____no_output_____"
]
],
[
[
"<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# Finally, compute the cost, as the mean error of the batch.\n# This should be a single value.\ncost = ...\nassert(cost.get_shape().as_list() == [])",
"_____no_output_____"
]
],
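[
[
"One possible completion of the three cells above, starting with the $l_1$ loss as suggested (swap `tf.abs(Y - Y_pred)` for `tf.squared_difference(Y, Y_pred)` to try the $l_2$ version later):\n\n```python\n# distance between prediction and target for each color channel: [None, 3]\nerror = tf.abs(Y - Y_pred)\n\n# sum over the color channels: [None]\nsum_error = tf.reduce_sum(error, 1)\n\n# mean over the observations in the batch: a single scalar\ncost = tf.reduce_mean(sum_error)\n```",
"_____no_output_____"
]
],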
[
[
"We now need an `optimizer` which will take our `cost` and a `learning_rate`, which says how far along the gradient to move. This optimizer calculates all the gradients in our network with respect to the `cost` variable and updates all of the weights in our network using backpropagation. We'll then create mini-batches of our training data and run the `optimizer` using a `session`.\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# Refer to the help for the function\noptimizer = tf.train....minimize(cost)\n\n# Create parameters for the number of iterations to run for (< 100)\nn_iterations = ...\n\n# And how much data is in each minibatch (< 500)\nbatch_size = ...\n\n# Then create a session\nsess = tf.Session()",
"_____no_output_____"
]
],
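[
[
"One reasonable way of filling in the cell above. The choice of optimizer, learning rate, number of iterations, and batch size are exactly the things you are encouraged to experiment with, so treat these values only as a starting point:\n\n```python\n# Adam is one option; tf.train.GradientDescentOptimizer also works\noptimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)\n\nn_iterations = 50\nbatch_size = 200\n\nsess = tf.Session()\n```",
"_____no_output_____"
]
],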
[
[
"We'll now train our network! The code below should do this for you if you've setup everything else properly. Please read through this and make sure you understand each step! Note that this can take a VERY LONG time depending on the size of your image (make it < 100 x 100 pixels), the number of neurons per layer (e.g. < 30), the number of layers (e.g. < 8), and number of iterations (< 1000). Welcome to Deep Learning :)",
"_____no_output_____"
]
],
[
[
"# Initialize all your variables and run the operation with your session\nsess.run(tf.initialize_all_variables())\n\n# Optimize over a few iterations, each time following the gradient\n# a little at a time\nimgs = []\ncosts = []\ngif_step = n_iterations // 10\nstep_i = 0\n\nfor it_i in range(n_iterations):\n \n # Get a random sampling of the dataset\n idxs = np.random.permutation(range(len(xs)))\n \n # The number of batches we have to iterate over\n n_batches = len(idxs) // batch_size\n \n # Now iterate over our stochastic minibatches:\n for batch_i in range(n_batches):\n \n # Get just minibatch amount of data\n idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]\n\n # And optimize, also returning the cost so we can monitor\n # how our optimization is doing.\n training_cost = sess.run(\n [cost, optimizer],\n feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})[0]\n\n # Also, every 20 iterations, we'll draw the prediction of our\n # input xs, which should try to recreate our image!\n if (it_i + 1) % gif_step == 0:\n costs.append(training_cost / n_batches)\n ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)\n img = np.clip(ys_pred.reshape(img.shape), 0, 1)\n imgs.append(img)\n # Plot the cost over time\n fig, ax = plt.subplots(1, 2)\n ax[0].plot(costs)\n ax[0].set_xlabel('Iteration')\n ax[0].set_ylabel('Cost')\n ax[1].imshow(img)\n fig.suptitle('Iteration {}'.format(it_i))\n plt.show()",
"_____no_output_____"
],
[
"# Save the images as a GIF\n_ = gif.build_gif(imgs, saveto='single.gif', show_gif=False)",
"_____no_output_____"
]
],
[
[
"Let's now display the GIF we've just created:",
"_____no_output_____"
]
],
[
[
"ipyd.Image(url='single.gif?{}'.format(np.random.rand()),\n height=500, width=500)",
"_____no_output_____"
]
],
[
[
"<a name=\"explore\"></a>\n## Explore\n\nGo back over the previous cells and exploring changing different parameters of the network. I would suggest first trying to change the `learning_rate` parameter to different values and see how the cost curve changes. What do you notice? Try exponents of $10$, e.g. $10^1$, $10^2$, $10^3$... and so on. Also try changing the `batch_size`: $50, 100, 200, 500, ...$ How does it effect how the cost changes over time?\n\nBe sure to explore other manipulations of the network, such as changing the loss function to $l_2$ or $l_1$. How does it change the resulting learning? Also try changing the activation functions, the number of layers/neurons, different optimizers, and anything else that you may think of, and try to get a basic understanding on this toy problem of how it effects the network's training. Also try comparing creating a fairly shallow/wide net (e.g. 1-2 layers with many neurons, e.g. > 100), versus a deep/narrow net (e.g. 6-20 layers with fewer neurons, e.g. < 20). What do you notice?",
"_____no_output_____"
],
[
"<a name=\"a-note-on-crossvalidation\"></a>\n## A Note on Crossvalidation\n\nThe cost curve plotted above is only showing the cost for our \"training\" dataset. Ideally, we should split our dataset into what are called \"train\", \"validation\", and \"test\" sets. This is done by taking random subsets of the entire dataset. For instance, we partition our dataset by saying we'll only use 80% of it for training, 10% for validation, and the last 10% for testing. Then when training as above, you would only use the 80% of the data you had partitioned, and then monitor accuracy on both the data you have used to train, but also that new 10% of unseen validation data. This gives you a sense of how \"general\" your network is. If it is performing just as well on that 10% of data, then you know it is doing a good job. Finally, once you are done training, you would test one last time on your \"test\" dataset. Ideally, you'd do this a number of times, so that every part of the dataset had a chance to be the test set. This would also give you a measure of the variance of the accuracy on the final test. If it changes a lot, you know something is wrong. If it remains fairly stable, then you know that it is a good representation of the model's accuracy on unseen data.\n\nWe didn't get a chance to cover this in class, as it is less useful for exploring creative applications, though it is very useful to know and to use in practice, as it avoids overfitting/overgeneralizing your network to all of the data. Feel free to explore how to do this on the application above!",
"_____no_output_____"
],
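[
"If you do want to try this, a minimal sketch of such an 80/10/10 split might look like the following, assuming `xs` and `ys` are the arrays built earlier:\n\n```python\nidxs = np.random.permutation(len(xs))\nn_train = int(len(xs) * 0.8)\nn_valid = int(len(xs) * 0.1)\n\ntrain_idxs = idxs[:n_train]\nvalid_idxs = idxs[n_train:n_train + n_valid]\ntest_idxs = idxs[n_train + n_valid:]\n\nxs_train, ys_train = xs[train_idxs], ys[train_idxs]\nxs_valid, ys_valid = xs[valid_idxs], ys[valid_idxs]\nxs_test, ys_test = xs[test_idxs], ys[test_idxs]\n```\n\nYou would then feed only `xs_train`/`ys_train` to the optimizer, and evaluate the cost on `xs_valid`/`ys_valid` to get a sense of how well the network generalizes.",
"_____no_output_____"
],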
[
"<a name=\"part-three---learning-more-than-one-image\"></a>\n# Part Three - Learning More than One Image\n\n<a name=\"instructions-2\"></a>\n## Instructions\n\nWe're now going to make use of our Dataset from Session 1 and apply what we've just learned to try and paint every single image in our dataset. How would you guess is the best way to approach this? We could for instance feed in every possible image by having multiple row, col -> r, g, b values. So for any given row, col, we'd have 100 possible r, g, b values. This likely won't work very well as there are many possible values a pixel could take, not just one. What if we also tell the network *which* image's row and column we wanted painted? We're going to try and see how that does.\n\nYou can execute all of the cells below unchanged to see how this works with the first 100 images of the celeb dataset. But you should replace the images with your own dataset, and vary the parameters of the network to get the best results!\n\nI've placed the same code for running the previous algorithm into two functions, `build_model` and `train`. You can directly call the function `train` with a 4-d image shaped as N x H x W x C, and it will collect all of the points of every image and try to predict the output colors of those pixels, just like before. The only difference now is that you are able to try this with a few images at a time. There are a few ways we could have tried to handle multiple images. The way I've shown in the `train` function is to include an additional input neuron for *which* image it is. So as well as receiving the row and column, the network will also receive as input which image it is as a number. This should help the network to better distinguish the patterns it uses, as it has knowledge that helps it separates its process based on which image is fed as input.",
"_____no_output_____"
]
],
[
[
"def build_model(xs, ys, n_neurons, n_layers, activation_fn,\n final_activation_fn, cost_type):\n \n xs = np.asarray(xs)\n ys = np.asarray(ys)\n \n if xs.ndim != 2:\n raise ValueError(\n 'xs should be a n_observates x n_features, ' +\n 'or a 2-dimensional array.')\n if ys.ndim != 2:\n raise ValueError(\n 'ys should be a n_observates x n_features, ' +\n 'or a 2-dimensional array.')\n \n n_xs = xs.shape[1]\n n_ys = ys.shape[1]\n \n X = tf.placeholder(name='X', shape=[None, n_xs],\n dtype=tf.float32)\n Y = tf.placeholder(name='Y', shape=[None, n_ys],\n dtype=tf.float32)\n\n current_input = X\n for layer_i in range(n_layers):\n current_input = utils.linear(\n current_input, n_neurons,\n activation=activation_fn,\n name='layer{}'.format(layer_i))[0]\n\n Y_pred = utils.linear(\n current_input, n_ys,\n activation=final_activation_fn,\n name='pred')[0]\n \n if cost_type == 'l1_norm':\n cost = tf.reduce_mean(tf.reduce_sum(\n tf.abs(Y - Y_pred), 1))\n elif cost_type == 'l2_norm':\n cost = tf.reduce_mean(tf.reduce_sum(\n tf.squared_difference(Y, Y_pred), 1))\n else:\n raise ValueError(\n 'Unknown cost_type: {}. '.format(\n cost_type) + 'Use only \"l1_norm\" or \"l2_norm\"')\n \n return {'X': X, 'Y': Y, 'Y_pred': Y_pred, 'cost': cost}",
"_____no_output_____"
],
[
"def train(imgs,\n learning_rate=0.0001,\n batch_size=200,\n n_iterations=10,\n gif_step=2,\n n_neurons=30,\n n_layers=10,\n activation_fn=tf.nn.relu,\n final_activation_fn=tf.nn.tanh,\n cost_type='l2_norm'):\n\n N, H, W, C = imgs.shape\n all_xs, all_ys = [], []\n for img_i, img in enumerate(imgs):\n xs, ys = split_image(img)\n all_xs.append(np.c_[xs, np.repeat(img_i, [xs.shape[0]])])\n all_ys.append(ys)\n xs = np.array(all_xs).reshape(-1, 3)\n xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)\n ys = np.array(all_ys).reshape(-1, 3)\n ys = ys / 127.5 - 1\n\n g = tf.Graph()\n with tf.Session(graph=g) as sess:\n model = build_model(xs, ys, n_neurons, n_layers,\n activation_fn, final_activation_fn,\n cost_type)\n optimizer = tf.train.AdamOptimizer(\n learning_rate=learning_rate).minimize(model['cost'])\n sess.run(tf.initialize_all_variables())\n gifs = []\n costs = []\n step_i = 0\n for it_i in range(n_iterations):\n # Get a random sampling of the dataset\n idxs = np.random.permutation(range(len(xs)))\n\n # The number of batches we have to iterate over\n n_batches = len(idxs) // batch_size\n training_cost = 0\n\n # Now iterate over our stochastic minibatches:\n for batch_i in range(n_batches):\n\n # Get just minibatch amount of data\n idxs_i = idxs[batch_i * batch_size:\n (batch_i + 1) * batch_size]\n\n # And optimize, also returning the cost so we can monitor\n # how our optimization is doing.\n cost = sess.run(\n [model['cost'], optimizer],\n feed_dict={model['X']: xs[idxs_i],\n model['Y']: ys[idxs_i]})[0]\n training_cost += cost\n\n print('iteration {}/{}: cost {}'.format(\n it_i + 1, n_iterations, training_cost / n_batches))\n\n # Also, every 20 iterations, we'll draw the prediction of our\n # input xs, which should try to recreate our image!\n if (it_i + 1) % gif_step == 0:\n costs.append(training_cost / n_batches)\n ys_pred = model['Y_pred'].eval(\n feed_dict={model['X']: xs}, session=sess)\n img = ys_pred.reshape(imgs.shape)\n gifs.append(img)\n return gifs",
"_____no_output_____"
]
],
[
[
"<a name=\"code-1\"></a>\n## Code\n\nBelow, I've shown code for loading the first 100 celeb files. Run through the next few cells to see how this works with the celeb dataset, and then come back here and replace the `imgs` variable with your own set of images. For instance, you can try your entire sorted dataset from Session 1 as an N x H x W x C array. Explore!\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"celeb_imgs = utils.get_celeb_imgs()\nplt.figure(figsize=(10, 10))\nplt.imshow(utils.montage(celeb_imgs).astype(np.uint8))\n# It doesn't have to be 100 images, explore!\nimgs = np.array(celeb_imgs).copy()",
"_____no_output_____"
]
],
[
[
"Explore changing the parameters of the `train` function and your own dataset of images. Note, you do not have to use the dataset from the last assignment! Explore different numbers of images, whatever you prefer.\n\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# Change the parameters of the train function and\n# explore changing the dataset\ngifs = train(imgs=imgs)",
"_____no_output_____"
]
],
[
[
"Now we'll create a gif out of the training process. Be sure to call this 'multiple.gif' for your homework submission:",
"_____no_output_____"
]
],
[
[
"montage_gifs = [np.clip(utils.montage(\n (m * 127.5) + 127.5), 0, 255).astype(np.uint8)\n for m in gifs]\n_ = gif.build_gif(montage_gifs, saveto='multiple.gif')",
"_____no_output_____"
]
],
[
[
"And show it in the notebook",
"_____no_output_____"
]
],
[
[
"ipyd.Image(url='multiple.gif?{}'.format(np.random.rand()),\n height=500, width=500)",
"_____no_output_____"
]
],
[
[
"What we're seeing is the training process over time. We feed in our `xs`, which consist of the pixel values of each of our 100 images, it goes through the neural network, and out come predicted color values for every possible input value. We visualize it above as a gif by seeing how at each iteration the network has predicted the entire space of the inputs. We can visualize just the last iteration as a \"latent\" space, going from the first image (the top left image in the montage), to the last image, (the bottom right image).",
"_____no_output_____"
]
],
[
[
"final = gifs[-1]\nfinal_gif = [np.clip(((m * 127.5) + 127.5), 0, 255).astype(np.uint8) for m in final]\ngif.build_gif(final_gif, saveto='final.gif')",
"_____no_output_____"
],
[
"ipyd.Image(url='final.gif?{}'.format(np.random.rand()),\n height=200, width=200)",
"_____no_output_____"
]
],
[
[
"<a name=\"part-four---open-exploration-extra-credit\"></a>\n# Part Four - Open Exploration (Extra Credit)\n\nI now what you to explore what other possible manipulations of the network and/or dataset you could imagine. Perhaps a process that does the reverse, tries to guess where a given color should be painted? What if it was only taught a certain palette, and had to reason about other colors, how it would interpret those colors? Or what if you fed it pixel locations that weren't part of the training set, or outside the frame of what it was trained on? Or what happens with different activation functions, number of layers, increasing number of neurons or lesser number of neurons? I leave any of these as an open exploration for you.\n\nTry exploring this process with your own ideas, materials, and networks, and submit something you've created as a gif! To aid exploration, be sure to scale the image down quite a bit or it will require a much larger machine, and much more time to train. Then whenever you think you may be happy with the process you've created, try scaling up the resolution and leave the training to happen over a few hours/overnight to produce something truly stunning!\n\nMake sure to name the result of your gif: \"explore.gif\", and be sure to include it in your zip file.",
"_____no_output_____"
],
[
"<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>",
"_____no_output_____"
]
],
[
[
"# Train a network to produce something, storing every few\n# iterations in the variable gifs, then export the training\n# over time as a gif.\n...\n\n\ngif.build_gif(montage_gifs, saveto='explore.gif')",
"_____no_output_____"
],
[
"ipyd.Image(url='explore.gif?{}'.format(np.random.rand()),\n height=500, width=500)",
"_____no_output_____"
]
],
[
[
"<a name=\"assignment-submission\"></a>\n# Assignment Submission\n\nAfter you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:\n\n<pre>\n session-2/\n session-2.ipynb\n single.gif\n multiple.gif\n final.gif\n explore.gif*\n libs/\n utils.py\n \n * = optional/extra-credit\n</pre>\n\nYou'll then submit this zip file for your second assignment on Kadenze for \"Assignment 2: Teach a Deep Neural Network to Paint\"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.\n\nTo get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the [#CADL](https://twitter.com/hashtag/CADL) community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\n\nAlso, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work!",
"_____no_output_____"
]
],
[
[
"utils.build_submission('session-2.zip',\n ('reference.png',\n 'single.gif',\n 'multiple.gif',\n 'final.gif',\n 'session-2.ipynb'),\n ('explore.gif'))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d052e880a0111724f473f0e437a14e977145884b | 294,931 | ipynb | Jupyter Notebook | CustSeg.ipynb | pranjalAI/Segmentation-of-Credit-Card-Customers | 703bddde5478b2667ad3f7d6684ea4ab61780592 | [
"MIT"
] | 3 | 2020-08-29T13:10:42.000Z | 2021-06-12T09:58:46.000Z | CustSeg.ipynb | pranjalAI/Segmentation-of-Credit-Card-Customers | 703bddde5478b2667ad3f7d6684ea4ab61780592 | [
"MIT"
] | null | null | null | CustSeg.ipynb | pranjalAI/Segmentation-of-Credit-Card-Customers | 703bddde5478b2667ad3f7d6684ea4ab61780592 | [
"MIT"
] | 3 | 2021-06-06T14:05:49.000Z | 2021-06-12T10:03:51.000Z | 55.106689 | 59,872 | 0.552095 | [
[
[
"#!pip install pandas_profiling\n#!pip install matplotlib",
"_____no_output_____"
],
[
"import sys\nsys.version",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\nimport scipy.stats as stats\nimport pandas_profiling\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = 10, 7.5\nplt.rcParams['axes.grid'] = True\n\nfrom matplotlib.backends.backend_pdf import PdfPages",
"_____no_output_____"
],
[
"from sklearn.cluster import KMeans\n\n# center and scale the data\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import PCA\nimport sklearn.metrics as metrics",
"_____no_output_____"
],
[
"# reading data into dataframe\nCust= pd.read_csv(\"CC_GENERAL.csv\")",
"_____no_output_____"
],
[
"Cust.head()",
"_____no_output_____"
],
[
"### Exporting pandas profiling output to html file\n\noutput = pandas_profiling.ProfileReport(Cust)\n\noutput.to_file(output_file='pandas_profiling.html')",
"_____no_output_____"
]
],
[
[
"### Cols to drop",
"_____no_output_____"
]
],
[
[
"# CUST_ID,ONEOFF_PURCHASES",
"_____no_output_____"
],
[
"Cust.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 8950 entries, 0 to 8949\nData columns (total 18 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 CUST_ID 8950 non-null object \n 1 BALANCE 8950 non-null float64\n 2 BALANCE_FREQUENCY 8950 non-null float64\n 3 PURCHASES 8950 non-null float64\n 4 ONEOFF_PURCHASES 8950 non-null float64\n 5 INSTALLMENTS_PURCHASES 8950 non-null float64\n 6 CASH_ADVANCE 8950 non-null float64\n 7 PURCHASES_FREQUENCY 8950 non-null float64\n 8 ONEOFF_PURCHASES_FREQUENCY 8950 non-null float64\n 9 PURCHASES_INSTALLMENTS_FREQUENCY 8950 non-null float64\n 10 CASH_ADVANCE_FREQUENCY 8950 non-null float64\n 11 CASH_ADVANCE_TRX 8950 non-null int64 \n 12 PURCHASES_TRX 8950 non-null int64 \n 13 CREDIT_LIMIT 8949 non-null float64\n 14 PAYMENTS 8950 non-null float64\n 15 MINIMUM_PAYMENTS 8637 non-null float64\n 16 PRC_FULL_PAYMENT 8950 non-null float64\n 17 TENURE 8950 non-null int64 \ndtypes: float64(14), int64(3), object(1)\nmemory usage: 1.2+ MB\n"
],
[
"Cust.drop([\"CUST_ID\",\"ONEOFF_PURCHASES\"], axis=1, inplace=True)",
"_____no_output_____"
],
[
"Cust.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 8950 entries, 0 to 8949\nData columns (total 16 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 BALANCE 8950 non-null float64\n 1 BALANCE_FREQUENCY 8950 non-null float64\n 2 PURCHASES 8950 non-null float64\n 3 INSTALLMENTS_PURCHASES 8950 non-null float64\n 4 CASH_ADVANCE 8950 non-null float64\n 5 PURCHASES_FREQUENCY 8950 non-null float64\n 6 ONEOFF_PURCHASES_FREQUENCY 8950 non-null float64\n 7 PURCHASES_INSTALLMENTS_FREQUENCY 8950 non-null float64\n 8 CASH_ADVANCE_FREQUENCY 8950 non-null float64\n 9 CASH_ADVANCE_TRX 8950 non-null int64 \n 10 PURCHASES_TRX 8950 non-null int64 \n 11 CREDIT_LIMIT 8949 non-null float64\n 12 PAYMENTS 8950 non-null float64\n 13 MINIMUM_PAYMENTS 8637 non-null float64\n 14 PRC_FULL_PAYMENT 8950 non-null float64\n 15 TENURE 8950 non-null int64 \ndtypes: float64(13), int64(3)\nmemory usage: 1.1 MB\n"
],
[
"Cust.TENURE.unique()",
"_____no_output_____"
],
[
"#Handling Outliers - Method2\ndef outlier_capping(x):\n x = x.clip(upper=x.quantile(0.99), lower=x.quantile(0.01))\n return x\n\nCust=Cust.apply(lambda x: outlier_capping(x))",
"_____no_output_____"
],
[
"#Handling missings - Method2\ndef Missing_imputation(x):\n x = x.fillna(x.median())\n return x\n\nCust=Cust.apply(lambda x: Missing_imputation(x))",
"_____no_output_____"
],
[
"Cust.corr()",
"_____no_output_____"
],
[
"# visualize correlation matrix in Seaborn using a heatmap\nsns.heatmap(Cust.corr())",
"_____no_output_____"
]
],
[
[
"### Standardrizing data \n- To put data on the same scale ",
"_____no_output_____"
]
],
[
[
"sc=StandardScaler()",
"_____no_output_____"
],
[
"Cust_scaled=sc.fit_transform(Cust)",
"_____no_output_____"
],
[
"pd.DataFrame(Cust_scaled).shape",
"_____no_output_____"
]
],
[
[
"### Applyting PCA",
"_____no_output_____"
]
],
[
[
"pc = PCA(n_components=16)",
"_____no_output_____"
],
[
"pc.fit(Cust_scaled)",
"_____no_output_____"
],
[
"pc.explained_variance_",
"_____no_output_____"
],
[
"#Eigen values\nsum(pc.explained_variance_)",
"_____no_output_____"
],
[
"#The amount of variance that each PC explains\nvar= pc.explained_variance_ratio_",
"_____no_output_____"
],
[
"var",
"_____no_output_____"
],
[
"#Cumulative Variance explains\nvar1=np.cumsum(np.round(pc.explained_variance_ratio_, decimals=4)*100)",
"_____no_output_____"
],
[
"var1",
"_____no_output_____"
]
],
[
[
"number of components have choosen as 6 based on cumulative variacne is explaining >75 % and individual component explaining >0.8 variance\n",
"_____no_output_____"
]
],
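[
[
"A small sketch of how this rule of thumb could be checked programmatically, assuming `var1` holds the cumulative explained-variance percentages and `pc.explained_variance_` the eigenvalues computed above:\n\n```python\n# smallest number of components whose cumulative explained variance reaches 75%\nn_by_cum_var = int(np.argmax(var1 >= 75) + 1)\n\n# number of components whose individual eigenvalue is above 0.8\nn_by_eigenvalue = int(np.sum(pc.explained_variance_ > 0.8))\n\nprint(n_by_cum_var, n_by_eigenvalue)\n```",
"_____no_output_____"
]
],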
[
[
"pc_final=PCA(n_components=6).fit(Cust_scaled)",
"_____no_output_____"
],
[
"pc_final.explained_variance_",
"_____no_output_____"
],
[
"reduced_cr=pc_final.transform(Cust_scaled) ",
"_____no_output_____"
],
[
"dimensions = pd.DataFrame(reduced_cr)",
"_____no_output_____"
],
[
"dimensions",
"_____no_output_____"
],
[
"dimensions.columns = [\"C1\", \"C2\", \"C3\", \"C4\", \"C5\", \"C6\"]",
"_____no_output_____"
],
[
"dimensions.head()",
"_____no_output_____"
]
],
[
[
"#### Factor Loading Matrix\n\nLoadings=Eigenvectors * sqrt(Eigenvalues)\n\nloadings are the covariances/correlations between the original variables and the unit-scaled components.",
"_____no_output_____"
]
],
[
[
"Loadings = pd.DataFrame((pc_final.components_.T * np.sqrt(pc_final.explained_variance_)).T,columns=Cust.columns).T",
"_____no_output_____"
],
[
"Loadings.to_csv(\"Loadings.csv\")",
"_____no_output_____"
]
],
[
[
"### Clustering ",
"_____no_output_____"
]
],
[
[
"#selected the list variables from PCA based on factor loading matrics\nlist_var = ['PURCHASES_TRX','INSTALLMENTS_PURCHASES','PURCHASES_INSTALLMENTS_FREQUENCY','MINIMUM_PAYMENTS','BALANCE','CREDIT_LIMIT','CASH_ADVANCE','PRC_FULL_PAYMENT','ONEOFF_PURCHASES_FREQUENCY']",
"_____no_output_____"
],
[
"Cust_scaled1=pd.DataFrame(Cust_scaled, columns=Cust.columns)\nCust_scaled1.head(5)\n\nCust_scaled2=Cust_scaled1[list_var]\nCust_scaled2.head(5)",
"_____no_output_____"
]
],
[
[
"## Segmentation",
"_____no_output_____"
]
],
[
[
"km_3=KMeans(n_clusters=3,random_state=123)",
"_____no_output_____"
],
[
"km_3.fit(Cust_scaled2)",
"_____no_output_____"
],
[
"print(km_3.labels_)",
"[1 2 1 ... 1 1 1]\n"
],
[
"km_3.cluster_centers_",
"_____no_output_____"
],
[
"km_4=KMeans(n_clusters=4,random_state=123).fit(Cust_scaled2)\n#km_5.labels_a\n\nkm_5=KMeans(n_clusters=5,random_state=123).fit(Cust_scaled2)\n#km_5.labels_\n\nkm_6=KMeans(n_clusters=6,random_state=123).fit(Cust_scaled2)\n#km_6.labels_\n\nkm_7=KMeans(n_clusters=7,random_state=123).fit(Cust_scaled2)\n#km_7.labels_\n\nkm_8=KMeans(n_clusters=8,random_state=123).fit(Cust_scaled2)\n#km_5.labels_",
"_____no_output_____"
],
[
"metrics.silhouette_score(Cust_scaled2, km_3.labels_)",
"_____no_output_____"
],
[
"# 5 clusters are better",
"_____no_output_____"
],
[
"# Conactenating labels found through Kmeans with data \n\n# save the cluster labels and sort by cluster\nCust['cluster_3'] = km_3.labels_\nCust['cluster_4'] = km_4.labels_\nCust['cluster_5'] = km_5.labels_\nCust['cluster_6'] = km_6.labels_\nCust['cluster_7'] = km_7.labels_\nCust['cluster_8'] = km_8.labels_",
"_____no_output_____"
],
[
"Cust.head()",
"_____no_output_____"
]
],
[
[
"### Choosing number clusters using Silhouette Coefficient",
"_____no_output_____"
]
],
[
[
"# calculate SC for K=6\nfrom sklearn import metrics\nmetrics.silhouette_score(Cust_scaled2, km_3.labels_)",
"_____no_output_____"
],
[
"# calculate SC for K=3 through K=9\nk_range = range(3, 13)\nscores = []\nfor k in k_range:\n km = KMeans(n_clusters=k, random_state=123)\n km.fit(Cust_scaled2)\n scores.append(metrics.silhouette_score(Cust_scaled2, km.labels_))",
"_____no_output_____"
],
[
"scores",
"_____no_output_____"
],
[
"# plot the results\nplt.plot(k_range, scores)\nplt.xlabel('Number of clusters')\nplt.ylabel('Silhouette Coefficient')\nplt.grid(True)",
"_____no_output_____"
]
],
[
[
"### Segment Distribution",
"_____no_output_____"
]
],
[
[
"Cust.cluster_3.value_counts()*100/sum(Cust.cluster_3.value_counts())",
"_____no_output_____"
],
[
"pd.Series.sort_index(Cust.cluster_3.value_counts())",
"_____no_output_____"
]
],
[
[
"### Profiling",
"_____no_output_____"
]
],
[
[
"size=pd.concat([pd.Series(Cust.cluster_3.size), pd.Series.sort_index(Cust.cluster_3.value_counts()), pd.Series.sort_index(Cust.cluster_4.value_counts()),\n pd.Series.sort_index(Cust.cluster_5.value_counts()), pd.Series.sort_index(Cust.cluster_6.value_counts()),\n pd.Series.sort_index(Cust.cluster_7.value_counts()), pd.Series.sort_index(Cust.cluster_8.value_counts())])",
"_____no_output_____"
],
[
"size",
"_____no_output_____"
],
[
"Seg_size=pd.DataFrame(size, columns=['Seg_size'])\nSeg_Pct = pd.DataFrame(size/Cust.cluster_3.size, columns=['Seg_Pct'])\nSeg_size.T",
"_____no_output_____"
],
[
"Seg_Pct.T",
"_____no_output_____"
],
[
"pd.concat([Seg_size.T, Seg_Pct.T], axis=0)",
"_____no_output_____"
],
[
"Cust.head()",
"_____no_output_____"
],
[
"# Mean value gives a good indication of the distribution of data. So we are finding mean value for each variable for each cluster\nProfling_output = pd.concat([Cust.apply(lambda x: x.mean()).T, Cust.groupby('cluster_3').apply(lambda x: x.mean()).T, Cust.groupby('cluster_4').apply(lambda x: x.mean()).T,\n Cust.groupby('cluster_5').apply(lambda x: x.mean()).T, Cust.groupby('cluster_6').apply(lambda x: x.mean()).T,\n Cust.groupby('cluster_7').apply(lambda x: x.mean()).T, Cust.groupby('cluster_8').apply(lambda x: x.mean()).T], axis=1)",
"_____no_output_____"
],
[
"Profling_output",
"_____no_output_____"
],
[
"Profling_output_final=pd.concat([Seg_size.T, Seg_Pct.T, Profling_output], axis=0)",
"_____no_output_____"
],
[
"Profling_output_final",
"_____no_output_____"
],
[
"#Profling_output_final.columns = ['Seg_' + str(i) for i in Profling_output_final.columns]\nProfling_output_final.columns = ['Overall', 'KM3_1', 'KM3_2', 'KM3_3',\n 'KM4_1', 'KM4_2', 'KM4_3', 'KM4_4',\n 'KM5_1', 'KM5_2', 'KM5_3', 'KM5_4', 'KM5_5',\n 'KM6_1', 'KM6_2', 'KM6_3', 'KM6_4', 'KM6_5','KM6_6',\n 'KM7_1', 'KM7_2', 'KM7_3', 'KM7_4', 'KM7_5','KM7_6','KM7_7',\n 'KM8_1', 'KM8_2', 'KM8_3', 'KM8_4', 'KM8_5','KM8_6','KM8_7','KM8_8']",
"_____no_output_____"
],
[
"Profling_output_final",
"_____no_output_____"
],
[
"Profling_output_final.to_csv('Profiling_output.csv')",
"_____no_output_____"
]
],
[
[
"### Check profiling Output for more details.",
"_____no_output_____"
],
[
"Submitted By, Pranjal Saxena <a>https://www.linkedin.com/in/pranjalai/ </a> <br>\[email protected]",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d052f1b0fdb41c05dd7bf07c4eba4aafd39216b0 | 32,061 | ipynb | Jupyter Notebook | site/en-snapshot/guide/keras/writing_a_training_loop_from_scratch.ipynb | masa-ita/docs-l10n | b1c238524c0b4362e4d1c4a841ae998cd1776497 | [
"Apache-2.0"
] | null | null | null | site/en-snapshot/guide/keras/writing_a_training_loop_from_scratch.ipynb | masa-ita/docs-l10n | b1c238524c0b4362e4d1c4a841ae998cd1776497 | [
"Apache-2.0"
] | null | null | null | site/en-snapshot/guide/keras/writing_a_training_loop_from_scratch.ipynb | masa-ita/docs-l10n | b1c238524c0b4362e4d1c4a841ae998cd1776497 | [
"Apache-2.0"
] | null | null | null | 38.167857 | 258 | 0.527432 | [
[
[
"##### Copyright 2020 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Writing a training loop from scratch",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/keras-team/keras-io/blob/master/tf/writing_a_training_loop_from_scratch.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/keras-team/keras-io/blob/master/guides/writing_a_training_loop_from_scratch.py\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/keras-io/tf/writing_a_training_loop_from_scratch.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## Introduction\n\nKeras provides default training and evaluation loops, `fit()` and `evaluate()`.\nTheir usage is covered in the guide\n[Training & evaluation with the built-in methods](https://www.tensorflow.org/guide/keras/train_and_evaluate/).\n\nIf you want to customize the learning algorithm of your model while still leveraging\nthe convenience of `fit()`\n(for instance, to train a GAN using `fit()`), you can subclass the `Model` class and\nimplement your own `train_step()` method, which\nis called repeatedly during `fit()`. This is covered in the guide\n[Customizing what happens in `fit()`](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit/).\n\nNow, if you want very low-level control over training & evaluation, you should write\nyour own training & evaluation loops from scratch. This is what this guide is about.",
"_____no_output_____"
],
[
"## Using the `GradientTape`: a first end-to-end example\n\nCalling a model inside a `GradientTape` scope enables you to retrieve the gradients of\nthe trainable weights of the layer with respect to a loss value. Using an optimizer\ninstance, you can use these gradients to update these variables (which you can\nretrieve using `model.trainable_weights`).\n\nLet's consider a simple MNIST model:",
"_____no_output_____"
]
],
[
[
"inputs = keras.Input(shape=(784,), name=\"digits\")\nx1 = layers.Dense(64, activation=\"relu\")(inputs)\nx2 = layers.Dense(64, activation=\"relu\")(x1)\noutputs = layers.Dense(10, name=\"predictions\")(x2)\nmodel = keras.Model(inputs=inputs, outputs=outputs)",
"_____no_output_____"
]
],
[
[
"Let's train it using mini-batch gradient with a custom training loop.\n\nFirst, we're going to need an optimizer, a loss function, and a dataset:",
"_____no_output_____"
]
],
[
[
"# Instantiate an optimizer.\noptimizer = keras.optimizers.SGD(learning_rate=1e-3)\n# Instantiate a loss function.\nloss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\n# Prepare the training dataset.\nbatch_size = 64\n(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\nx_train = np.reshape(x_train, (-1, 784))\nx_test = np.reshape(x_test, (-1, 784))\ntrain_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\ntrain_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)",
"_____no_output_____"
]
],
[
[
"Here's our training loop:\n\n- We open a `for` loop that iterates over epochs\n- For each epoch, we open a `for` loop that iterates over the dataset, in batches\n- For each batch, we open a `GradientTape()` scope\n- Inside this scope, we call the model (forward pass) and compute the loss\n- Outside the scope, we retrieve the gradients of the weights\nof the model with regard to the loss\n- Finally, we use the optimizer to update the weights of the model based on the\ngradients",
"_____no_output_____"
]
],
[
[
"epochs = 2\nfor epoch in range(epochs):\n print(\"\\nStart of epoch %d\" % (epoch,))\n\n # Iterate over the batches of the dataset.\n for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n\n # Open a GradientTape to record the operations run\n # during the forward pass, which enables auto-differentiation.\n with tf.GradientTape() as tape:\n\n # Run the forward pass of the layer.\n # The operations that the layer applies\n # to its inputs are going to be recorded\n # on the GradientTape.\n logits = model(x_batch_train, training=True) # Logits for this minibatch\n\n # Compute the loss value for this minibatch.\n loss_value = loss_fn(y_batch_train, logits)\n\n # Use the gradient tape to automatically retrieve\n # the gradients of the trainable variables with respect to the loss.\n grads = tape.gradient(loss_value, model.trainable_weights)\n\n # Run one step of gradient descent by updating\n # the value of the variables to minimize the loss.\n optimizer.apply_gradients(zip(grads, model.trainable_weights))\n\n # Log every 200 batches.\n if step % 200 == 0:\n print(\n \"Training loss (for one batch) at step %d: %.4f\"\n % (step, float(loss_value))\n )\n print(\"Seen so far: %s samples\" % ((step + 1) * 64))",
"_____no_output_____"
]
],
[
[
"## Low-level handling of metrics\n\nLet's add metrics monitoring to this basic loop.\n\nYou can readily reuse the built-in metrics (or custom ones you wrote) in such training\nloops written from scratch. Here's the flow:\n\n- Instantiate the metric at the start of the loop\n- Call `metric.update_state()` after each batch\n- Call `metric.result()` when you need to display the current value of the metric\n- Call `metric.reset_states()` when you need to clear the state of the metric\n(typically at the end of an epoch)\n\nLet's use this knowledge to compute `SparseCategoricalAccuracy` on validation data at\nthe end of each epoch:",
"_____no_output_____"
]
],
[
[
"# Get model\ninputs = keras.Input(shape=(784,), name=\"digits\")\nx = layers.Dense(64, activation=\"relu\", name=\"dense_1\")(inputs)\nx = layers.Dense(64, activation=\"relu\", name=\"dense_2\")(x)\noutputs = layers.Dense(10, name=\"predictions\")(x)\nmodel = keras.Model(inputs=inputs, outputs=outputs)\n\n# Instantiate an optimizer to train the model.\noptimizer = keras.optimizers.SGD(learning_rate=1e-3)\n# Instantiate a loss function.\nloss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\n# Prepare the metrics.\ntrain_acc_metric = keras.metrics.SparseCategoricalAccuracy()\nval_acc_metric = keras.metrics.SparseCategoricalAccuracy()\n\n# Prepare the training dataset.\nbatch_size = 64\ntrain_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\ntrain_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)\n\n# Prepare the validation dataset.\n# Reserve 10,000 samples for validation.\nx_val = x_train[-10000:]\ny_val = y_train[-10000:]\nx_train = x_train[:-10000]\ny_train = y_train[:-10000]\nval_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))\nval_dataset = val_dataset.batch(64)",
"_____no_output_____"
]
],
[
[
"Here's our training & evaluation loop:",
"_____no_output_____"
]
],
[
[
"import time\n\nepochs = 2\nfor epoch in range(epochs):\n print(\"\\nStart of epoch %d\" % (epoch,))\n start_time = time.time()\n\n # Iterate over the batches of the dataset.\n for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n with tf.GradientTape() as tape:\n logits = model(x_batch_train, training=True)\n loss_value = loss_fn(y_batch_train, logits)\n grads = tape.gradient(loss_value, model.trainable_weights)\n optimizer.apply_gradients(zip(grads, model.trainable_weights))\n\n # Update training metric.\n train_acc_metric.update_state(y_batch_train, logits)\n\n # Log every 200 batches.\n if step % 200 == 0:\n print(\n \"Training loss (for one batch) at step %d: %.4f\"\n % (step, float(loss_value))\n )\n print(\"Seen so far: %d samples\" % ((step + 1) * 64))\n\n # Display metrics at the end of each epoch.\n train_acc = train_acc_metric.result()\n print(\"Training acc over epoch: %.4f\" % (float(train_acc),))\n\n # Reset training metrics at the end of each epoch\n train_acc_metric.reset_states()\n\n # Run a validation loop at the end of each epoch.\n for x_batch_val, y_batch_val in val_dataset:\n val_logits = model(x_batch_val, training=False)\n # Update val metrics\n val_acc_metric.update_state(y_batch_val, val_logits)\n val_acc = val_acc_metric.result()\n val_acc_metric.reset_states()\n print(\"Validation acc: %.4f\" % (float(val_acc),))\n print(\"Time taken: %.2fs\" % (time.time() - start_time))",
"_____no_output_____"
]
],
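[
[
"As an aside (added for illustration, not part of the original guide), the four-step metric flow described above can also be seen in isolation on a couple of hard-coded batches:",
"_____no_output_____"
]
],
[
[
"# Standalone sketch of the metric lifecycle (illustrative only).\nm = keras.metrics.SparseCategoricalAccuracy()\n# update_state() accumulates statistics batch by batch.\nm.update_state([0, 1, 2], [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]])\nm.update_state([1, 2], [[0.3, 0.6, 0.1], [0.2, 0.3, 0.5]])\n# result() returns the value accumulated so far.\nprint(\"accuracy so far:\", float(m.result()))\n# reset_states() clears the accumulator, e.g. at the end of an epoch.\nm.reset_states()",
"_____no_output_____"
]
],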
[
[
"## Speeding-up your training step with `tf.function`\n\nThe default runtime in TensorFlow 2.0 is\n[eager execution](https://www.tensorflow.org/guide/eager). As such, our training loop\nabove executes eagerly.\n\nThis is great for debugging, but graph compilation has a definite performance\nadvantage. Describing your computation as a static graph enables the framework\nto apply global performance optimizations. This is impossible when\nthe framework is constrained to greedly execute one operation after another,\nwith no knowledge of what comes next.\n\nYou can compile into a static graph any function that takes tensors as input.\nJust add a `@tf.function` decorator on it, like this:",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef train_step(x, y):\n with tf.GradientTape() as tape:\n logits = model(x, training=True)\n loss_value = loss_fn(y, logits)\n grads = tape.gradient(loss_value, model.trainable_weights)\n optimizer.apply_gradients(zip(grads, model.trainable_weights))\n train_acc_metric.update_state(y, logits)\n return loss_value\n",
"_____no_output_____"
]
],
[
[
"Let's do the same with the evaluation step:",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef test_step(x, y):\n val_logits = model(x, training=False)\n val_acc_metric.update_state(y, val_logits)\n",
"_____no_output_____"
]
],
[
[
"Now, let's re-run our training loop with this compiled training step:",
"_____no_output_____"
]
],
[
[
"import time\n\nepochs = 2\nfor epoch in range(epochs):\n print(\"\\nStart of epoch %d\" % (epoch,))\n start_time = time.time()\n\n # Iterate over the batches of the dataset.\n for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n loss_value = train_step(x_batch_train, y_batch_train)\n\n # Log every 200 batches.\n if step % 200 == 0:\n print(\n \"Training loss (for one batch) at step %d: %.4f\"\n % (step, float(loss_value))\n )\n print(\"Seen so far: %d samples\" % ((step + 1) * 64))\n\n # Display metrics at the end of each epoch.\n train_acc = train_acc_metric.result()\n print(\"Training acc over epoch: %.4f\" % (float(train_acc),))\n\n # Reset training metrics at the end of each epoch\n train_acc_metric.reset_states()\n\n # Run a validation loop at the end of each epoch.\n for x_batch_val, y_batch_val in val_dataset:\n test_step(x_batch_val, y_batch_val)\n\n val_acc = val_acc_metric.result()\n val_acc_metric.reset_states()\n print(\"Validation acc: %.4f\" % (float(val_acc),))\n print(\"Time taken: %.2fs\" % (time.time() - start_time))",
"_____no_output_____"
]
],
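[
[
"To make the speed-up concrete, here is a rough timing sketch (added for illustration; exact numbers depend on your hardware) that runs the same step eagerly and compiled on a synthetic batch:",
"_____no_output_____"
]
],
[
[
"# Rough timing sketch (illustrative only): eager vs. tf.function-compiled step.\nimport time\n\ndef eager_step(x, y):\n    with tf.GradientTape() as tape:\n        logits = model(x, training=True)\n        loss_value = loss_fn(y, logits)\n    grads = tape.gradient(loss_value, model.trainable_weights)\n    optimizer.apply_gradients(zip(grads, model.trainable_weights))\n    return loss_value\n\ncompiled_step = tf.function(eager_step)\n\nx_demo = tf.random.normal((64, 784))\ny_demo = tf.random.uniform((64,), maxval=10, dtype=tf.int64)\n\ncompiled_step(x_demo, y_demo)  # the first call traces the graph\n\nfor fn, name in [(eager_step, \"eager\"), (compiled_step, \"compiled\")]:\n    start = time.time()\n    for _ in range(100):\n        fn(x_demo, y_demo)\n    print(\"%s: %.3fs for 100 steps\" % (name, time.time() - start))",
"_____no_output_____"
]
],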
[
[
"Much faster, isn't it?",
"_____no_output_____"
],
[
"## Low-level handling of losses tracked by the model\n\nLayers & models recursively track any losses created during the forward pass\nby layers that call `self.add_loss(value)`. The resulting list of scalar loss\nvalues are available via the property `model.losses`\nat the end of the forward pass.\n\nIf you want to be using these loss components, you should sum them\nand add them to the main loss in your training step.\n\nConsider this layer, that creates an activity regularization loss:",
"_____no_output_____"
]
],
[
[
"class ActivityRegularizationLayer(layers.Layer):\n def call(self, inputs):\n self.add_loss(1e-2 * tf.reduce_sum(inputs))\n return inputs\n",
"_____no_output_____"
]
],
[
[
"Let's build a really simple model that uses it:",
"_____no_output_____"
]
],
[
[
"inputs = keras.Input(shape=(784,), name=\"digits\")\nx = layers.Dense(64, activation=\"relu\")(inputs)\n# Insert activity regularization as a layer\nx = ActivityRegularizationLayer()(x)\nx = layers.Dense(64, activation=\"relu\")(x)\noutputs = layers.Dense(10, name=\"predictions\")(x)\n\nmodel = keras.Model(inputs=inputs, outputs=outputs)",
"_____no_output_____"
]
],
[
[
"Here's what our training step should look like now:",
"_____no_output_____"
]
],
[
[
"@tf.function\ndef train_step(x, y):\n with tf.GradientTape() as tape:\n logits = model(x, training=True)\n loss_value = loss_fn(y, logits)\n # Add any extra losses created during the forward pass.\n loss_value += sum(model.losses)\n grads = tape.gradient(loss_value, model.trainable_weights)\n optimizer.apply_gradients(zip(grads, model.trainable_weights))\n train_acc_metric.update_state(y, logits)\n return loss_value\n",
"_____no_output_____"
]
],
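[
[
"As a quick check (added for illustration, not part of the original guide), you can run a single forward pass and inspect `model.losses` to see the value created by `ActivityRegularizationLayer`:",
"_____no_output_____"
]
],
[
[
"# Illustrative check: model.losses is populated after a forward pass.\nx_demo = tf.ones((2, 784))\n_ = model(x_demo, training=True)\nprint(model.losses)  # a list with one scalar tensor created by add_loss()",
"_____no_output_____"
]
],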
[
[
"## Summary\n\nNow you know everything there is to know about using built-in training loops and\nwriting your own from scratch.\n\nTo conclude, here's a simple end-to-end example that ties together everything\nyou've learned in this guide: a DCGAN trained on MNIST digits.",
"_____no_output_____"
],
[
"## End-to-end example: a GAN training loop from scratch\n\nYou may be familiar with Generative Adversarial Networks (GANs). GANs can generate new\nimages that look almost real, by learning the latent distribution of a training\ndataset of images (the \"latent space\" of the images).\n\nA GAN is made of two parts: a \"generator\" model that maps points in the latent\nspace to points in image space, a \"discriminator\" model, a classifier\nthat can tell the difference between real images (from the training dataset)\nand fake images (the output of the generator network).\n\nA GAN training loop looks like this:\n\n1) Train the discriminator.\n- Sample a batch of random points in the latent space.\n- Turn the points into fake images via the \"generator\" model.\n- Get a batch of real images and combine them with the generated images.\n- Train the \"discriminator\" model to classify generated vs. real images.\n\n2) Train the generator.\n- Sample random points in the latent space.\n- Turn the points into fake images via the \"generator\" network.\n- Get a batch of real images and combine them with the generated images.\n- Train the \"generator\" model to \"fool\" the discriminator and classify the fake images\nas real.\n\nFor a much more detailed overview of how GANs works, see\n[Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).\n\nLet's implement this training loop. First, create the discriminator meant to classify\nfake vs real digits:",
"_____no_output_____"
]
],
[
[
"discriminator = keras.Sequential(\n [\n keras.Input(shape=(28, 28, 1)),\n layers.Conv2D(64, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(128, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.GlobalMaxPooling2D(),\n layers.Dense(1),\n ],\n name=\"discriminator\",\n)\ndiscriminator.summary()",
"_____no_output_____"
]
],
[
[
"Then let's create a generator network,\nthat turns latent vectors into outputs of shape `(28, 28, 1)` (representing\nMNIST digits):",
"_____no_output_____"
]
],
[
[
"latent_dim = 128\n\ngenerator = keras.Sequential(\n [\n keras.Input(shape=(latent_dim,)),\n # We want to generate 128 coefficients to reshape into a 7x7x128 map\n layers.Dense(7 * 7 * 128),\n layers.LeakyReLU(alpha=0.2),\n layers.Reshape((7, 7, 128)),\n layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(1, (7, 7), padding=\"same\", activation=\"sigmoid\"),\n ],\n name=\"generator\",\n)",
"_____no_output_____"
]
],
[
[
"Here's the key bit: the training loop. As you can see it is quite straightforward. The\ntraining step function only takes 17 lines.",
"_____no_output_____"
]
],
[
[
"# Instantiate one optimizer for the discriminator and another for the generator.\nd_optimizer = keras.optimizers.Adam(learning_rate=0.0003)\ng_optimizer = keras.optimizers.Adam(learning_rate=0.0004)\n\n# Instantiate a loss function.\nloss_fn = keras.losses.BinaryCrossentropy(from_logits=True)\n\n\[email protected]\ndef train_step(real_images):\n # Sample random points in the latent space\n random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))\n # Decode them to fake images\n generated_images = generator(random_latent_vectors)\n # Combine them with real images\n combined_images = tf.concat([generated_images, real_images], axis=0)\n\n # Assemble labels discriminating real from fake images\n labels = tf.concat(\n [tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0\n )\n # Add random noise to the labels - important trick!\n labels += 0.05 * tf.random.uniform(labels.shape)\n\n # Train the discriminator\n with tf.GradientTape() as tape:\n predictions = discriminator(combined_images)\n d_loss = loss_fn(labels, predictions)\n grads = tape.gradient(d_loss, discriminator.trainable_weights)\n d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))\n\n # Sample random points in the latent space\n random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))\n # Assemble labels that say \"all real images\"\n misleading_labels = tf.zeros((batch_size, 1))\n\n # Train the generator (note that we should *not* update the weights\n # of the discriminator)!\n with tf.GradientTape() as tape:\n predictions = discriminator(generator(random_latent_vectors))\n g_loss = loss_fn(misleading_labels, predictions)\n grads = tape.gradient(g_loss, generator.trainable_weights)\n g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))\n return d_loss, g_loss, generated_images\n",
"_____no_output_____"
]
],
[
[
"Let's train our GAN, by repeatedly calling `train_step` on batches of images.\n\nSince our discriminator and generator are convnets, you're going to want to\nrun this code on a GPU.",
"_____no_output_____"
]
],
[
[
"import os\n\n# Prepare the dataset. We use both the training & test MNIST digits.\nbatch_size = 64\n(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()\nall_digits = np.concatenate([x_train, x_test])\nall_digits = all_digits.astype(\"float32\") / 255.0\nall_digits = np.reshape(all_digits, (-1, 28, 28, 1))\ndataset = tf.data.Dataset.from_tensor_slices(all_digits)\ndataset = dataset.shuffle(buffer_size=1024).batch(batch_size)\n\nepochs = 1 # In practice you need at least 20 epochs to generate nice digits.\nsave_dir = \"./\"\n\nfor epoch in range(epochs):\n print(\"\\nStart epoch\", epoch)\n\n for step, real_images in enumerate(dataset):\n # Train the discriminator & generator on one batch of real images.\n d_loss, g_loss, generated_images = train_step(real_images)\n\n # Logging.\n if step % 200 == 0:\n # Print metrics\n print(\"discriminator loss at step %d: %.2f\" % (step, d_loss))\n print(\"adversarial loss at step %d: %.2f\" % (step, g_loss))\n\n # Save one generated image\n img = tf.keras.preprocessing.image.array_to_img(\n generated_images[0] * 255.0, scale=False\n )\n img.save(os.path.join(save_dir, \"generated_img\" + str(step) + \".png\"))\n\n # To limit execution time we stop after 10 steps.\n # Remove the lines below to actually train the model!\n if step > 10:\n break",
"_____no_output_____"
]
],
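[
[
"Optionally (an added sketch, not part of the original guide), you can sample a few digits from the generator after training and save them, reusing the same `array_to_img` helper as above:",
"_____no_output_____"
]
],
[
[
"# Illustrative sketch: sample a few digits from the (briefly) trained generator.\nrandom_latent_vectors = tf.random.normal(shape=(16, latent_dim))\nfake_images = generator(random_latent_vectors)\nfor i in range(3):\n    img = tf.keras.preprocessing.image.array_to_img(fake_images[i] * 255.0, scale=False)\n    img.save(os.path.join(save_dir, \"sampled_digit_%d.png\" % i))",
"_____no_output_____"
]
],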
[
[
"That's it! You'll get nice-looking fake MNIST digits after just ~30s of training on the\nColab GPU.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d052f1f329a2c89102a649d1f2dd2f50aaaccf45 | 47,902 | ipynb | Jupyter Notebook | create_plot.ipynb | ysterin/awr | 25b79116b322200ee1e7920e89a48864a255e3bb | [
"MIT"
] | null | null | null | create_plot.ipynb | ysterin/awr | 25b79116b322200ee1e7920e89a48864a255e3bb | [
"MIT"
] | null | null | null | create_plot.ipynb | ysterin/awr | 25b79116b322200ee1e7920e89a48864a255e3bb | [
"MIT"
] | null | null | null | 598.775 | 46,164 | 0.950795 | [
[
[
"import numpy as np \nimport pandas as pd\nfrom pathlib import Path\nfrom matplotlib import pyplot as plt",
"_____no_output_____"
],
[
"log_files = [Path(f'lander5k/log.txt')] + [Path(f'lander{i}/log.txt') for i in range(2, 6)]\ndfs = [pd.read_csv(f, delim_whitespace=True) for f in log_files]",
"_____no_output_____"
],
[
"plt.figure(figsize=(12, 6))\nfor i in range(5):\n df = dfs[i]\n plt.plot(df.Samples, df.Test_Return)\nplt.xlabel('samples')\nplt.ylabel('Test Return')\nplt.title('LunarLander-v2')\nplt.savefig('figures/lunar_lander.svg')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d052f8225a00a40cd93293d1dd68bad297ffa8c4 | 8,658 | ipynb | Jupyter Notebook | SNIF/.ipynb_checkpoints/SNIF_Focos de calor_xlsx-checkpoint.ipynb | geanclm/LabHacker | 5cb5142846b05e32a1424cfc91974c44c1d7b13b | [
"MIT"
] | 1 | 2022-03-01T20:59:11.000Z | 2022-03-01T20:59:11.000Z | SNIF/.ipynb_checkpoints/SNIF_Focos de calor_xlsx-checkpoint.ipynb | geanclm/LabHacker | 5cb5142846b05e32a1424cfc91974c44c1d7b13b | [
"MIT"
] | null | null | null | SNIF/.ipynb_checkpoints/SNIF_Focos de calor_xlsx-checkpoint.ipynb | geanclm/LabHacker | 5cb5142846b05e32a1424cfc91974c44c1d7b13b | [
"MIT"
] | null | null | null | 25.023121 | 71 | 0.373181 | [
[
[
"# Title",
"_____no_output_____"
]
],
[
[
"# SERVIÇO FLORESTAL BRASILEIRO\n# Sistema Nacional de Informações Florestais\n# Incêndios Florestais",
"_____no_output_____"
]
],
[
[
"# Import libs",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"# Import data",
"_____no_output_____"
]
],
[
[
"# fonte: https://snif.florestal.gov.br/pt-br/incendios-florestais",
"_____no_output_____"
],
[
"df = pd.read_excel('focos_calor_1998_2019.xlsx')",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 325 entries, 0 to 324\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Ano 325 non-null object\n 1 Mês 325 non-null object\n 2 Número 325 non-null int64 \n 3 Período 325 non-null object\ndtypes: int64(1), object(3)\nmemory usage: 10.3+ KB\n"
],
[
"df",
"_____no_output_____"
],
[
"df[df['Número']==df['Número'].max()]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d052fab5455684662066c6567972bb488953fad1 | 5,276 | ipynb | Jupyter Notebook | data-science/scikit-learn/02/02 kNN-in-Scikit-Learn.ipynb | le3t/ko-repo | 50eb0b4cadb9db9bf608a9e5d36376f38ff5cce5 | [
"Apache-2.0"
] | 4 | 2019-10-26T01:25:30.000Z | 2020-01-12T08:10:25.000Z | data-science/scikit-learn/02/02 kNN-in-Scikit-Learn.ipynb | le3t/ko-repo | 50eb0b4cadb9db9bf608a9e5d36376f38ff5cce5 | [
"Apache-2.0"
] | 3 | 2019-08-26T13:41:57.000Z | 2019-08-26T13:44:21.000Z | data-science/scikit-learn/02/02 kNN-in-Scikit-Learn.ipynb | le3t/ko-repo | 50eb0b4cadb9db9bf608a9e5d36376f38ff5cce5 | [
"Apache-2.0"
] | 1 | 2018-12-07T10:06:42.000Z | 2018-12-07T10:06:42.000Z | 18.319444 | 84 | 0.466073 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\nraw_data_X = [[3.69733645, 2.96309765],\n [3.72261926, 1.86443185],\n [1.36520147, 3.37311737],\n [3.81704265, 4.53354867],\n [2.20880111, 2.87630253],\n [7.29672096, 4.42827336],\n [5.51750851, 3.9209554 ],\n [9.67833238, 2.4217944 ],\n [7.11041949, 3.07309462],\n [7.6705654 , 0.00522596]]\nraw_data_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]\n\nX_train = np.array(raw_data_X)\ny_train = np.array(raw_data_y)\n\nx = np.array([[8.023423523, 3.123353242]])",
"_____no_output_____"
],
[
"%run ../scripts/kNN.py",
"_____no_output_____"
],
[
"predict_y = kNN_classify(6, X_train, y_train, x)",
"_____no_output_____"
],
[
"predict_y",
"_____no_output_____"
]
],
[
[
"### 使用scikit_learn中的kNN",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import KNeighborsClassifier",
"_____no_output_____"
],
[
"kNN_classifier = KNeighborsClassifier(n_neighbors=6)",
"_____no_output_____"
],
[
"kNN_classifier.fit(X_train, y_train)",
"_____no_output_____"
],
[
"kNN_classifier.predict(x)",
"_____no_output_____"
],
[
"y_predict = kNN_classifier.predict(x)",
"_____no_output_____"
],
[
"y_predict[0]",
"_____no_output_____"
]
],
[
[
"### 重新整理我们的kNN的代码",
"_____no_output_____"
]
],
[
[
"%run ../kNN/kNN.py",
"_____no_output_____"
],
[
"knn_clf = KNNClassifier(k=6)",
"_____no_output_____"
],
[
"knn_clf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"y_predict = knn_clf.predict(x)",
"_____no_output_____"
],
[
"y_predict",
"_____no_output_____"
],
[
"y_predict[0]",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d052fb5ebe7df3d5071aff048653881496f1d987 | 11,199 | ipynb | Jupyter Notebook | C1/W3/assignment/C1W3_Assignment.ipynb | druvdub/Tensorflow-Specialization | 4bce7f5df10d797c8c60f50a822f511de70cd9ee | [
"Apache-2.0"
] | null | null | null | C1/W3/assignment/C1W3_Assignment.ipynb | druvdub/Tensorflow-Specialization | 4bce7f5df10d797c8c60f50a822f511de70cd9ee | [
"Apache-2.0"
] | null | null | null | C1/W3/assignment/C1W3_Assignment.ipynb | druvdub/Tensorflow-Specialization | 4bce7f5df10d797c8c60f50a822f511de70cd9ee | [
"Apache-2.0"
] | null | null | null | 33.035398 | 365 | 0.580766 | [
[
[
"# Week 3: Improve MNIST with Convolutions\n\nIn the videos you looked at how you would improve Fashion MNIST using Convolutions. For this exercise see if you can improve MNIST to 99.5% accuracy or more by adding only a single convolutional layer and a single MaxPooling 2D layer to the model from the assignment of the previous week. \n\nYou should stop training once the accuracy goes above this amount. It should happen in less than 10 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your callback.\n\nWhen 99.5% accuracy has been hit, you should print out the string \"Reached 99.5% accuracy so cancelling training!\"\n",
"_____no_output_____"
]
],
[
[
"import os\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras",
"_____no_output_____"
]
],
[
[
"Begin by loading the data. A couple of things to notice:\n\n- The file `mnist.npz` is already included in the current workspace under the `data` directory. By default the `load_data` from Keras accepts a path relative to `~/.keras/datasets` but in this case it is stored somewhere else, as a result of this, you need to specify the full path.\n\n- `load_data` returns the train and test sets in the form of the tuples `(x_train, y_train), (x_test, y_test)` but in this exercise you will be needing only the train set so you can ignore the second tuple.",
"_____no_output_____"
]
],
[
[
"# Load the data\n\n# Get current working directory\ncurrent_dir = os.getcwd() \n\n# Append data/mnist.npz to the previous path to get the full path\ndata_path = os.path.join(current_dir, \"data/mnist.npz\") \n\n# Get only training set\n(training_images, training_labels), _ = tf.keras.datasets.mnist.load_data(path=data_path) \n",
"_____no_output_____"
]
],
[
[
"One important step when dealing with image data is to preprocess the data. During the preprocess step you can apply transformations to the dataset that will be fed into your convolutional neural network.\n\nHere you will apply two transformations to the data:\n- Reshape the data so that it has an extra dimension. The reason for this \nis that commonly you will use 3-dimensional arrays (without counting the batch dimension) to represent image data. The third dimension represents the color using RGB values. This data might be in black and white format so the third dimension doesn't really add any additional information for the classification process but it is a good practice regardless.\n\n\n- Normalize the pixel values so that these are values between 0 and 1. You can achieve this by dividing every value in the array by the maximum.\n\nRemember that these tensors are of type `numpy.ndarray` so you can use functions like [reshape](https://numpy.org/doc/stable/reference/generated/numpy.reshape.html) or [divide](https://numpy.org/doc/stable/reference/generated/numpy.divide.html) to complete the `reshape_and_normalize` function below:",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: reshape_and_normalize\n\ndef reshape_and_normalize(images):\n \n ### START CODE HERE\n\n # Reshape the images to add an extra dimension\n images = np.reshape(images, images.shape + (1,))\n \n # Normalize pixel values\n images = np.divide(images,255)\n \n ### END CODE HERE\n\n return images",
"_____no_output_____"
]
],
[
[
"Test your function with the next cell:",
"_____no_output_____"
]
],
[
[
"# Reload the images in case you run this cell multiple times\n(training_images, _), _ = tf.keras.datasets.mnist.load_data(path=data_path) \n\n# Apply your function\ntraining_images = reshape_and_normalize(training_images)\n\nprint(f\"Maximum pixel value after normalization: {np.max(training_images)}\\n\")\nprint(f\"Shape of training set after reshaping: {training_images.shape}\\n\")\nprint(f\"Shape of one image after reshaping: {training_images[0].shape}\")\n",
"Maximum pixel value after normalization: 1.0\n\nShape of training set after reshaping: (60000, 28, 28, 1)\n\nShape of one image after reshaping: (28, 28, 1)\n"
]
],
[
[
"**Expected Output:**\n```\nMaximum pixel value after normalization: 1.0\n\nShape of training set after reshaping: (60000, 28, 28, 1)\n\nShape of one image after reshaping: (28, 28, 1)\n```",
"_____no_output_____"
],
[
"Now complete the callback that will ensure that training will stop after an accuracy of 99.5% is reached:",
"_____no_output_____"
]
],
[
[
"# GRADED CLASS: myCallback\n### START CODE HERE\n\n# Remember to inherit from the correct class\nclass myCallback(tf.keras.callbacks.Callback):\n # Define the method that checks the accuracy at the end of each epoch\n def on_epoch_end(self, epoch, logs={}):\n # check accuracy\n if logs.get('accuracy') >= 0.995:\n print('\\nReached 99.5% accuracy so cancelling training!')\n self.model.stop_training = True\n\n### END CODE HERE\n\n\n",
"_____no_output_____"
]
],
[
[
"Finally, complete the `convolutional_model` function below. This function should return your convolutional neural network:",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: convolutional_model\ndef convolutional_model():\n ### START CODE HERE\n\n # Define the model, it should have 5 layers:\n # - A Conv2D layer with 32 filters, a kernel_size of 3x3, ReLU activation function\n # and an input shape that matches that of every image in the training set\n # - A MaxPooling2D layer with a pool_size of 2x2\n # - A Flatten layer with no arguments\n # - A Dense layer with 128 units and ReLU activation function\n # - A Dense layer with 10 units and softmax activation function\n model = tf.keras.models.Sequential([ \n tf.keras.layers.Conv2D(32, (3,3), activation = 'relu', input_shape = (28, 28, 1)),\n tf.keras.layers.MaxPooling2D(2,2),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation = 'relu'),\n tf.keras.layers.Dense(10, activation = 'softmax')\n ]) \n\n ### END CODE HERE\n\n # Compile the model\n model.compile(optimizer='adam', \n loss='sparse_categorical_crossentropy', \n metrics=['accuracy']) \n \n return model",
"_____no_output_____"
],
[
"# Save your untrained model\nmodel = convolutional_model()\n\n# Instantiate the callback class\ncallbacks = myCallback()\n\n# Train your model (this can take up to 5 minutes)\nhistory = model.fit(training_images, training_labels, epochs=10, callbacks=[callbacks])",
"Epoch 1/10\n1875/1875 [==============================] - 36s 19ms/step - loss: 0.1522 - accuracy: 0.9548\nEpoch 2/10\n1875/1875 [==============================] - 35s 19ms/step - loss: 0.0529 - accuracy: 0.9840\nEpoch 3/10\n1875/1875 [==============================] - 35s 19ms/step - loss: 0.0327 - accuracy: 0.9897\nEpoch 4/10\n1875/1875 [==============================] - 34s 18ms/step - loss: 0.0212 - accuracy: 0.9935\nEpoch 5/10\n1872/1875 [============================>.] - ETA: 0s - loss: 0.0150 - accuracy: 0.9952\nReached 99.5% accuracy so cancelling training!\n1875/1875 [==============================] - 34s 18ms/step - loss: 0.0150 - accuracy: 0.9952\n"
]
],
[
[
"If you see the message that you defined in your callback printed out after less than 10 epochs it means your callback worked as expected. You can also double check by running the following cell:",
"_____no_output_____"
]
],
[
[
"print(f\"Your model was trained for {len(history.epoch)} epochs\")",
"Your model was trained for 5 epochs\n"
]
],
[
[
"**Congratulations on finishing this week's assignment!**\n\nYou have successfully implemented a CNN to assist you in the image classification task. Nice job!\n\n**Keep it up!**",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d05307a18330b813c5949c30bb3652d8eae86979 | 6,465 | ipynb | Jupyter Notebook | ML model/model-deploy.ipynb | darshkaushik/cough-it | 683ca49785db794417f1ddcad67fba43ccebcbc2 | [
"MIT"
] | 4 | 2021-11-17T13:08:21.000Z | 2021-11-19T01:47:05.000Z | ML model/model-deploy.ipynb | darshkaushik/cough-it | 683ca49785db794417f1ddcad67fba43ccebcbc2 | [
"MIT"
] | null | null | null | ML model/model-deploy.ipynb | darshkaushik/cough-it | 683ca49785db794417f1ddcad67fba43ccebcbc2 | [
"MIT"
] | null | null | null | 25.756972 | 199 | 0.580356 | [
[
[
"## Data and Training\n\nThe **augmented** cough audio dataset of the [Project Coswara](https://coswara.iisc.ac.in/about) was used to train the deep CNN model.\n\nThe preprocessing steps and CNN architecture is as shown below. The training code is concealed on Github to protect the exact hyperparameters and maintain performance integrity of the model.\n<img src = \"../assets/ml-pipeline.png\" alt=\"../assets/ml-pipeline.png\" width=\"800\"/>\n",
"_____no_output_____"
],
[
"## Model Deployment on IBM Watson Machine Learning\n\nBelow are the contents of an IBM Watson Studio Notebook for deploying our trained ML model IBM Watson Machine Learning.\n\nOutputs, Keys, Endpoints and URLs are removed (replaced with <>) to maintain privacy.",
"_____no_output_____"
],
[
"### Import model",
"_____no_output_____"
]
],
[
[
"import ibm_boto3\nfrom ibm_botocore.client import Config",
"_____no_output_____"
],
[
"\n# @hidden_cell\n# The following code contains the credentials for a file in your IBM Cloud Object Storage.\n# You might want to remove those credentials before you share your notebook.\ncredentials_2 = {\n 'IAM_SERVICE_ID': <>,\n 'IBM_API_KEY_ID': <>,\n 'ENDPOINT': <>,\n 'IBM_AUTH_ENDPOINT': <>,\n 'BUCKET': <>,\n 'FILE': 'cough-it-model.tgz'\n}\n",
"_____no_output_____"
],
[
"cos = ibm_boto3.client(service_name='s3',\n ibm_api_key_id=credentials_2['IBM_API_KEY_ID'], \n ibm_auth_endpoint=credentials_2['IBM_AUTH_ENDPOINT'],\n ibm_service_instance_id=credentials_2['IAM_SERVICE_ID'],\n config=Config(signature_version='oauth'),\n endpoint_url=credentials_2['ENDPOINT'])\n\ncos.download_file(Bucket=credentials_2['BUCKET'], Key='cough-it-model.h5.tgz', Filename='cough-it-model.h5.tgz')\nmodel_path = 'cough-it-model.h5.tgz'\n",
"_____no_output_____"
]
],
[
[
"### Set up Watson Machine Learning Client and Deployment space",
"_____no_output_____"
]
],
[
[
"from ibm_watson_machine_learning import APIClient\n\nwml_credentials = {\n \"apikey\" : <>,\n \"url\" : <>\n}\n\nclient = APIClient( wml_credentials )",
"_____no_output_____"
],
[
"space_guid = <>\nclient.set.default_space(space_guid)",
"_____no_output_____"
]
],
[
[
"### Store the model",
"_____no_output_____"
]
],
[
[
"sofware_spec_uid = client.software_specifications.get_id_by_name(\"default_py3.8\")\n\nmetadata = {\n client.repository.ModelMetaNames.NAME: \"cough-it model\",\n client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sofware_spec_uid,\n client.repository.ModelMetaNames.TYPE: \"tensorflow_2.4\"\n}\npublished_model = client.repository.store_model( model= model_path, meta_props=metadata )",
"_____no_output_____"
],
[
"import json\n\npublished_model_uid = client.repository.get_model_uid(published_model)\nmodel_details = client.repository.get_details(published_model_uid)\nprint(json.dumps(model_details, indent=2))",
"_____no_output_____"
]
],
[
[
"### Create a deployment",
"_____no_output_____"
]
],
[
[
"dep_metadata = {\n client.deployments.ConfigurationMetaNames.NAME: \"Deployment of external Keras model\",\n client.deployments.ConfigurationMetaNames.ONLINE: {}\n}\n\ncreated_deployment = client.deployments.create(published_model_uid, meta_props=dep_metadata)\n",
"_____no_output_____"
],
[
"deployment_uid = client.deployments.get_uid(created_deployment)\nclient.deployments.get_details(deployment_uid)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0531a925f611ab2e2e2c6ae3e91d981c46cc858 | 6,844 | ipynb | Jupyter Notebook | ParallelParking.ipynb | ianleongg/Joy-Ride-Parallel-Parking | d0dc939bd21bc4c366960240a340387ff18804bc | [
"MIT"
] | null | null | null | ParallelParking.ipynb | ianleongg/Joy-Ride-Parallel-Parking | d0dc939bd21bc4c366960240a340387ff18804bc | [
"MIT"
] | null | null | null | ParallelParking.ipynb | ianleongg/Joy-Ride-Parallel-Parking | d0dc939bd21bc4c366960240a340387ff18804bc | [
"MIT"
] | null | null | null | 34.39196 | 186 | 0.56502 | [
[
[
"# Joy Ride - Part 3: Parallel Parking\nIn this section you will write a function that implements the correct sequence of steps required to parallel park a vehicle.\n\nNOTE: for this segment the vehicle's maximum speed has been set to just over 4 mph. This should make parking a little easier.\n\n",
"_____no_output_____"
],
[
"If you have never heard of WASD keys, please check out this [link](https://en.wikipedia.org/wiki/Arrow_keys#WASD_keys).\n\n## Instructions to get started\n\n1. Run the `SETUP CELL` below this one by pressing `Ctrl + Enter`. \n1. Click the button that says \"Load Car Simulator\". The simulator will appear below the button.\n1. Run the cell below the simulator, marked `CODE CELL` (hit `Ctrl + Enter`). \n1. Try to drive the car using WASD keys. You might notice a problem...\n1. Press the **Reset** button in the simulator and then modify the code in the `CODE CELL` as per the instructions in TODO comments. \n1. When you think you've fixed the problem, run the code cell again. \n\n**NOTE** - Depending on your computer, it may take a few minutes for the simulator to load! Please be patient.\n\n### Instructions to Reload the Simulator\nOnce the simulator is loaded, the `SETUP CELL` cannot be rerun, or it will prevent the simulator from appearing. If something happens to the simulator, you can do the following:\n- Go to Jupyter’s menu: Kernel --> Restart and Clear Output\n- Reload the page (Cmd-R)\n- Run the first cell again\n- Click the Green `Load Car Simulator` button again ",
"_____no_output_____"
]
],
[
[
"# SETUP CELL\n\n%%HTML\n<link rel=\"stylesheet\" type=\"text/css\" href=\"buttonStyle.css\">\n<button id=\"launcher\">Load Car Simulator </button>\n<button id=\"restart\">Restart Connection</button>\n<script src=\"setupLauncher.js\"></script><div id=\"simulator_frame\"></sim>\n<script src=\"kernelRestart.js\"></script>",
"_____no_output_____"
],
[
"# CODE CELL\n\n# Before/After running any code changes make sure to click the button \"Restart Connection\" above first.\n# Also make sure to click Reset in the simulator to refresh the connection.\n# You need to wait for the Kernel Ready message.\n\n\ncar_parameters = {\"throttle\": 0, \"steer\": 0, \"brake\": 0}\n\ndef control(pos_x, pos_y, time, velocity):\n \"\"\" Controls the simulated car\"\"\"\n global car_parameters\n \n # The car will back up with a steering of 25 for 3 seconds\n # then the car will back up with a steering of -25 until its y position is less than 32.5\n # then the car will steer straight and brake \n \n \n if time < 3:\n car_parameters['throttle'] = -1\n car_parameters['steer'] = 25\n elif pos_y > 32.5:\n car_parameters['throttle'] = -1\n car_parameters['steer'] = -25\n else:\n car_parameters['steer'] = 0\n car_parameters['brake'] = 1\n \n return car_parameters\n \nimport src.simulate as sim\nsim.run(control)\n",
"running\nCONNECTED\n('172.18.0.1', 50088) connected\n"
]
],
[
[
"# Submitting this Project!\nYour parallel park function is \"correct\" when:\n\n1. Your car doesn't hit any other cars.\n2. Your car stops completely inside of the right lane.\n\nOnce you've got it working, it's time to submit. Submit by pressing the `SUBMIT` button at the lower right corner of this page.",
"_____no_output_____"
]
],
[
[
"# CODE CELL\n\n# Before/After running any code changes make sure to click the button \"Restart Connection\" above first.\n# Also make sure to click Reset in the simulator to refresh the connection.\n# You need to wait for the Kernel Ready message.\n\n\ncar_parameters = {\"throttle\": 0, \"steer\": 0, \"brake\": 0}\n\ndef control(pos_x, pos_y, time, velocity):\n \"\"\" Controls the simulated car\"\"\"\n global car_parameters\n \n # The car will back up with a steering of 25 for 3 seconds\n # then the car will back up with a steering of -25 until its y position is less than 32.5\n # then the car will steer straight and brake \n \n \n if time < 3:\n car_parameters['throttle'] = -1\n car_parameters['steer'] = 25\n elif pos_y > 32.5:\n car_parameters['throttle'] = -1\n car_parameters['steer'] = -25\n else:\n car_parameters['steer'] = 0\n car_parameters['brake'] = 1\n \n return car_parameters\n \nimport src.simulate as sim\nsim.run(control)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0531d688f5740302f22f7f6c5082472484f3ed2 | 31,244 | ipynb | Jupyter Notebook | OOP/Practice Sessions/Python_S4_Basics_Of_NumPy_Arrays.ipynb | siddhantdixit/OOP-ClassWork | ce414a3836d03aa7dee0eb1d7a69e849fb6707c0 | [
"MIT"
] | null | null | null | OOP/Practice Sessions/Python_S4_Basics_Of_NumPy_Arrays.ipynb | siddhantdixit/OOP-ClassWork | ce414a3836d03aa7dee0eb1d7a69e849fb6707c0 | [
"MIT"
] | null | null | null | OOP/Practice Sessions/Python_S4_Basics_Of_NumPy_Arrays.ipynb | siddhantdixit/OOP-ClassWork | ce414a3836d03aa7dee0eb1d7a69e849fb6707c0 | [
"MIT"
] | null | null | null | 24.856006 | 225 | 0.455447 | [
[
[
"# The Basics of NumPy Arrays",
"_____no_output_____"
],
[
"<!--NAVIGATION-->\n### **Python- Numpy Practice Session-S4 : Save a Copy in local drive and Work**",
"_____no_output_____"
],
[
"Data manipulation in Python is nearly synonymous with NumPy array manipulation: even newer tools like Pandas ([Chapter 3](03.00-Introduction-to-Pandas.ipynb)) are built around the NumPy array.\nThis section will present several examples of using NumPy array manipulation to access data and subarrays, and to split, reshape, and join the arrays.\nWhile the types of operations shown here may seem a bit dry and pedantic, they comprise the building blocks of many other examples used throughout the book.\nGet to know them well!\n\nWe'll cover a few categories of basic array manipulations here:\n\n- *Attributes of arrays*: Determining the size, shape, memory consumption, and data types of arrays\n- *Indexing of arrays*: Getting and setting the value of individual array elements\n- *Slicing of arrays*: Getting and setting smaller subarrays within a larger array\n- *Reshaping of arrays*: Changing the shape of a given array\n- *Joining and splitting of arrays*: Combining multiple arrays into one, and splitting one array into many",
"_____no_output_____"
],
[
"## NumPy Array Attributes",
"_____no_output_____"
],
[
"First let's discuss some useful array attributes.\nWe'll start by defining three random arrays, a one-dimensional, two-dimensional, and three-dimensional array.\nWe'll use NumPy's random number generator, which we will *seed* with a set value in order to ensure that the same random arrays are generated each time this code is run:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nnp.random.seed(0) # seed for reproducibility\n\nx1 = np.random.randint(10, size=6) # One-dimensional array\nx2 = np.random.randint(10, size=(3, 4)) # Two-dimensional array\nx3 = np.random.randint(10, size=(3, 4, 5)) # Three-dimensional array",
"_____no_output_____"
]
],
[
[
"Each array has attributes ``ndim`` (the number of dimensions), ``shape`` (the size of each dimension), and ``size`` (the total size of the array):",
"_____no_output_____"
]
],
[
[
"print(\"x3 ndim: \", x3.ndim)\nprint(\"x3 shape:\", x3.shape)\nprint(\"x3 size: \", x3.size)",
"_____no_output_____"
]
],
[
[
"Another useful attribute is the ``dtype``, the data type of the array (which we discussed previously in [Understanding Data Types in Python](02.01-Understanding-Data-Types.ipynb)):",
"_____no_output_____"
]
],
[
[
"print(\"dtype:\", x3.dtype)",
"_____no_output_____"
]
],
[
[
"Other attributes include ``itemsize``, which lists the size (in bytes) of each array element, and ``nbytes``, which lists the total size (in bytes) of the array:",
"_____no_output_____"
]
],
[
[
"print(\"itemsize:\", x3.itemsize, \"bytes\")\nprint(\"nbytes:\", x3.nbytes, \"bytes\")",
"_____no_output_____"
]
],
[
[
"In general, we expect that ``nbytes`` is equal to ``itemsize`` times ``size``.",
"_____no_output_____"
],
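[
"A quick check of that relationship (added for illustration):",
"_____no_output_____"
],
[
"print(x3.nbytes == x3.itemsize * x3.size)  # expected: True",
"_____no_output_____"
],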
[
"## Array Indexing: Accessing Single Elements",
"_____no_output_____"
],
[
"If you are familiar with Python's standard list indexing, indexing in NumPy will feel quite familiar.\nIn a one-dimensional array, the $i^{th}$ value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists:",
"_____no_output_____"
]
],
[
[
"x1",
"_____no_output_____"
],
[
"x1[0]",
"_____no_output_____"
],
[
"x1[4]",
"_____no_output_____"
]
],
[
[
"To index from the end of the array, you can use negative indices:",
"_____no_output_____"
]
],
[
[
"x1[-1]",
"_____no_output_____"
],
[
"x1[-2]",
"_____no_output_____"
]
],
[
[
"In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices:",
"_____no_output_____"
]
],
[
[
"x2",
"_____no_output_____"
],
[
"x2[0, 0]",
"_____no_output_____"
],
[
"x2[2, 0]",
"_____no_output_____"
],
[
"x2[2, -1]",
"_____no_output_____"
]
],
[
[
"Values can also be modified using any of the above index notation:",
"_____no_output_____"
]
],
[
[
"x2[0, 0] = 12\nx2",
"_____no_output_____"
]
],
[
[
"Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.\nThis means, for example, that if you attempt to insert a floating-point value to an integer array, the value will be silently truncated. Don't be caught unaware by this behavior!",
"_____no_output_____"
]
],
[
[
"x1[0] = 3.14159 # this will be truncated!\nx1",
"_____no_output_____"
]
],
[
[
"## Array Slicing: Accessing Subarrays",
"_____no_output_____"
],
[
"Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the *slice* notation, marked by the colon (``:``) character.\nThe NumPy slicing syntax follows that of the standard Python list; to access a slice of an array ``x``, use this:\n``` python\nx[start:stop:step]\n```\nIf any of these are unspecified, they default to the values ``start=0``, ``stop=``*``size of dimension``*, ``step=1``.\nWe'll take a look at accessing sub-arrays in one dimension and in multiple dimensions.",
"_____no_output_____"
],
[
"### One-dimensional subarrays",
"_____no_output_____"
]
],
[
[
"x = np.arange(10)\nx",
"_____no_output_____"
],
[
"x[:5] # first five elements",
"_____no_output_____"
],
[
"x[5:] # elements after index 5",
"_____no_output_____"
],
[
"x[4:7] # middle sub-array",
"_____no_output_____"
],
[
"x[::2] # every other element",
"_____no_output_____"
],
[
"x[1::2] # every other element, starting at index 1",
"_____no_output_____"
]
],
[
[
"A potentially confusing case is when the ``step`` value is negative.\nIn this case, the defaults for ``start`` and ``stop`` are swapped.\nThis becomes a convenient way to reverse an array:",
"_____no_output_____"
]
],
[
[
"x[::-1] # all elements, reversed",
"_____no_output_____"
],
[
"x[5::-2] # reversed every other from index 5",
"_____no_output_____"
]
],
[
[
"### Multi-dimensional subarrays\n\nMulti-dimensional slices work in the same way, with multiple slices separated by commas.\nFor example:",
"_____no_output_____"
]
],
[
[
"x2",
"_____no_output_____"
],
[
"x2[:2, :3] # two rows, three columns",
"_____no_output_____"
],
[
"x2[:3, ::2] # all rows, every other column",
"_____no_output_____"
]
],
[
[
"Finally, subarray dimensions can even be reversed together:",
"_____no_output_____"
]
],
[
[
"x2[::-1, ::-1]",
"_____no_output_____"
]
],
[
[
"#### Accessing array rows and columns\n\nOne commonly needed routine is accessing of single rows or columns of an array.\nThis can be done by combining indexing and slicing, using an empty slice marked by a single colon (``:``):",
"_____no_output_____"
]
],
[
[
"print(x2[:, 0]) # first column of x2",
"_____no_output_____"
],
[
"print(x2[0, :]) # first row of x2",
"_____no_output_____"
]
],
[
[
"In the case of row access, the empty slice can be omitted for a more compact syntax:",
"_____no_output_____"
]
],
[
[
"print(x2[0]) # equivalent to x2[0, :]",
"_____no_output_____"
]
],
[
[
"### Subarrays as no-copy views\n\nOne important–and extremely useful–thing to know about array slices is that they return *views* rather than *copies* of the array data.\nThis is one area in which NumPy array slicing differs from Python list slicing: in lists, slices will be copies.\nConsider our two-dimensional array from before:",
"_____no_output_____"
]
],
[
[
"print(x2)",
"_____no_output_____"
]
],
[
[
"Let's extract a $2 \\times 2$ subarray from this:",
"_____no_output_____"
]
],
[
[
"x2_sub = x2[:2, :2]\nprint(x2_sub)",
"_____no_output_____"
]
],
[
[
"Now if we modify this subarray, we'll see that the original array is changed! Observe:",
"_____no_output_____"
]
],
[
[
"x2_sub[0, 0] = 99\nprint(x2_sub)",
"_____no_output_____"
],
[
"print(x2)",
"_____no_output_____"
]
],
[
[
"This default behavior is actually quite useful: it means that when we work with large datasets, we can access and process pieces of these datasets without the need to copy the underlying data buffer.",
"_____no_output_____"
],
[
"### Creating copies of arrays\n\nDespite the nice features of array views, it is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the ``copy()`` method:",
"_____no_output_____"
]
],
[
[
"x2_sub_copy = x2[:2, :2].copy()\nprint(x2_sub_copy)",
"[[3 5]\n [7 6]]\n"
]
],
[
[
"If we now modify this subarray, the original array is not touched:",
"_____no_output_____"
]
],
[
[
"x2_sub_copy[0, 0] = 42\nprint(x2_sub_copy)",
"_____no_output_____"
],
[
"print(x2)",
"_____no_output_____"
]
],
[
[
"## Reshaping of Arrays\n\nAnother useful type of operation is reshaping of arrays.\nThe most flexible way of doing this is with the ``reshape`` method.\nFor example, if you want to put the numbers 1 through 9 in a $3 \\times 3$ grid, you can do the following:",
"_____no_output_____"
]
],
[
[
"grid = np.arange(1, 10).reshape((3, 3))\nprint(grid)",
"_____no_output_____"
]
],
[
[
"Note that for this to work, the size of the initial array must match the size of the reshaped array. \nWhere possible, the ``reshape`` method will use a no-copy view of the initial array, but with non-contiguous memory buffers this is not always the case.\n\nAnother common reshaping pattern is the conversion of a one-dimensional array into a two-dimensional row or column matrix.\nThis can be done with the ``reshape`` method, or more easily done by making use of the ``newaxis`` keyword within a slice operation:",
"_____no_output_____"
]
],
[
[
"x = np.array([1, 2, 3])\n\n# row vector via reshape\nx.reshape((1, 3))",
"_____no_output_____"
],
[
"# row vector via newaxis\nx[np.newaxis, :]",
"_____no_output_____"
],
[
"# column vector via reshape\nx.reshape((3, 1))",
"_____no_output_____"
],
[
"# column vector via newaxis\nx[:, np.newaxis]",
"_____no_output_____"
]
],
[
[
"We will see this type of transformation often throughout the remainder of the book.",
"_____no_output_____"
],
[
"## Array Concatenation and Splitting\n\nAll of the preceding routines worked on single arrays. It's also possible to combine multiple arrays into one, and to conversely split a single array into multiple arrays. We'll take a look at those operations here.",
"_____no_output_____"
],
[
"### Concatenation of arrays\n\nConcatenation, or joining of two arrays in NumPy, is primarily accomplished using the routines ``np.concatenate``, ``np.vstack``, and ``np.hstack``.\n``np.concatenate`` takes a tuple or list of arrays as its first argument, as we can see here:",
"_____no_output_____"
]
],
[
[
"x = np.array([1, 2, 3])\ny = np.array([3, 2, 1])\nnp.concatenate([x, y])",
"_____no_output_____"
]
],
[
[
"You can also concatenate more than two arrays at once:",
"_____no_output_____"
]
],
[
[
"z = [99, 99, 99]\nprint(np.concatenate([x, y, z]))",
"_____no_output_____"
]
],
[
[
"It can also be used for two-dimensional arrays:",
"_____no_output_____"
]
],
[
[
"grid = np.array([[1, 2, 3],\n [4, 5, 6]])",
"_____no_output_____"
],
[
"# concatenate along the first axis\nnp.concatenate([grid, grid])",
"_____no_output_____"
],
[
"# concatenate along the second axis (zero-indexed)\nnp.concatenate([grid, grid], axis=1)",
"_____no_output_____"
]
],
[
[
"For working with arrays of mixed dimensions, it can be clearer to use the ``np.vstack`` (vertical stack) and ``np.hstack`` (horizontal stack) functions:",
"_____no_output_____"
]
],
[
[
"x = np.array([1, 2, 3])\ngrid = np.array([[9, 8, 7],\n [6, 5, 4]])\n\n# vertically stack the arrays\nnp.vstack([x, grid])",
"_____no_output_____"
],
[
"# horizontally stack the arrays\ny = np.array([[99],\n [99]])\nnp.hstack([grid, y])",
"_____no_output_____"
]
],
[
[
"Similary, ``np.dstack`` will stack arrays along the third axis.",
"_____no_output_____"
],
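[
"For example (added illustration):",
"_____no_output_____"
],
[
"# np.dstack stacks along the third axis (illustrative example).\na = np.array([[1, 2], [3, 4]])\nb = np.array([[5, 6], [7, 8]])\nnp.dstack([a, b]).shape  # (2, 2, 2)",
"_____no_output_____"
],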
[
"### Splitting of arrays\n\nThe opposite of concatenation is splitting, which is implemented by the functions ``np.split``, ``np.hsplit``, and ``np.vsplit``. For each of these, we can pass a list of indices giving the split points:",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 3, 99, 99, 3, 2, 1]\nx1, x2, x3 = np.split(x, [3, 5])\nprint(x1, x2, x3)",
"_____no_output_____"
]
],
[
[
"Notice that *N* split-points, leads to *N + 1* subarrays.\nThe related functions ``np.hsplit`` and ``np.vsplit`` are similar:",
"_____no_output_____"
]
],
[
[
"grid = np.arange(16).reshape((4, 4))\ngrid",
"_____no_output_____"
],
[
"upper, lower = np.vsplit(grid, [2])\nprint(upper)\nprint(lower)",
"_____no_output_____"
],
[
"left, right = np.hsplit(grid, [2])\nprint(left)\nprint(right)",
"_____no_output_____"
]
],
[
[
"Similarly, ``np.dsplit`` will split arrays along the third axis.",
"_____no_output_____"
],
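[
"For example (added illustration):",
"_____no_output_____"
],
[
"# np.dsplit splits along the third axis (illustrative example).\narr = np.arange(8).reshape((2, 2, 2))\nfront, back = np.dsplit(arr, [1])\nprint(front.shape, back.shape)  # (2, 2, 1) (2, 2, 1)",
"_____no_output_____"
],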
[
"\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d053271f25246894fd4f27f39a9aaed3016fc30e | 45,975 | ipynb | Jupyter Notebook | Week-06_Scraping-and-Parsing-XML.ipynb | pcda17/pcda | 3ba949d5eb90dd48a276cabc6ac38303aa2b6843 | [
"CC0-1.0"
] | 7 | 2017-09-20T15:31:41.000Z | 2020-10-10T03:55:10.000Z | Week-06_Scraping-and-Parsing-XML.ipynb | pcda17/pcda | 3ba949d5eb90dd48a276cabc6ac38303aa2b6843 | [
"CC0-1.0"
] | 1 | 2017-09-22T16:14:16.000Z | 2017-09-22T16:14:16.000Z | Week-06_Scraping-and-Parsing-XML.ipynb | pcda17/pcda | 3ba949d5eb90dd48a276cabc6ac38303aa2b6843 | [
"CC0-1.0"
] | 7 | 2017-09-22T15:14:18.000Z | 2021-11-26T04:31:19.000Z | 31.468172 | 933 | 0.514345 | [
[
[
"# Scraping and Parsing: EAD XML Finding Aids from the Library of Congress",
"_____no_output_____"
]
],
[
[
"import os\nfrom urllib.request import urlopen\nfrom bs4 import BeautifulSoup\nimport subprocess",
"_____no_output_____"
],
[
"## Creating a directory called 'LOC_Metadata' and setting it as our current working directory\n\n!mkdir /sharedfolder/LOC_Metadata\n\nos.chdir('/sharedfolder/LOC_Metadata')",
"_____no_output_____"
],
[
"## To make this notebook self-contained, we'll download a list of XML finding aid files the 'right' way.\n## (In practice I normally use the 'find-and-replace + grep + wget' approach we covered in class,\n## because it takes some extra effort to remind myself how to parse the HTML page via BeautifulSoup.)\n\n## We first load a page with links to finding aids in the 'recorded sound' collection.\n\nfinding_aid_list_url = 'http://findingaids.loc.gov/source/RS'\n\nfinding_aid_list_page = urlopen(finding_aid_list_url).read().decode('utf8') # Loading the page\n\nprint(finding_aid_list_page[:700]) # Printing the first 700 characters in the page we just loaded",
"<!DOCTYPE html>\n<html lang=\"en\" class=\"no-js\">\n \n <!--GENERATED HTML-->\n \n <head>\n <meta charset=\"utf-8\">\n <title>Library of Congress Finding Aids: XML Source Files, Recorded Sound</title>\n <meta name=\"keywords\" content=\"finding aids, registers, inventories, Encoded Archival Description, EAD, Library of Congress, special collections, archives, manuscripts, papers, music, visual materials, performing arts, motion pictures, television, search, HTML, XML, PDF, EAD 2002, subject, browse\">\n <meta name=\"description\" content=\"XML source files for Library finding aids using the Encoded Archival Description (EAD) XML scheme.\">\n <meta name=\"dcterms.type\" content=\"text\"\n"
],
[
"## Now we'll parse the page's HTML using BeautifulSoup ...\n\nsoup = BeautifulSoup(finding_aid_list_page, 'lxml')\n\n## ... and examine soup.find_all('a'), which returns a list of 'a' elements (i.e., HTML links).\n\nprint(len(soup.find_all('a'))) # Checking the number of links on the page\n\nprint() # Printing a blank line for readability\n\nprint(soup.find_all('a')[70]) # Printing element #70 in the list",
"190\n\n<a href=\"http://hdl.loc.gov/loc.mbrsrs/eadmbrs.rs009003.2\" target=\"_blank\" title=\"rs009003 XML\">[XML]</a>\n"
],
[
"## We can access the 'href' attribute of an element (i.e., the link URL) using 'href' in \n## brackets, just like a dictionary.\n\nsoup.find_all('a')[70]['href']",
"_____no_output_____"
],
[
"## Now let's make a list of every link on the page.\n\nall_links = []\n\nfor element in soup.find_all('a'): # Looping through all 'a' elements.\n try: # Because some 'a' elements do not contain 'href' attributes, \n all_links.append(element['href']) ## we can use a try/except statement to skip elements that \n except: ## would otherwise raise an error.\n pass\n\nall_links[:15] # Outputting the first 15 links in the list",
"_____no_output_____"
],
[
"## We know that the URL for every XML file we're looking for ends in '.2', so we can\n## use that fact to filter out irrelevant links.\n\nxml_urls = []\n\nfor link in all_links:\n if link[-2:] == '.2': # Checking whether the last two characters of a link are '.2'\n xml_urls.append(link)\n\nxml_urls # Outputting the full list of relevant XML URLs ",
"_____no_output_____"
],
[
"## Downloading each XML file in our list of URLs\n\n## We can use the subprocess module (which we imported above) to issue commands in the bash shell.\n## In an interactive bash shell session we'd use spaces to separate arguments; instead, subprocess\n## takes arguments in the form of a Python list.\n\n## For each item in our list, the following issues a command with two arguments: 'wget' followed by the URL.\n## It thus downloads each XML file to the current directory.\n\nfor url in xml_urls:\n subprocess.call(['wget', url])",
"_____no_output_____"
],
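[
"## If 'wget' isn't available in your environment, Python's built-in urllib can do the same job.\n## (This optional cell is an alternative to the wget loop above; running both will simply re-download the same files.)\n\nfrom urllib.request import urlretrieve\n\nfor url in xml_urls:\n    urlretrieve(url, url.split('/')[-1])   # Saving each file under the last segment of its URL",
"_____no_output_____"
],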
[
"## Outputting a list of filenames in the current directory\n\n## In Unix-like operating systems, './' always refers to the current directory.\n\nos.listdir('./')",
"_____no_output_____"
],
[
"## Just in case there are other files in the current directory, we can use a \n## list comprehension to create a list of filenames that end in '.2' and assign\n## it to the variable 'xml_filenames'.\n\nxml_filenames = [item for item in os.listdir('./') if item[-2:]=='.2']\n\nxml_filenames",
"_____no_output_____"
],
[
"## Now let's choose an arbitrary XML file in our collection so we can figure out how to parse it.\n\nxml_filename = xml_filenames[4] ## Selecting filename #4 in our list\n\nxml_text = open(xml_filename).read() ## Reading the file and assigning its content to the variable 'xml_text'\n\nprint(xml_text[:700]) ## Printing the first 700 characters in the XML text we just loaded",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- ========== Enhanced: ========== \n Transformed with schema2schema.06_mfer.xsl \nGenerated: Mon, 22 February 2016 12:57:26 PM EST\nURL: http://hdl.loc.gov/loc.mbrsrs/eadmbrs.rs004004\n ===================================== --><!--name=\"eadidNode\" select=\"normalize-space(//ead:eadid[1])\": http://hdl.loc.gov/loc.mbrsrs/eadmbrs.rs004004--><!--name=\"eadidIdentifier\" select=\"substring-after(//ead:eadid/@identifier, '/')\": eadmbrs.rs004004--><!--name=\"eadidId\" select=\"substring-after($eadidIdentifier, '.')\": rs004004-->\n\n<ead xmlns=\"urn:isbn:1-931666-22-9\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"urn:isbn:1-931666-22\n"
],
[
"## Parse the XML text from the previous cell using Beautiful Soup\n\nsoup = BeautifulSoup(xml_text, 'lxml')",
"_____no_output_____"
],
[
"## By looking at the XML text above, we can see that the 'ead' element is the root of our XML tree.\n## Let's use a for loop to look at the names of elements one next level down in the tree.\n\nfor element in soup.ead:\n print(element.name)",
"None\neadheader\nNone\narchdesc\nNone\n"
],
[
"## In practice you'd usually just look through the XML file by eye, identify the elements \n## you're looking for, and use soup.find_all('...') to extract them. For now, let's continue \n## working down the XML tree with BeautifulSoup.\n\n# You can find a glossary of EAD element names here:\n# https://loc.gov/ead/EAD3taglib/index.html",
"_____no_output_____"
],
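[
"## As a quick illustration of that find_all() shortcut, the cell below pulls the first few 'unittitle'\n## elements directly. (This is just a preview; we'll keep working down the tree step by step below.)\n\nfor title_element in soup.find_all('unittitle')[:3]:   # Looking at the first three 'unittitle' elements\n    print(title_element.get_text())",
"_____no_output_____"
],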
[
"## Since the 'eadheader' element is administrative metadata we don't care about, let's \n## repeat the process for 'soup.ead.archdesc' ('archdesc' is 'archival description' in EAD parlance).\n\nfor element in soup.ead.archdesc:\n if element.name != None: ## Filtering out 'None' elements, which in this case are irrelevant comments\n print(element.name)",
"did\ncontrolaccess\ndescgrp\nbioghist\nscopecontent\narrangement\notherfindaid\ndsc\n"
],
[
"## By looking at the XML file in a text editor, I notice the 'did' element ('descriptive identification')\n## contains the item-level information we're looking for. Let's run another for loop to look at the \n## names of elements contained within each 'did' element.\n\nfor element in soup.ead.archdesc.did:\n if element.name != None:\n print(element.name)\n\n## Note that 'soup.ead.archdesc.did' only refers to the first 'did' element in the XML document.",
"unittitle\norigination\nphysdesc\nlangmaterial\nrepository\nabstract\nphysloc\n"
],
[
"## OK, that's enough exploring. Let's use soup.find_all() to create a list of 'did' elements. \n\ndid_elements = soup.find_all('did')\n\nprint(len(did_elements)) ## Printing the number of 'did' elements in our list\n\nprint()\n\nprint(did_elements[4]) ## Printing item #4 in the the list",
"12\n\n<did>\n<container type=\"folder\">2</container>\n<unittitle>Script for the <title render=\"italic\" xlink:type=\"simple\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">Frank Sinatra\n\t\t\t\t\t\t\t\tShow</title>, <unitdate calendar=\"gregorian\" era=\"ce\">1944 April 26</unitdate>\n</unittitle>\n</did>\n"
],
[
"## Not every 'did' element contains the same fields; different objects are described differently.\n\n## Try running this cell several times, plugging in other index numbers to compare the way\n## different items' records are formatted.\n\nprint(did_elements[7])",
"<did>\n<container type=\"folder\">5</container>\n<unittitle>\n<title render=\"italic\" xlink:type=\"simple\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">Philip Morris Playhouse</title> script for\n\t\t\t\t\t\t\t\"Here Comes Mr. Jordan,\" <unitdate calendar=\"gregorian\" era=\"ce\">1944 February\n\t\t\t\t\t\t\t11</unitdate>\n</unittitle>\n</did>\n"
],
[
"## If you run the cell above several times with different index numbers, you'll notice that the \n## first item in the list (index 0) refers to the entire box of records, while the others are \n## individual folders or series of folders.\n\n## To make things more complicated, some items are physically described using 'container' elements \n## while others use 'extent' instead. Most appear to include 'unittitle' and 'unitdate'.\n\n## Our goal is to create a CSV that contains a basic description of each 'unit', or 'did' element,\n## in each XML finding aid. For the purposes of this exercise, let's include the following pieces \n## of information for each unit, where available:\n\n#### title of the source collection\n#### unittitle\n#### unitdate\n#### container type\n#### container number\n#### extent",
"_____no_output_____"
],
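[
"## To make that target structure concrete, here is the kind of row we're aiming to build for each unit.\n## (The values below are loosely based on the folder 2 record we printed earlier; they're just for illustration.)\n\nexample_row = ['Manfred F. DeMartino Collection of CBS Radio Scripts',  # collection title\n               'Script for the Frank Sinatra Show',                     # unittitle\n               '1944 April 26',                                         # unitdate\n               'folder',                                                # container type\n               '2',                                                     # container number\n               '']                                                      # extent (empty when not provided)\n\nexample_row",
"_____no_output_____"
],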
[
"## Since each XML finding aid represents a single collection, we'll want to include a column that \n## identifies which collection it comes from. By reading through the XML files, we see that each \n## has a single element called 'titleproper' that describes the whole collection.\n\n## Let's create a recipe to extract that text. Here's a first try:\n\ncollection_title = soup.find('titleproper').get_text()\n\ncollection_title",
"_____no_output_____"
],
[
"## That format is OK, but we should remove the tab and newline characters. Let's try again, using \n## the replace() function to replace them with spaces.\n\ncollection_title = soup.find('titleproper').get_text().replace('\\t', ' ').replace('\\n', ' ')\n\ncollection_title",
"_____no_output_____"
],
[
"## We can add the strip() function to remove the space at the end of the string.\n\ncollection_title = soup.find('titleproper').get_text().replace('\\t', ' ').replace('\\n', ' ').strip()\n\ncollection_title",
"_____no_output_____"
],
[
"## We still have a series of spaces in a row in the middle of the string. We can use a 'while loop' \n## to repeatedly replace any occurrence of ' ' (two spaces) with ' ' (one space).\n\ncollection_title = soup.find('titleproper').get_text().replace('\\t', ' ').replace('\\n', ' ').strip()\n\nwhile ' ' in collection_title:\n collection_title = collection_title.replace(' ', ' ')\n\ncollection_title",
"_____no_output_____"
],
[
"## Perfect. We'll extract the collection name whenever we open an XML finding aid and include it \n## in each CSV row associated with that collection.",
"_____no_output_____"
],
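[
"## Since we'll repeat that recipe for every finding aid, we could optionally wrap it in a small helper function.\n## (The cells below keep the logic inline so each step stays visible; this is just a convenience sketch.)\n\ndef get_collection_title(soup):\n    title = soup.find('titleproper').get_text().replace('\\t', ' ').replace('\\n', ' ').strip()\n    while '  ' in title:\n        title = title.replace('  ', ' ')\n    return title\n\nget_collection_title(soup)",
"_____no_output_____"
],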
[
"## Now on to 'unittitle'. Recall that we created a list of 'did' elements above, called 'did_elements'.\n\nelement = did_elements[4]\n\nunittitle = element.find('unittitle').get_text()\n\nunittitle",
"_____no_output_____"
],
[
"## Since those tabs and newlines are a recurring probem, we should define a function that \n## removes them from any given text string.\n\ndef clean_text(text):\n temp_text = text.replace('\\t', ' ').replace('\\n', ' ').strip()\n while ' ' in temp_text:\n temp_text = temp_text.replace(' ', ' ')\n return temp_text",
"_____no_output_____"
],
[
"# Let's test our clean_text() function.\n\nelement = did_elements[4]\n\nunittitle = element.find('unittitle').get_text()\n\nunittitle = clean_text(unittitle)\n\nunittitle",
"_____no_output_____"
],
[
"## Now let's try extracting the 'unittitle' field for each 'did' element in our list.\n\nfor element in did_elements:\n unittitle = element.get_text().replace('\\t', ' ').replace('\\n', ' ').strip()\n print(clean_text(unittitle))\n print('-----------------') # Printing a divider between elements",
"Collection Summary Manfred F. DeMartino Collection of CBS Radio Scripts 1943-1945 De Martino, Manfred F. .42 linear feet (1 box) Collection materials are in English Recorded Sound Reference Center, Motion Picture, Broadcasting and Recorded Sound Division Library of Congress Washington, D.C. Scripts and a photograph acquired by Manfred F. DeMartino while working backstage at CBS radio during the mid-1940s. Includes scripts for the Frank Sinatra Show, Philip Morris Playhouse, and Your Hit Parade. RPA 00189\n-----------------\nSeries 1. Photograph, undated 1 folder\n-----------------\n1 Autographed photograph of Philip Morris spokesman Johnny Roventini, undated\n-----------------\nSeries 2. Scripts, 1943-1945 8 folders\n-----------------\n2 Script for the Frank Sinatra Show, 1944 April 26\n-----------------\n3 Script for the Frank Sinatra Show, 1944 December 4\n-----------------\n4 Philip Morris Playhouse script for \"Magnificent Obsession,\" 1944 January 27\n-----------------\n5 Philip Morris Playhouse script for \"Here Comes Mr. Jordan,\" 1944 February 11\n-----------------\n6 Philip Morris Playhouse script for \"The Lodger,\" 1944 February 18\n-----------------\n7 Your Hit Parade script, 1943 October 16\n-----------------\n8 Your Hit Parade script, 1944 April 8\n-----------------\n9 Your Hit Parade script, 1945 August 25\n-----------------\n"
],
[
"## The first element in the list above contains more information than we need, but we can\n## let that slide for this exercise.\n\n## Next is 'unitdate'. We'll use our clean_text() function once again.\n\nelement = did_elements[4]\n\nunitdate = element.find('unitdate').get_text()\n\nunitdate = clean_text(unitdate)\n\nunitdate",
"_____no_output_____"
],
[
"## Let's loop through the list of 'did' elements and see if our 'unittitle' recipe holds up.\n\nfor element in did_elements:\n unitdate = element.find('unitdate').get_text()\n print(clean_text(unitdate))\n print('-----------------') # Printing a divider between elements",
"1943-1945\n-----------------\nundated\n-----------------\nundated\n-----------------\n1943-1945\n-----------------\n1944 April 26\n-----------------\n1944 December 4\n-----------------\n1944 January 27\n-----------------\n1944 February 11\n-----------------\n1944 February 18\n-----------------\n1943 October 16\n-----------------\n1944 April 8\n-----------------\n1945 August 25\n-----------------\n"
],
[
"## Now on to container type and number. Let's examine a 'container' XML element.\n\nelement = did_elements[4]\n\nelement.find('container')",
"_____no_output_____"
],
[
"## Since the container type ('folder', in this case) is an attribute in the 'container' tag, \n## we can extract it using bracket notation.\n\nelement = did_elements[4]\n\ncontainer_type = element.find('container')['type']\n\ncontainer_type",
"_____no_output_____"
],
[
"## The container number is specified between the opening and closing 'container' tags, \n## so we can get it using get_text().\n\nelement = did_elements[4]\n\ncontainer_number = element.find('container').get_text()\n\ncontainer_number",
"_____no_output_____"
],
[
"## Next we'll try to get the container type and number for each 'did' element in our list ...\n\nfor element in did_elements:\n container_type = element.find('container')['type']\n print(container_type)\n\n container_number = element.find('container').get_text()\n print(container_number)\n\n print('-----------------') # Printing a divider between elements\n\n## ... and we get an error. The reason is that some 'did' elements don't include a 'container' field.",
"_____no_output_____"
],
[
"## Using try/accept notation, whenever we get an error because a container element isn't found,\n## we can revert to '' (an empty string) instead.\n\nfor element in did_elements:\n try:\n container_type = element.find('container')['type']\n except:\n container_type = ''\n print(container_type)\n \n try:\n container_number = element.find('container').get_text()\n except:\n container_number = ''\n print(container_number)\n print('-----------------') # Printing a divider between elements",
"\n\n-----------------\n\n\n-----------------\nfolder\n1\n-----------------\n\n\n-----------------\nfolder\n2\n-----------------\nfolder\n3\n-----------------\nfolder\n4\n-----------------\nfolder\n5\n-----------------\nfolder\n6\n-----------------\nfolder\n7\n-----------------\nfolder\n8\n-----------------\nfolder\n9\n-----------------\n"
],
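[
"## One optional way to cut down on the repeated try/except blocks is a small helper that returns ''\n## whenever a tag is missing. (We'll keep writing the explicit try/except version in the cells below\n## so each step stays easy to follow; this is just a sketch of the same idea.)\n\ndef safe_find_text(element, tag):\n    try:\n        return clean_text(element.find(tag).get_text())\n    except:\n        return ''\n\nprint(safe_find_text(did_elements[4], 'container'))   # Prints the folder number we saw above\nprint(safe_find_text(did_elements[4], 'extent'))      # Prints an empty string, since this element has no 'extent'",
"_____no_output_____"
],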
[
"## The last field we'll extract is 'extent', which is only included in a handful of 'did' elements.\n\nelement = did_elements[3]\n\nextent = element.find('extent').get_text()\n\nextent",
"_____no_output_____"
],
[
"## Let's extract 'extent' from each element in our list of 'did' elements (for those that happen to include it).\n\nfor element in did_elements:\n try:\n extent = element.find('extent').get_text()\n except:\n extent = ''\n print(extent)\n print('-----------------') # Printing a divider between elements",
".42 linear feet (1 box)\n-----------------\n1 folder\n-----------------\n\n-----------------\n8 folders\n-----------------\n\n-----------------\n\n-----------------\n\n-----------------\n\n-----------------\n\n-----------------\n\n-----------------\n\n-----------------\n\n-----------------\n"
],
[
"## Let's put it all together and view our chosen fields for a single 'did' element.\n## We will combine our fields in a list to create a 'row' for our future CSV file.\n\nelement = did_elements[6]\n\n# unittitle\ntry: # Added try/except statements for 'unittitle' and 'unitdate' just to be safe\n unittitle = clean_text(element.find('unittitle').get_text())\nexcept:\n unittitle = ''\n \n# unitdate\ntry:\n unitdate = clean_text(element.find('unitdate').get_text())\nexcept:\n unitdate = ''\n \n# container type and number\ntry:\n container_type = element.find('container')['type']\nexcept:\n container_type = ''\n\ntry:\n container_number = element.find('container').get_text()\nexcept:\n container_number = ''\n\n# extent\ntry:\n extent = element.find('extent').get_text()\nexcept:\n extent = ''\n\nrow = [unittitle, unitdate, container_type, container_number, extent]\n\n\nprint(row)",
"['Philip Morris Playhouse script for \"Magnificent Obsession,\" 1944 January 27', '1944 January 27', 'folder', '4', '']\n"
],
[
"## Let's take a step back and generalize, so that we can extract metadata for each \n## 'did' element in a single XML file.\n\n## We will also include the 'collection title' field ('titleproper' in EAD's vocabulary) as \n## the first item in each row.\n\nxml_filename = xml_filenames[3] # <-- Change the index number there to run the script on another XML file in the list.\n\n\nxml_text = open(xml_filename).read()\n\nsoup = BeautifulSoup(xml_text, 'lxml')\n\nlist_of_lists = [] # Creating an empty list, which we will use to hold our rows (each row represented as a list)\n\n\ntry:\n collection_title = clean_text(soup.find('titleproper').get_text())\nexcept:\n collection_title = xml_filename # If the 'titleproper' field is missing for some reason,\n ## we'll use the XML filename instead.\n\nfor element in soup.find_all('did'):\n\n # unittitle\n try:\n unittitle = clean_text(element.find('unittitle').get_text())\n except:\n unittitle = ''\n \n # unitdate\n try:\n unitdate = clean_text(element.find('unitdate').get_text())\n except:\n unitdate = ''\n \n # container type and number\n try:\n container_type = element.find('container')['type']\n except:\n container_type = ''\n\n try:\n container_number = element.find('container').get_text()\n except:\n container_number = ''\n\n # extent\n try:\n extent = element.find('extent').get_text()\n except:\n extent = ''\n\n row = [collection_title, unittitle, unitdate, container_type, container_number, extent]\n\n list_of_lists.append(row) ## Adding the row list we defined in the previous line to 'list_of_lists' \n\n\nlist_of_lists[:15] ## Outputting the first 15 rows in our list of lists",
"_____no_output_____"
],
[
"## Almost there! Next we'll run the script above on each XML file in our list, creating a \n## master list of lists that we'll write to disk as a CSV in the next cell.\n\n## Let's begin by re-loading our list of XML filenames:\n\nos.chdir('/sharedfolder/LOC_Metadata')\nxml_filenames = [item for item in os.listdir('./') if item[-2:]=='.2'] # Creating a list of XML filenames\n\nlist_of_lists = [] # Creating an empty list\n\n## Now we'll extract metadata from the full batch of XML files. This may take a few moments to complete.\n\nfor xml_filename in xml_filenames:\n \n xml_text = open(xml_filename).read()\n \n soup = BeautifulSoup(xml_text, 'lxml')\n \n try:\n collection_title = clean_text(soup.find('titleproper').get_text())\n except:\n collection_title = xml_filename # If the 'titleproper' field is missing for some reason,\n ## we'll use the XML filename instead.\n \n for element in soup.find_all('did'):\n \n # unittitle\n try:\n unittitle = clean_text(element.find('unittitle').get_text())\n except:\n unittitle = ''\n \n # unitdate\n try:\n unitdate = clean_text(element.find('unitdate').get_text())\n except:\n unitdate = ''\n \n # container type and number\n try:\n container_type = element.find('container')['type']\n except:\n container_type = ''\n \n try:\n container_number = element.find('container').get_text()\n except:\n container_number = ''\n \n # extent\n try:\n extent = element.find('extent').get_text()\n except:\n extent = ''\n \n row = [collection_title, unittitle, unitdate, container_type, container_number, extent]\n \n list_of_lists.append(row)\n\n\nprint(len(list_of_lists)) ## Printing the number of rows in our table",
"11881\n"
],
[
"## Finally, we write the extracted metadata to disk as a CSV called 'LOC_RS_Reduced_Metadata.csv'\n\nout_path = \"./LOC_RS_Reduced_Metadata.csv\" # The './' part is optional; it just means we're writing to \n # the current working directory.\n\n# Defining a list of column headers, which we will write as the first row in our CSV\ncolumn_headers = ['Collection Title', 'Unit Title', 'Unit Date', 'Container Type', 'Container Number', 'Extent']\n\nimport csv # Importing Python's built-in CSV input/output package\n\nwith open(out_path, 'w') as fo: # Creating a tempory file stream object called 'fo' (my abbreviation for 'file out')\n csv_writer = csv.writer(fo) # Initializing our CSV writer\n csv_writer.writerow(column_headers) # Writing one row (our column headers)\n csv_writer.writerows(list_of_lists) # Writing a list of lists as a sequence of rows",
"_____no_output_____"
],
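[
"## As a quick sanity check, we can read the new CSV back in with pandas and look at the first few rows.\n## (pandas isn't required for this exercise; if it isn't installed in your environment, skip this cell.)\n\nimport pandas as pd\n\nmetadata_df = pd.read_csv(out_path)\n\nmetadata_df.head(10)   # Displaying the first 10 rows",
"_____no_output_____"
],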
[
"## Go to 'sharedfolder' on your desktop and use LibreOffice or Excel to open your new CSV.\n\n## As you scroll through the CSV file, you will probably see more formatting oddities you can fix \n## by tweaking the code above.",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0532dc191d9f521733d5b2bb52822500c79a516 | 130,378 | ipynb | Jupyter Notebook | Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb | dimitreOliveira/Cassava-Leaf-Disease-Classification | 22a42e840875190e2d8cd1c838d1aef7b956f39f | [
"MIT"
] | 8 | 2021-02-18T22:35:19.000Z | 2021-03-29T07:59:10.000Z | Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb | dimitreOliveira/Cassava-Leaf-Disease-Classification | 22a42e840875190e2d8cd1c838d1aef7b956f39f | [
"MIT"
] | null | null | null | Model backlog/Models/Inference/162-cassava-leaf-inf-effnetb4-dcr-04-380x380.ipynb | dimitreOliveira/Cassava-Leaf-Disease-Classification | 22a42e840875190e2d8cd1c838d1aef7b956f39f | [
"MIT"
] | 3 | 2021-03-27T13:48:23.000Z | 2021-07-26T13:05:35.000Z | 76.738081 | 109 | 0.666378 | [
[
[
"## Dependencies",
"_____no_output_____"
]
],
[
[
"import warnings, glob\nfrom tensorflow.keras import Sequential, Model\nfrom cassava_scripts import *\n\n\nseed = 0\nseed_everything(seed)\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"### Hardware configuration",
"_____no_output_____"
]
],
[
[
"# TPU or GPU detection\n# Detect hardware, return appropriate distribution strategy\nstrategy, tpu = set_up_strategy()\n\nAUTO = tf.data.experimental.AUTOTUNE\nREPLICAS = strategy.num_replicas_in_sync\nprint(f'REPLICAS: {REPLICAS}')",
"REPLICAS: 1\n"
]
],
[
[
"# Model parameters",
"_____no_output_____"
]
],
[
[
"BATCH_SIZE = 8 * REPLICAS\nHEIGHT = 380\nWIDTH = 380\nCHANNELS = 3\nN_CLASSES = 5\nTTA_STEPS = 0 # Do TTA if > 0",
"_____no_output_____"
]
],
[
[
"# Augmentation",
"_____no_output_____"
]
],
[
[
"def data_augment(image, label):\n p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)\n \n # Flips\n image = tf.image.random_flip_left_right(image)\n image = tf.image.random_flip_up_down(image)\n if p_spatial > .75:\n image = tf.image.transpose(image)\n\n return image, label",
"_____no_output_____"
]
],
[
[
"## Auxiliary functions",
"_____no_output_____"
]
],
[
[
"# Datasets utility functions\ndef resize_image(image, label):\n image = tf.image.resize(image, [HEIGHT, WIDTH])\n image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])\n return image, label\n\ndef process_path(file_path):\n name = get_name(file_path)\n img = tf.io.read_file(file_path)\n img = decode_image(img)\n# img, _ = scale_image(img, None)\n# img = center_crop(img, HEIGHT, WIDTH)\n return img, name\n\ndef get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):\n dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)\n dataset = dataset.map(process_path, num_parallel_calls=AUTO)\n if tta:\n dataset = dataset.map(data_augment, num_parallel_calls=AUTO)\n dataset = dataset.map(resize_image, num_parallel_calls=AUTO)\n dataset = dataset.batch(BATCH_SIZE)\n dataset = dataset.prefetch(AUTO)\n return dataset",
"_____no_output_____"
]
],
[
[
"# Load data",
"_____no_output_____"
]
],
[
[
"database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'\nsubmission = pd.read_csv(f'{database_base_path}sample_submission.csv')\ndisplay(submission.head())\n\nTEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')\nNUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)\nprint(f'GCS: test: {NUM_TEST_IMAGES}')",
"_____no_output_____"
],
[
"!ls /kaggle/input/",
"162-cassava-leaf-effnetb4-dcr-04-380x380 cassava-leaf-disease-classification\r\n"
],
[
"model_path_list = glob.glob('/kaggle/input/162-cassava-leaf-effnetb4-dcr-04-380x380/*.h5')\nmodel_path_list.sort()\n\nprint('Models to predict:')\nprint(*model_path_list, sep='\\n')",
"Models to predict:\n/kaggle/input/162-cassava-leaf-effnetb4-dcr-04-380x380/model_0.h5\n"
]
],
[
[
"# Model",
"_____no_output_____"
]
],
[
[
"def model_fn(input_shape, N_CLASSES):\n inputs = L.Input(shape=input_shape, name='input_image')\n base_model = tf.keras.applications.EfficientNetB4(input_tensor=inputs, \n include_top=False, \n drop_connect_rate=.4, \n weights=None)\n \n x = L.GlobalAveragePooling2D()(base_model.output)\n x = L.Dropout(.5)(x)\n output = L.Dense(N_CLASSES, activation='softmax', name='output')(x)\n \n model = Model(inputs=inputs, outputs=output)\n return model\n\nwith strategy.scope():\n model = model_fn((None, None, CHANNELS), N_CLASSES)\n \nmodel.summary()",
"Model: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_image (InputLayer) [(None, None, None, 0 \n__________________________________________________________________________________________________\nrescaling (Rescaling) (None, None, None, 3 0 input_image[0][0] \n__________________________________________________________________________________________________\nnormalization (Normalization) (None, None, None, 3 7 rescaling[0][0] \n__________________________________________________________________________________________________\nstem_conv_pad (ZeroPadding2D) (None, None, None, 3 0 normalization[0][0] \n__________________________________________________________________________________________________\nstem_conv (Conv2D) (None, None, None, 4 1296 stem_conv_pad[0][0] \n__________________________________________________________________________________________________\nstem_bn (BatchNormalization) (None, None, None, 4 192 stem_conv[0][0] \n__________________________________________________________________________________________________\nstem_activation (Activation) (None, None, None, 4 0 stem_bn[0][0] \n__________________________________________________________________________________________________\nblock1a_dwconv (DepthwiseConv2D (None, None, None, 4 432 stem_activation[0][0] \n__________________________________________________________________________________________________\nblock1a_bn (BatchNormalization) (None, None, None, 4 192 block1a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock1a_activation (Activation) (None, None, None, 4 0 block1a_bn[0][0] \n__________________________________________________________________________________________________\nblock1a_se_squeeze (GlobalAvera (None, 48) 0 block1a_activation[0][0] \n__________________________________________________________________________________________________\nblock1a_se_reshape (Reshape) (None, 1, 1, 48) 0 block1a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock1a_se_reduce (Conv2D) (None, 1, 1, 12) 588 block1a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock1a_se_expand (Conv2D) (None, 1, 1, 48) 624 block1a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock1a_se_excite (Multiply) (None, None, None, 4 0 block1a_activation[0][0] \n block1a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock1a_project_conv (Conv2D) (None, None, None, 2 1152 block1a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock1a_project_bn (BatchNormal (None, None, None, 2 96 block1a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock1b_dwconv (DepthwiseConv2D (None, None, None, 2 216 block1a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock1b_bn (BatchNormalization) (None, None, None, 2 96 block1b_dwconv[0][0] 
\n__________________________________________________________________________________________________\nblock1b_activation (Activation) (None, None, None, 2 0 block1b_bn[0][0] \n__________________________________________________________________________________________________\nblock1b_se_squeeze (GlobalAvera (None, 24) 0 block1b_activation[0][0] \n__________________________________________________________________________________________________\nblock1b_se_reshape (Reshape) (None, 1, 1, 24) 0 block1b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock1b_se_reduce (Conv2D) (None, 1, 1, 6) 150 block1b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock1b_se_expand (Conv2D) (None, 1, 1, 24) 168 block1b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock1b_se_excite (Multiply) (None, None, None, 2 0 block1b_activation[0][0] \n block1b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock1b_project_conv (Conv2D) (None, None, None, 2 576 block1b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock1b_project_bn (BatchNormal (None, None, None, 2 96 block1b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock1b_drop (Dropout) (None, None, None, 2 0 block1b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock1b_add (Add) (None, None, None, 2 0 block1b_drop[0][0] \n block1a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock2a_expand_conv (Conv2D) (None, None, None, 1 3456 block1b_add[0][0] \n__________________________________________________________________________________________________\nblock2a_expand_bn (BatchNormali (None, None, None, 1 576 block2a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock2a_expand_activation (Acti (None, None, None, 1 0 block2a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock2a_dwconv_pad (ZeroPadding (None, None, None, 1 0 block2a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock2a_dwconv (DepthwiseConv2D (None, None, None, 1 1296 block2a_dwconv_pad[0][0] \n__________________________________________________________________________________________________\nblock2a_bn (BatchNormalization) (None, None, None, 1 576 block2a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock2a_activation (Activation) (None, None, None, 1 0 block2a_bn[0][0] \n__________________________________________________________________________________________________\nblock2a_se_squeeze (GlobalAvera (None, 144) 0 block2a_activation[0][0] \n__________________________________________________________________________________________________\nblock2a_se_reshape (Reshape) (None, 1, 1, 144) 0 block2a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock2a_se_reduce (Conv2D) 
(None, 1, 1, 6) 870 block2a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock2a_se_expand (Conv2D) (None, 1, 1, 144) 1008 block2a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock2a_se_excite (Multiply) (None, None, None, 1 0 block2a_activation[0][0] \n block2a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock2a_project_conv (Conv2D) (None, None, None, 3 4608 block2a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock2a_project_bn (BatchNormal (None, None, None, 3 128 block2a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock2b_expand_conv (Conv2D) (None, None, None, 1 6144 block2a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock2b_expand_bn (BatchNormali (None, None, None, 1 768 block2b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock2b_expand_activation (Acti (None, None, None, 1 0 block2b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock2b_dwconv (DepthwiseConv2D (None, None, None, 1 1728 block2b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock2b_bn (BatchNormalization) (None, None, None, 1 768 block2b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock2b_activation (Activation) (None, None, None, 1 0 block2b_bn[0][0] \n__________________________________________________________________________________________________\nblock2b_se_squeeze (GlobalAvera (None, 192) 0 block2b_activation[0][0] \n__________________________________________________________________________________________________\nblock2b_se_reshape (Reshape) (None, 1, 1, 192) 0 block2b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock2b_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block2b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock2b_se_expand (Conv2D) (None, 1, 1, 192) 1728 block2b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock2b_se_excite (Multiply) (None, None, None, 1 0 block2b_activation[0][0] \n block2b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock2b_project_conv (Conv2D) (None, None, None, 3 6144 block2b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock2b_project_bn (BatchNormal (None, None, None, 3 128 block2b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock2b_drop (Dropout) (None, None, None, 3 0 block2b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock2b_add (Add) (None, None, None, 3 0 block2b_drop[0][0] \n block2a_project_bn[0][0] 
\n__________________________________________________________________________________________________\nblock2c_expand_conv (Conv2D) (None, None, None, 1 6144 block2b_add[0][0] \n__________________________________________________________________________________________________\nblock2c_expand_bn (BatchNormali (None, None, None, 1 768 block2c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock2c_expand_activation (Acti (None, None, None, 1 0 block2c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock2c_dwconv (DepthwiseConv2D (None, None, None, 1 1728 block2c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock2c_bn (BatchNormalization) (None, None, None, 1 768 block2c_dwconv[0][0] \n__________________________________________________________________________________________________\nblock2c_activation (Activation) (None, None, None, 1 0 block2c_bn[0][0] \n__________________________________________________________________________________________________\nblock2c_se_squeeze (GlobalAvera (None, 192) 0 block2c_activation[0][0] \n__________________________________________________________________________________________________\nblock2c_se_reshape (Reshape) (None, 1, 1, 192) 0 block2c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock2c_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block2c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock2c_se_expand (Conv2D) (None, 1, 1, 192) 1728 block2c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock2c_se_excite (Multiply) (None, None, None, 1 0 block2c_activation[0][0] \n block2c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock2c_project_conv (Conv2D) (None, None, None, 3 6144 block2c_se_excite[0][0] \n__________________________________________________________________________________________________\nblock2c_project_bn (BatchNormal (None, None, None, 3 128 block2c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock2c_drop (Dropout) (None, None, None, 3 0 block2c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock2c_add (Add) (None, None, None, 3 0 block2c_drop[0][0] \n block2b_add[0][0] \n__________________________________________________________________________________________________\nblock2d_expand_conv (Conv2D) (None, None, None, 1 6144 block2c_add[0][0] \n__________________________________________________________________________________________________\nblock2d_expand_bn (BatchNormali (None, None, None, 1 768 block2d_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock2d_expand_activation (Acti (None, None, None, 1 0 block2d_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock2d_dwconv (DepthwiseConv2D (None, None, None, 1 1728 block2d_expand_activation[0][0] 
\n__________________________________________________________________________________________________\nblock2d_bn (BatchNormalization) (None, None, None, 1 768 block2d_dwconv[0][0] \n__________________________________________________________________________________________________\nblock2d_activation (Activation) (None, None, None, 1 0 block2d_bn[0][0] \n__________________________________________________________________________________________________\nblock2d_se_squeeze (GlobalAvera (None, 192) 0 block2d_activation[0][0] \n__________________________________________________________________________________________________\nblock2d_se_reshape (Reshape) (None, 1, 1, 192) 0 block2d_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock2d_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block2d_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock2d_se_expand (Conv2D) (None, 1, 1, 192) 1728 block2d_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock2d_se_excite (Multiply) (None, None, None, 1 0 block2d_activation[0][0] \n block2d_se_expand[0][0] \n__________________________________________________________________________________________________\nblock2d_project_conv (Conv2D) (None, None, None, 3 6144 block2d_se_excite[0][0] \n__________________________________________________________________________________________________\nblock2d_project_bn (BatchNormal (None, None, None, 3 128 block2d_project_conv[0][0] \n__________________________________________________________________________________________________\nblock2d_drop (Dropout) (None, None, None, 3 0 block2d_project_bn[0][0] \n__________________________________________________________________________________________________\nblock2d_add (Add) (None, None, None, 3 0 block2d_drop[0][0] \n block2c_add[0][0] \n__________________________________________________________________________________________________\nblock3a_expand_conv (Conv2D) (None, None, None, 1 6144 block2d_add[0][0] \n__________________________________________________________________________________________________\nblock3a_expand_bn (BatchNormali (None, None, None, 1 768 block3a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock3a_expand_activation (Acti (None, None, None, 1 0 block3a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock3a_dwconv_pad (ZeroPadding (None, None, None, 1 0 block3a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock3a_dwconv (DepthwiseConv2D (None, None, None, 1 4800 block3a_dwconv_pad[0][0] \n__________________________________________________________________________________________________\nblock3a_bn (BatchNormalization) (None, None, None, 1 768 block3a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock3a_activation (Activation) (None, None, None, 1 0 block3a_bn[0][0] \n__________________________________________________________________________________________________\nblock3a_se_squeeze (GlobalAvera (None, 192) 0 block3a_activation[0][0] \n__________________________________________________________________________________________________\nblock3a_se_reshape 
(Reshape) (None, 1, 1, 192) 0 block3a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock3a_se_reduce (Conv2D) (None, 1, 1, 8) 1544 block3a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock3a_se_expand (Conv2D) (None, 1, 1, 192) 1728 block3a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock3a_se_excite (Multiply) (None, None, None, 1 0 block3a_activation[0][0] \n block3a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock3a_project_conv (Conv2D) (None, None, None, 5 10752 block3a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock3a_project_bn (BatchNormal (None, None, None, 5 224 block3a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock3b_expand_conv (Conv2D) (None, None, None, 3 18816 block3a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock3b_expand_bn (BatchNormali (None, None, None, 3 1344 block3b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock3b_expand_activation (Acti (None, None, None, 3 0 block3b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock3b_dwconv (DepthwiseConv2D (None, None, None, 3 8400 block3b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock3b_bn (BatchNormalization) (None, None, None, 3 1344 block3b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock3b_activation (Activation) (None, None, None, 3 0 block3b_bn[0][0] \n__________________________________________________________________________________________________\nblock3b_se_squeeze (GlobalAvera (None, 336) 0 block3b_activation[0][0] \n__________________________________________________________________________________________________\nblock3b_se_reshape (Reshape) (None, 1, 1, 336) 0 block3b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock3b_se_reduce (Conv2D) (None, 1, 1, 14) 4718 block3b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock3b_se_expand (Conv2D) (None, 1, 1, 336) 5040 block3b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock3b_se_excite (Multiply) (None, None, None, 3 0 block3b_activation[0][0] \n block3b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock3b_project_conv (Conv2D) (None, None, None, 5 18816 block3b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock3b_project_bn (BatchNormal (None, None, None, 5 224 block3b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock3b_drop (Dropout) (None, None, None, 5 0 block3b_project_bn[0][0] 
\n__________________________________________________________________________________________________\nblock3b_add (Add) (None, None, None, 5 0 block3b_drop[0][0] \n block3a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_expand_conv (Conv2D) (None, None, None, 3 18816 block3b_add[0][0] \n__________________________________________________________________________________________________\nblock3c_expand_bn (BatchNormali (None, None, None, 3 1344 block3c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock3c_expand_activation (Acti (None, None, None, 3 0 block3c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_dwconv (DepthwiseConv2D (None, None, None, 3 8400 block3c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock3c_bn (BatchNormalization) (None, None, None, 3 1344 block3c_dwconv[0][0] \n__________________________________________________________________________________________________\nblock3c_activation (Activation) (None, None, None, 3 0 block3c_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_se_squeeze (GlobalAvera (None, 336) 0 block3c_activation[0][0] \n__________________________________________________________________________________________________\nblock3c_se_reshape (Reshape) (None, 1, 1, 336) 0 block3c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock3c_se_reduce (Conv2D) (None, 1, 1, 14) 4718 block3c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock3c_se_expand (Conv2D) (None, 1, 1, 336) 5040 block3c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock3c_se_excite (Multiply) (None, None, None, 3 0 block3c_activation[0][0] \n block3c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock3c_project_conv (Conv2D) (None, None, None, 5 18816 block3c_se_excite[0][0] \n__________________________________________________________________________________________________\nblock3c_project_bn (BatchNormal (None, None, None, 5 224 block3c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock3c_drop (Dropout) (None, None, None, 5 0 block3c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock3c_add (Add) (None, None, None, 5 0 block3c_drop[0][0] \n block3b_add[0][0] \n__________________________________________________________________________________________________\nblock3d_expand_conv (Conv2D) (None, None, None, 3 18816 block3c_add[0][0] \n__________________________________________________________________________________________________\nblock3d_expand_bn (BatchNormali (None, None, None, 3 1344 block3d_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock3d_expand_activation (Acti (None, None, None, 3 0 block3d_expand_bn[0][0] 
\n__________________________________________________________________________________________________\nblock3d_dwconv (DepthwiseConv2D (None, None, None, 3 8400 block3d_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock3d_bn (BatchNormalization) (None, None, None, 3 1344 block3d_dwconv[0][0] \n__________________________________________________________________________________________________\nblock3d_activation (Activation) (None, None, None, 3 0 block3d_bn[0][0] \n__________________________________________________________________________________________________\nblock3d_se_squeeze (GlobalAvera (None, 336) 0 block3d_activation[0][0] \n__________________________________________________________________________________________________\nblock3d_se_reshape (Reshape) (None, 1, 1, 336) 0 block3d_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock3d_se_reduce (Conv2D) (None, 1, 1, 14) 4718 block3d_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock3d_se_expand (Conv2D) (None, 1, 1, 336) 5040 block3d_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock3d_se_excite (Multiply) (None, None, None, 3 0 block3d_activation[0][0] \n block3d_se_expand[0][0] \n__________________________________________________________________________________________________\nblock3d_project_conv (Conv2D) (None, None, None, 5 18816 block3d_se_excite[0][0] \n__________________________________________________________________________________________________\nblock3d_project_bn (BatchNormal (None, None, None, 5 224 block3d_project_conv[0][0] \n__________________________________________________________________________________________________\nblock3d_drop (Dropout) (None, None, None, 5 0 block3d_project_bn[0][0] \n__________________________________________________________________________________________________\nblock3d_add (Add) (None, None, None, 5 0 block3d_drop[0][0] \n block3c_add[0][0] \n__________________________________________________________________________________________________\nblock4a_expand_conv (Conv2D) (None, None, None, 3 18816 block3d_add[0][0] \n__________________________________________________________________________________________________\nblock4a_expand_bn (BatchNormali (None, None, None, 3 1344 block4a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4a_expand_activation (Acti (None, None, None, 3 0 block4a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4a_dwconv_pad (ZeroPadding (None, None, None, 3 0 block4a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4a_dwconv (DepthwiseConv2D (None, None, None, 3 3024 block4a_dwconv_pad[0][0] \n__________________________________________________________________________________________________\nblock4a_bn (BatchNormalization) (None, None, None, 3 1344 block4a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4a_activation (Activation) (None, None, None, 3 0 block4a_bn[0][0] 
\n__________________________________________________________________________________________________\nblock4a_se_squeeze (GlobalAvera (None, 336) 0 block4a_activation[0][0] \n__________________________________________________________________________________________________\nblock4a_se_reshape (Reshape) (None, 1, 1, 336) 0 block4a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4a_se_reduce (Conv2D) (None, 1, 1, 14) 4718 block4a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4a_se_expand (Conv2D) (None, 1, 1, 336) 5040 block4a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4a_se_excite (Multiply) (None, None, None, 3 0 block4a_activation[0][0] \n block4a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4a_project_conv (Conv2D) (None, None, None, 1 37632 block4a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4a_project_bn (BatchNormal (None, None, None, 1 448 block4a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4b_expand_conv (Conv2D) (None, None, None, 6 75264 block4a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_expand_bn (BatchNormali (None, None, None, 6 2688 block4b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4b_expand_activation (Acti (None, None, None, 6 0 block4b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_dwconv (DepthwiseConv2D (None, None, None, 6 6048 block4b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4b_bn (BatchNormalization) (None, None, None, 6 2688 block4b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4b_activation (Activation) (None, None, None, 6 0 block4b_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_se_squeeze (GlobalAvera (None, 672) 0 block4b_activation[0][0] \n__________________________________________________________________________________________________\nblock4b_se_reshape (Reshape) (None, 1, 1, 672) 0 block4b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4b_se_reduce (Conv2D) (None, 1, 1, 28) 18844 block4b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4b_se_expand (Conv2D) (None, 1, 1, 672) 19488 block4b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4b_se_excite (Multiply) (None, None, None, 6 0 block4b_activation[0][0] \n block4b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4b_project_conv (Conv2D) (None, None, None, 1 75264 block4b_se_excite[0][0] 
\n__________________________________________________________________________________________________\nblock4b_project_bn (BatchNormal (None, None, None, 1 448 block4b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4b_drop (Dropout) (None, None, None, 1 0 block4b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4b_add (Add) (None, None, None, 1 0 block4b_drop[0][0] \n block4a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4c_expand_conv (Conv2D) (None, None, None, 6 75264 block4b_add[0][0] \n__________________________________________________________________________________________________\nblock4c_expand_bn (BatchNormali (None, None, None, 6 2688 block4c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4c_expand_activation (Acti (None, None, None, 6 0 block4c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4c_dwconv (DepthwiseConv2D (None, None, None, 6 6048 block4c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4c_bn (BatchNormalization) (None, None, None, 6 2688 block4c_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4c_activation (Activation) (None, None, None, 6 0 block4c_bn[0][0] \n__________________________________________________________________________________________________\nblock4c_se_squeeze (GlobalAvera (None, 672) 0 block4c_activation[0][0] \n__________________________________________________________________________________________________\nblock4c_se_reshape (Reshape) (None, 1, 1, 672) 0 block4c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4c_se_reduce (Conv2D) (None, 1, 1, 28) 18844 block4c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4c_se_expand (Conv2D) (None, 1, 1, 672) 19488 block4c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4c_se_excite (Multiply) (None, None, None, 6 0 block4c_activation[0][0] \n block4c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4c_project_conv (Conv2D) (None, None, None, 1 75264 block4c_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4c_project_bn (BatchNormal (None, None, None, 1 448 block4c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4c_drop (Dropout) (None, None, None, 1 0 block4c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4c_add (Add) (None, None, None, 1 0 block4c_drop[0][0] \n block4b_add[0][0] \n__________________________________________________________________________________________________\nblock4d_expand_conv (Conv2D) (None, None, None, 6 75264 block4c_add[0][0] 
\n__________________________________________________________________________________________________\nblock4d_expand_bn (BatchNormali (None, None, None, 6 2688 block4d_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4d_expand_activation (Acti (None, None, None, 6 0 block4d_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4d_dwconv (DepthwiseConv2D (None, None, None, 6 6048 block4d_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4d_bn (BatchNormalization) (None, None, None, 6 2688 block4d_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4d_activation (Activation) (None, None, None, 6 0 block4d_bn[0][0] \n__________________________________________________________________________________________________\nblock4d_se_squeeze (GlobalAvera (None, 672) 0 block4d_activation[0][0] \n__________________________________________________________________________________________________\nblock4d_se_reshape (Reshape) (None, 1, 1, 672) 0 block4d_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4d_se_reduce (Conv2D) (None, 1, 1, 28) 18844 block4d_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4d_se_expand (Conv2D) (None, 1, 1, 672) 19488 block4d_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4d_se_excite (Multiply) (None, None, None, 6 0 block4d_activation[0][0] \n block4d_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4d_project_conv (Conv2D) (None, None, None, 1 75264 block4d_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4d_project_bn (BatchNormal (None, None, None, 1 448 block4d_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4d_drop (Dropout) (None, None, None, 1 0 block4d_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4d_add (Add) (None, None, None, 1 0 block4d_drop[0][0] \n block4c_add[0][0] \n__________________________________________________________________________________________________\nblock4e_expand_conv (Conv2D) (None, None, None, 6 75264 block4d_add[0][0] \n__________________________________________________________________________________________________\nblock4e_expand_bn (BatchNormali (None, None, None, 6 2688 block4e_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4e_expand_activation (Acti (None, None, None, 6 0 block4e_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4e_dwconv (DepthwiseConv2D (None, None, None, 6 6048 block4e_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4e_bn (BatchNormalization) (None, None, None, 6 2688 block4e_dwconv[0][0] 
\n__________________________________________________________________________________________________\nblock4e_activation (Activation) (None, None, None, 6 0 block4e_bn[0][0] \n__________________________________________________________________________________________________\nblock4e_se_squeeze (GlobalAvera (None, 672) 0 block4e_activation[0][0] \n__________________________________________________________________________________________________\nblock4e_se_reshape (Reshape) (None, 1, 1, 672) 0 block4e_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4e_se_reduce (Conv2D) (None, 1, 1, 28) 18844 block4e_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4e_se_expand (Conv2D) (None, 1, 1, 672) 19488 block4e_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4e_se_excite (Multiply) (None, None, None, 6 0 block4e_activation[0][0] \n block4e_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4e_project_conv (Conv2D) (None, None, None, 1 75264 block4e_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4e_project_bn (BatchNormal (None, None, None, 1 448 block4e_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4e_drop (Dropout) (None, None, None, 1 0 block4e_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4e_add (Add) (None, None, None, 1 0 block4e_drop[0][0] \n block4d_add[0][0] \n__________________________________________________________________________________________________\nblock4f_expand_conv (Conv2D) (None, None, None, 6 75264 block4e_add[0][0] \n__________________________________________________________________________________________________\nblock4f_expand_bn (BatchNormali (None, None, None, 6 2688 block4f_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock4f_expand_activation (Acti (None, None, None, 6 0 block4f_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock4f_dwconv (DepthwiseConv2D (None, None, None, 6 6048 block4f_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock4f_bn (BatchNormalization) (None, None, None, 6 2688 block4f_dwconv[0][0] \n__________________________________________________________________________________________________\nblock4f_activation (Activation) (None, None, None, 6 0 block4f_bn[0][0] \n__________________________________________________________________________________________________\nblock4f_se_squeeze (GlobalAvera (None, 672) 0 block4f_activation[0][0] \n__________________________________________________________________________________________________\nblock4f_se_reshape (Reshape) (None, 1, 1, 672) 0 block4f_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock4f_se_reduce (Conv2D) (None, 1, 1, 28) 18844 block4f_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock4f_se_expand (Conv2D) 
(None, 1, 1, 672) 19488 block4f_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock4f_se_excite (Multiply) (None, None, None, 6 0 block4f_activation[0][0] \n block4f_se_expand[0][0] \n__________________________________________________________________________________________________\nblock4f_project_conv (Conv2D) (None, None, None, 1 75264 block4f_se_excite[0][0] \n__________________________________________________________________________________________________\nblock4f_project_bn (BatchNormal (None, None, None, 1 448 block4f_project_conv[0][0] \n__________________________________________________________________________________________________\nblock4f_drop (Dropout) (None, None, None, 1 0 block4f_project_bn[0][0] \n__________________________________________________________________________________________________\nblock4f_add (Add) (None, None, None, 1 0 block4f_drop[0][0] \n block4e_add[0][0] \n__________________________________________________________________________________________________\nblock5a_expand_conv (Conv2D) (None, None, None, 6 75264 block4f_add[0][0] \n__________________________________________________________________________________________________\nblock5a_expand_bn (BatchNormali (None, None, None, 6 2688 block5a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5a_expand_activation (Acti (None, None, None, 6 0 block5a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5a_dwconv (DepthwiseConv2D (None, None, None, 6 16800 block5a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock5a_bn (BatchNormalization) (None, None, None, 6 2688 block5a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock5a_activation (Activation) (None, None, None, 6 0 block5a_bn[0][0] \n__________________________________________________________________________________________________\nblock5a_se_squeeze (GlobalAvera (None, 672) 0 block5a_activation[0][0] \n__________________________________________________________________________________________________\nblock5a_se_reshape (Reshape) (None, 1, 1, 672) 0 block5a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock5a_se_reduce (Conv2D) (None, 1, 1, 28) 18844 block5a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock5a_se_expand (Conv2D) (None, 1, 1, 672) 19488 block5a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock5a_se_excite (Multiply) (None, None, None, 6 0 block5a_activation[0][0] \n block5a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock5a_project_conv (Conv2D) (None, None, None, 1 107520 block5a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock5a_project_bn (BatchNormal (None, None, None, 1 640 block5a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock5b_expand_conv (Conv2D) (None, None, None, 9 153600 block5a_project_bn[0][0] 
\n__________________________________________________________________________________________________\nblock5b_expand_bn (BatchNormali (None, None, None, 9 3840 block5b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5b_expand_activation (Acti (None, None, None, 9 0 block5b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5b_dwconv (DepthwiseConv2D (None, None, None, 9 24000 block5b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock5b_bn (BatchNormalization) (None, None, None, 9 3840 block5b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock5b_activation (Activation) (None, None, None, 9 0 block5b_bn[0][0] \n__________________________________________________________________________________________________\nblock5b_se_squeeze (GlobalAvera (None, 960) 0 block5b_activation[0][0] \n__________________________________________________________________________________________________\nblock5b_se_reshape (Reshape) (None, 1, 1, 960) 0 block5b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock5b_se_reduce (Conv2D) (None, 1, 1, 40) 38440 block5b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock5b_se_expand (Conv2D) (None, 1, 1, 960) 39360 block5b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock5b_se_excite (Multiply) (None, None, None, 9 0 block5b_activation[0][0] \n block5b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock5b_project_conv (Conv2D) (None, None, None, 1 153600 block5b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock5b_project_bn (BatchNormal (None, None, None, 1 640 block5b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock5b_drop (Dropout) (None, None, None, 1 0 block5b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock5b_add (Add) (None, None, None, 1 0 block5b_drop[0][0] \n block5a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock5c_expand_conv (Conv2D) (None, None, None, 9 153600 block5b_add[0][0] \n__________________________________________________________________________________________________\nblock5c_expand_bn (BatchNormali (None, None, None, 9 3840 block5c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5c_expand_activation (Acti (None, None, None, 9 0 block5c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5c_dwconv (DepthwiseConv2D (None, None, None, 9 24000 block5c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock5c_bn (BatchNormalization) (None, None, None, 9 3840 block5c_dwconv[0][0] 
\n__________________________________________________________________________________________________\nblock5c_activation (Activation) (None, None, None, 9 0 block5c_bn[0][0] \n__________________________________________________________________________________________________\nblock5c_se_squeeze (GlobalAvera (None, 960) 0 block5c_activation[0][0] \n__________________________________________________________________________________________________\nblock5c_se_reshape (Reshape) (None, 1, 1, 960) 0 block5c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock5c_se_reduce (Conv2D) (None, 1, 1, 40) 38440 block5c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock5c_se_expand (Conv2D) (None, 1, 1, 960) 39360 block5c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock5c_se_excite (Multiply) (None, None, None, 9 0 block5c_activation[0][0] \n block5c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock5c_project_conv (Conv2D) (None, None, None, 1 153600 block5c_se_excite[0][0] \n__________________________________________________________________________________________________\nblock5c_project_bn (BatchNormal (None, None, None, 1 640 block5c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock5c_drop (Dropout) (None, None, None, 1 0 block5c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock5c_add (Add) (None, None, None, 1 0 block5c_drop[0][0] \n block5b_add[0][0] \n__________________________________________________________________________________________________\nblock5d_expand_conv (Conv2D) (None, None, None, 9 153600 block5c_add[0][0] \n__________________________________________________________________________________________________\nblock5d_expand_bn (BatchNormali (None, None, None, 9 3840 block5d_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5d_expand_activation (Acti (None, None, None, 9 0 block5d_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5d_dwconv (DepthwiseConv2D (None, None, None, 9 24000 block5d_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock5d_bn (BatchNormalization) (None, None, None, 9 3840 block5d_dwconv[0][0] \n__________________________________________________________________________________________________\nblock5d_activation (Activation) (None, None, None, 9 0 block5d_bn[0][0] \n__________________________________________________________________________________________________\nblock5d_se_squeeze (GlobalAvera (None, 960) 0 block5d_activation[0][0] \n__________________________________________________________________________________________________\nblock5d_se_reshape (Reshape) (None, 1, 1, 960) 0 block5d_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock5d_se_reduce (Conv2D) (None, 1, 1, 40) 38440 block5d_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock5d_se_expand 
(Conv2D) (None, 1, 1, 960) 39360 block5d_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock5d_se_excite (Multiply) (None, None, None, 9 0 block5d_activation[0][0] \n block5d_se_expand[0][0] \n__________________________________________________________________________________________________\nblock5d_project_conv (Conv2D) (None, None, None, 1 153600 block5d_se_excite[0][0] \n__________________________________________________________________________________________________\nblock5d_project_bn (BatchNormal (None, None, None, 1 640 block5d_project_conv[0][0] \n__________________________________________________________________________________________________\nblock5d_drop (Dropout) (None, None, None, 1 0 block5d_project_bn[0][0] \n__________________________________________________________________________________________________\nblock5d_add (Add) (None, None, None, 1 0 block5d_drop[0][0] \n block5c_add[0][0] \n__________________________________________________________________________________________________\nblock5e_expand_conv (Conv2D) (None, None, None, 9 153600 block5d_add[0][0] \n__________________________________________________________________________________________________\nblock5e_expand_bn (BatchNormali (None, None, None, 9 3840 block5e_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5e_expand_activation (Acti (None, None, None, 9 0 block5e_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5e_dwconv (DepthwiseConv2D (None, None, None, 9 24000 block5e_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock5e_bn (BatchNormalization) (None, None, None, 9 3840 block5e_dwconv[0][0] \n__________________________________________________________________________________________________\nblock5e_activation (Activation) (None, None, None, 9 0 block5e_bn[0][0] \n__________________________________________________________________________________________________\nblock5e_se_squeeze (GlobalAvera (None, 960) 0 block5e_activation[0][0] \n__________________________________________________________________________________________________\nblock5e_se_reshape (Reshape) (None, 1, 1, 960) 0 block5e_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock5e_se_reduce (Conv2D) (None, 1, 1, 40) 38440 block5e_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock5e_se_expand (Conv2D) (None, 1, 1, 960) 39360 block5e_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock5e_se_excite (Multiply) (None, None, None, 9 0 block5e_activation[0][0] \n block5e_se_expand[0][0] \n__________________________________________________________________________________________________\nblock5e_project_conv (Conv2D) (None, None, None, 1 153600 block5e_se_excite[0][0] \n__________________________________________________________________________________________________\nblock5e_project_bn (BatchNormal (None, None, None, 1 640 block5e_project_conv[0][0] \n__________________________________________________________________________________________________\nblock5e_drop (Dropout) (None, None, None, 1 0 block5e_project_bn[0][0] 
\n__________________________________________________________________________________________________\nblock5e_add (Add) (None, None, None, 1 0 block5e_drop[0][0] \n block5d_add[0][0] \n__________________________________________________________________________________________________\nblock5f_expand_conv (Conv2D) (None, None, None, 9 153600 block5e_add[0][0] \n__________________________________________________________________________________________________\nblock5f_expand_bn (BatchNormali (None, None, None, 9 3840 block5f_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock5f_expand_activation (Acti (None, None, None, 9 0 block5f_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock5f_dwconv (DepthwiseConv2D (None, None, None, 9 24000 block5f_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock5f_bn (BatchNormalization) (None, None, None, 9 3840 block5f_dwconv[0][0] \n__________________________________________________________________________________________________\nblock5f_activation (Activation) (None, None, None, 9 0 block5f_bn[0][0] \n__________________________________________________________________________________________________\nblock5f_se_squeeze (GlobalAvera (None, 960) 0 block5f_activation[0][0] \n__________________________________________________________________________________________________\nblock5f_se_reshape (Reshape) (None, 1, 1, 960) 0 block5f_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock5f_se_reduce (Conv2D) (None, 1, 1, 40) 38440 block5f_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock5f_se_expand (Conv2D) (None, 1, 1, 960) 39360 block5f_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock5f_se_excite (Multiply) (None, None, None, 9 0 block5f_activation[0][0] \n block5f_se_expand[0][0] \n__________________________________________________________________________________________________\nblock5f_project_conv (Conv2D) (None, None, None, 1 153600 block5f_se_excite[0][0] \n__________________________________________________________________________________________________\nblock5f_project_bn (BatchNormal (None, None, None, 1 640 block5f_project_conv[0][0] \n__________________________________________________________________________________________________\nblock5f_drop (Dropout) (None, None, None, 1 0 block5f_project_bn[0][0] \n__________________________________________________________________________________________________\nblock5f_add (Add) (None, None, None, 1 0 block5f_drop[0][0] \n block5e_add[0][0] \n__________________________________________________________________________________________________\nblock6a_expand_conv (Conv2D) (None, None, None, 9 153600 block5f_add[0][0] \n__________________________________________________________________________________________________\nblock6a_expand_bn (BatchNormali (None, None, None, 9 3840 block6a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock6a_expand_activation (Acti (None, None, None, 9 0 block6a_expand_bn[0][0] 
\n__________________________________________________________________________________________________\nblock6a_dwconv_pad (ZeroPadding (None, None, None, 9 0 block6a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock6a_dwconv (DepthwiseConv2D (None, None, None, 9 24000 block6a_dwconv_pad[0][0] \n__________________________________________________________________________________________________\nblock6a_bn (BatchNormalization) (None, None, None, 9 3840 block6a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock6a_activation (Activation) (None, None, None, 9 0 block6a_bn[0][0] \n__________________________________________________________________________________________________\nblock6a_se_squeeze (GlobalAvera (None, 960) 0 block6a_activation[0][0] \n__________________________________________________________________________________________________\nblock6a_se_reshape (Reshape) (None, 1, 1, 960) 0 block6a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock6a_se_reduce (Conv2D) (None, 1, 1, 40) 38440 block6a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock6a_se_expand (Conv2D) (None, 1, 1, 960) 39360 block6a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock6a_se_excite (Multiply) (None, None, None, 9 0 block6a_activation[0][0] \n block6a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock6a_project_conv (Conv2D) (None, None, None, 2 261120 block6a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock6a_project_bn (BatchNormal (None, None, None, 2 1088 block6a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock6b_expand_conv (Conv2D) (None, None, None, 1 443904 block6a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock6b_expand_bn (BatchNormali (None, None, None, 1 6528 block6b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock6b_expand_activation (Acti (None, None, None, 1 0 block6b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock6b_dwconv (DepthwiseConv2D (None, None, None, 1 40800 block6b_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock6b_bn (BatchNormalization) (None, None, None, 1 6528 block6b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock6b_activation (Activation) (None, None, None, 1 0 block6b_bn[0][0] \n__________________________________________________________________________________________________\nblock6b_se_squeeze (GlobalAvera (None, 1632) 0 block6b_activation[0][0] \n__________________________________________________________________________________________________\nblock6b_se_reshape (Reshape) (None, 1, 1, 1632) 0 block6b_se_squeeze[0][0] 
\n__________________________________________________________________________________________________\nblock6b_se_reduce (Conv2D) (None, 1, 1, 68) 111044 block6b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock6b_se_expand (Conv2D) (None, 1, 1, 1632) 112608 block6b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock6b_se_excite (Multiply) (None, None, None, 1 0 block6b_activation[0][0] \n block6b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock6b_project_conv (Conv2D) (None, None, None, 2 443904 block6b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock6b_project_bn (BatchNormal (None, None, None, 2 1088 block6b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock6b_drop (Dropout) (None, None, None, 2 0 block6b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock6b_add (Add) (None, None, None, 2 0 block6b_drop[0][0] \n block6a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock6c_expand_conv (Conv2D) (None, None, None, 1 443904 block6b_add[0][0] \n__________________________________________________________________________________________________\nblock6c_expand_bn (BatchNormali (None, None, None, 1 6528 block6c_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock6c_expand_activation (Acti (None, None, None, 1 0 block6c_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock6c_dwconv (DepthwiseConv2D (None, None, None, 1 40800 block6c_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock6c_bn (BatchNormalization) (None, None, None, 1 6528 block6c_dwconv[0][0] \n__________________________________________________________________________________________________\nblock6c_activation (Activation) (None, None, None, 1 0 block6c_bn[0][0] \n__________________________________________________________________________________________________\nblock6c_se_squeeze (GlobalAvera (None, 1632) 0 block6c_activation[0][0] \n__________________________________________________________________________________________________\nblock6c_se_reshape (Reshape) (None, 1, 1, 1632) 0 block6c_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock6c_se_reduce (Conv2D) (None, 1, 1, 68) 111044 block6c_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock6c_se_expand (Conv2D) (None, 1, 1, 1632) 112608 block6c_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock6c_se_excite (Multiply) (None, None, None, 1 0 block6c_activation[0][0] \n block6c_se_expand[0][0] \n__________________________________________________________________________________________________\nblock6c_project_conv (Conv2D) (None, None, None, 2 443904 block6c_se_excite[0][0] 
\n__________________________________________________________________________________________________\nblock6c_project_bn (BatchNormal (None, None, None, 2 1088 block6c_project_conv[0][0] \n__________________________________________________________________________________________________\nblock6c_drop (Dropout) (None, None, None, 2 0 block6c_project_bn[0][0] \n__________________________________________________________________________________________________\nblock6c_add (Add) (None, None, None, 2 0 block6c_drop[0][0] \n block6b_add[0][0] \n__________________________________________________________________________________________________\nblock6d_expand_conv (Conv2D) (None, None, None, 1 443904 block6c_add[0][0] \n__________________________________________________________________________________________________\nblock6d_expand_bn (BatchNormali (None, None, None, 1 6528 block6d_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock6d_expand_activation (Acti (None, None, None, 1 0 block6d_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock6d_dwconv (DepthwiseConv2D (None, None, None, 1 40800 block6d_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock6d_bn (BatchNormalization) (None, None, None, 1 6528 block6d_dwconv[0][0] \n__________________________________________________________________________________________________\nblock6d_activation (Activation) (None, None, None, 1 0 block6d_bn[0][0] \n__________________________________________________________________________________________________\nblock6d_se_squeeze (GlobalAvera (None, 1632) 0 block6d_activation[0][0] \n__________________________________________________________________________________________________\nblock6d_se_reshape (Reshape) (None, 1, 1, 1632) 0 block6d_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock6d_se_reduce (Conv2D) (None, 1, 1, 68) 111044 block6d_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock6d_se_expand (Conv2D) (None, 1, 1, 1632) 112608 block6d_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock6d_se_excite (Multiply) (None, None, None, 1 0 block6d_activation[0][0] \n block6d_se_expand[0][0] \n__________________________________________________________________________________________________\nblock6d_project_conv (Conv2D) (None, None, None, 2 443904 block6d_se_excite[0][0] \n__________________________________________________________________________________________________\nblock6d_project_bn (BatchNormal (None, None, None, 2 1088 block6d_project_conv[0][0] \n__________________________________________________________________________________________________\nblock6d_drop (Dropout) (None, None, None, 2 0 block6d_project_bn[0][0] \n__________________________________________________________________________________________________\nblock6d_add (Add) (None, None, None, 2 0 block6d_drop[0][0] \n block6c_add[0][0] \n__________________________________________________________________________________________________\nblock6e_expand_conv (Conv2D) (None, None, None, 1 443904 block6d_add[0][0] 
\n__________________________________________________________________________________________________\nblock6e_expand_bn (BatchNormali (None, None, None, 1 6528 block6e_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock6e_expand_activation (Acti (None, None, None, 1 0 block6e_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock6e_dwconv (DepthwiseConv2D (None, None, None, 1 40800 block6e_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock6e_bn (BatchNormalization) (None, None, None, 1 6528 block6e_dwconv[0][0] \n__________________________________________________________________________________________________\nblock6e_activation (Activation) (None, None, None, 1 0 block6e_bn[0][0] \n__________________________________________________________________________________________________\nblock6e_se_squeeze (GlobalAvera (None, 1632) 0 block6e_activation[0][0] \n__________________________________________________________________________________________________\nblock6e_se_reshape (Reshape) (None, 1, 1, 1632) 0 block6e_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock6e_se_reduce (Conv2D) (None, 1, 1, 68) 111044 block6e_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock6e_se_expand (Conv2D) (None, 1, 1, 1632) 112608 block6e_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock6e_se_excite (Multiply) (None, None, None, 1 0 block6e_activation[0][0] \n block6e_se_expand[0][0] \n__________________________________________________________________________________________________\nblock6e_project_conv (Conv2D) (None, None, None, 2 443904 block6e_se_excite[0][0] \n__________________________________________________________________________________________________\nblock6e_project_bn (BatchNormal (None, None, None, 2 1088 block6e_project_conv[0][0] \n__________________________________________________________________________________________________\nblock6e_drop (Dropout) (None, None, None, 2 0 block6e_project_bn[0][0] \n__________________________________________________________________________________________________\nblock6e_add (Add) (None, None, None, 2 0 block6e_drop[0][0] \n block6d_add[0][0] \n__________________________________________________________________________________________________\nblock6f_expand_conv (Conv2D) (None, None, None, 1 443904 block6e_add[0][0] \n__________________________________________________________________________________________________\nblock6f_expand_bn (BatchNormali (None, None, None, 1 6528 block6f_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock6f_expand_activation (Acti (None, None, None, 1 0 block6f_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock6f_dwconv (DepthwiseConv2D (None, None, None, 1 40800 block6f_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock6f_bn (BatchNormalization) (None, None, None, 1 6528 block6f_dwconv[0][0] 
\n__________________________________________________________________________________________________\nblock6f_activation (Activation) (None, None, None, 1 0 block6f_bn[0][0] \n__________________________________________________________________________________________________\nblock6f_se_squeeze (GlobalAvera (None, 1632) 0 block6f_activation[0][0] \n__________________________________________________________________________________________________\nblock6f_se_reshape (Reshape) (None, 1, 1, 1632) 0 block6f_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock6f_se_reduce (Conv2D) (None, 1, 1, 68) 111044 block6f_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock6f_se_expand (Conv2D) (None, 1, 1, 1632) 112608 block6f_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock6f_se_excite (Multiply) (None, None, None, 1 0 block6f_activation[0][0] \n block6f_se_expand[0][0] \n__________________________________________________________________________________________________\nblock6f_project_conv (Conv2D) (None, None, None, 2 443904 block6f_se_excite[0][0] \n__________________________________________________________________________________________________\nblock6f_project_bn (BatchNormal (None, None, None, 2 1088 block6f_project_conv[0][0] \n__________________________________________________________________________________________________\nblock6f_drop (Dropout) (None, None, None, 2 0 block6f_project_bn[0][0] \n__________________________________________________________________________________________________\nblock6f_add (Add) (None, None, None, 2 0 block6f_drop[0][0] \n block6e_add[0][0] \n__________________________________________________________________________________________________\nblock6g_expand_conv (Conv2D) (None, None, None, 1 443904 block6f_add[0][0] \n__________________________________________________________________________________________________\nblock6g_expand_bn (BatchNormali (None, None, None, 1 6528 block6g_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock6g_expand_activation (Acti (None, None, None, 1 0 block6g_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock6g_dwconv (DepthwiseConv2D (None, None, None, 1 40800 block6g_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock6g_bn (BatchNormalization) (None, None, None, 1 6528 block6g_dwconv[0][0] \n__________________________________________________________________________________________________\nblock6g_activation (Activation) (None, None, None, 1 0 block6g_bn[0][0] \n__________________________________________________________________________________________________\nblock6g_se_squeeze (GlobalAvera (None, 1632) 0 block6g_activation[0][0] \n__________________________________________________________________________________________________\nblock6g_se_reshape (Reshape) (None, 1, 1, 1632) 0 block6g_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock6g_se_reduce (Conv2D) (None, 1, 1, 68) 111044 block6g_se_reshape[0][0] 
\n__________________________________________________________________________________________________\nblock6g_se_expand (Conv2D) (None, 1, 1, 1632) 112608 block6g_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock6g_se_excite (Multiply) (None, None, None, 1 0 block6g_activation[0][0] \n block6g_se_expand[0][0] \n__________________________________________________________________________________________________\nblock6g_project_conv (Conv2D) (None, None, None, 2 443904 block6g_se_excite[0][0] \n__________________________________________________________________________________________________\nblock6g_project_bn (BatchNormal (None, None, None, 2 1088 block6g_project_conv[0][0] \n__________________________________________________________________________________________________\nblock6g_drop (Dropout) (None, None, None, 2 0 block6g_project_bn[0][0] \n__________________________________________________________________________________________________\nblock6g_add (Add) (None, None, None, 2 0 block6g_drop[0][0] \n block6f_add[0][0] \n__________________________________________________________________________________________________\nblock6h_expand_conv (Conv2D) (None, None, None, 1 443904 block6g_add[0][0] \n__________________________________________________________________________________________________\nblock6h_expand_bn (BatchNormali (None, None, None, 1 6528 block6h_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock6h_expand_activation (Acti (None, None, None, 1 0 block6h_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock6h_dwconv (DepthwiseConv2D (None, None, None, 1 40800 block6h_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock6h_bn (BatchNormalization) (None, None, None, 1 6528 block6h_dwconv[0][0] \n__________________________________________________________________________________________________\nblock6h_activation (Activation) (None, None, None, 1 0 block6h_bn[0][0] \n__________________________________________________________________________________________________\nblock6h_se_squeeze (GlobalAvera (None, 1632) 0 block6h_activation[0][0] \n__________________________________________________________________________________________________\nblock6h_se_reshape (Reshape) (None, 1, 1, 1632) 0 block6h_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock6h_se_reduce (Conv2D) (None, 1, 1, 68) 111044 block6h_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock6h_se_expand (Conv2D) (None, 1, 1, 1632) 112608 block6h_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock6h_se_excite (Multiply) (None, None, None, 1 0 block6h_activation[0][0] \n block6h_se_expand[0][0] \n__________________________________________________________________________________________________\nblock6h_project_conv (Conv2D) (None, None, None, 2 443904 block6h_se_excite[0][0] \n__________________________________________________________________________________________________\nblock6h_project_bn (BatchNormal (None, None, None, 2 1088 block6h_project_conv[0][0] 
\n__________________________________________________________________________________________________\nblock6h_drop (Dropout) (None, None, None, 2 0 block6h_project_bn[0][0] \n__________________________________________________________________________________________________\nblock6h_add (Add) (None, None, None, 2 0 block6h_drop[0][0] \n block6g_add[0][0] \n__________________________________________________________________________________________________\nblock7a_expand_conv (Conv2D) (None, None, None, 1 443904 block6h_add[0][0] \n__________________________________________________________________________________________________\nblock7a_expand_bn (BatchNormali (None, None, None, 1 6528 block7a_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock7a_expand_activation (Acti (None, None, None, 1 0 block7a_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock7a_dwconv (DepthwiseConv2D (None, None, None, 1 14688 block7a_expand_activation[0][0] \n__________________________________________________________________________________________________\nblock7a_bn (BatchNormalization) (None, None, None, 1 6528 block7a_dwconv[0][0] \n__________________________________________________________________________________________________\nblock7a_activation (Activation) (None, None, None, 1 0 block7a_bn[0][0] \n__________________________________________________________________________________________________\nblock7a_se_squeeze (GlobalAvera (None, 1632) 0 block7a_activation[0][0] \n__________________________________________________________________________________________________\nblock7a_se_reshape (Reshape) (None, 1, 1, 1632) 0 block7a_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock7a_se_reduce (Conv2D) (None, 1, 1, 68) 111044 block7a_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock7a_se_expand (Conv2D) (None, 1, 1, 1632) 112608 block7a_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock7a_se_excite (Multiply) (None, None, None, 1 0 block7a_activation[0][0] \n block7a_se_expand[0][0] \n__________________________________________________________________________________________________\nblock7a_project_conv (Conv2D) (None, None, None, 4 731136 block7a_se_excite[0][0] \n__________________________________________________________________________________________________\nblock7a_project_bn (BatchNormal (None, None, None, 4 1792 block7a_project_conv[0][0] \n__________________________________________________________________________________________________\nblock7b_expand_conv (Conv2D) (None, None, None, 2 1204224 block7a_project_bn[0][0] \n__________________________________________________________________________________________________\nblock7b_expand_bn (BatchNormali (None, None, None, 2 10752 block7b_expand_conv[0][0] \n__________________________________________________________________________________________________\nblock7b_expand_activation (Acti (None, None, None, 2 0 block7b_expand_bn[0][0] \n__________________________________________________________________________________________________\nblock7b_dwconv (DepthwiseConv2D (None, None, None, 2 24192 block7b_expand_activation[0][0] 
\n__________________________________________________________________________________________________\nblock7b_bn (BatchNormalization) (None, None, None, 2 10752 block7b_dwconv[0][0] \n__________________________________________________________________________________________________\nblock7b_activation (Activation) (None, None, None, 2 0 block7b_bn[0][0] \n__________________________________________________________________________________________________\nblock7b_se_squeeze (GlobalAvera (None, 2688) 0 block7b_activation[0][0] \n__________________________________________________________________________________________________\nblock7b_se_reshape (Reshape) (None, 1, 1, 2688) 0 block7b_se_squeeze[0][0] \n__________________________________________________________________________________________________\nblock7b_se_reduce (Conv2D) (None, 1, 1, 112) 301168 block7b_se_reshape[0][0] \n__________________________________________________________________________________________________\nblock7b_se_expand (Conv2D) (None, 1, 1, 2688) 303744 block7b_se_reduce[0][0] \n__________________________________________________________________________________________________\nblock7b_se_excite (Multiply) (None, None, None, 2 0 block7b_activation[0][0] \n block7b_se_expand[0][0] \n__________________________________________________________________________________________________\nblock7b_project_conv (Conv2D) (None, None, None, 4 1204224 block7b_se_excite[0][0] \n__________________________________________________________________________________________________\nblock7b_project_bn (BatchNormal (None, None, None, 4 1792 block7b_project_conv[0][0] \n__________________________________________________________________________________________________\nblock7b_drop (Dropout) (None, None, None, 4 0 block7b_project_bn[0][0] \n__________________________________________________________________________________________________\nblock7b_add (Add) (None, None, None, 4 0 block7b_drop[0][0] \n block7a_project_bn[0][0] \n__________________________________________________________________________________________________\ntop_conv (Conv2D) (None, None, None, 1 802816 block7b_add[0][0] \n__________________________________________________________________________________________________\ntop_bn (BatchNormalization) (None, None, None, 1 7168 top_conv[0][0] \n__________________________________________________________________________________________________\ntop_activation (Activation) (None, None, None, 1 0 top_bn[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d (Globa (None, 1792) 0 top_activation[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 1792) 0 global_average_pooling2d[0][0] \n__________________________________________________________________________________________________\noutput (Dense) (None, 5) 8965 dropout[0][0] \n==================================================================================================\nTotal params: 17,682,788\nTrainable params: 17,557,581\nNon-trainable params: 125,207\n__________________________________________________________________________________________________\n"
]
],
[
[
"# Test set predictions",
"_____no_output_____"
]
],
[
[
"files_path = f'{database_base_path}test_images/'\ntest_size = len(os.listdir(files_path))\ntest_preds = np.zeros((test_size, N_CLASSES))\n\n\nfor model_path in model_path_list:\n print(model_path)\n K.clear_session()\n model.load_weights(model_path)\n\n if TTA_STEPS > 0:\n test_ds = get_dataset(files_path, tta=True).repeat()\n ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1)\n preds = model.predict(test_ds, steps=ct_steps, verbose=1)[:(test_size * TTA_STEPS)]\n preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)\n test_preds += preds / len(model_path_list)\n else:\n test_ds = get_dataset(files_path, tta=False)\n x_test = test_ds.map(lambda image, image_name: image)\n test_preds += model.predict(x_test) / len(model_path_list)\n \ntest_preds = np.argmax(test_preds, axis=-1)\ntest_names_ds = get_dataset(files_path)\nimage_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]",
"/kaggle/input/162-cassava-leaf-effnetb4-dcr-04-380x380/model_0.h5\n"
],
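[
"# Illustrative sanity check (assumed toy sizes; not part of the original pipeline).\n# model.predict on the repeated TTA dataset returns pass 1 for every image, then pass 2, and so on,\n# so reshaping with order='F' groups the TTA copies of each image together before averaging.\nimport numpy as np\n\ntoy_size, toy_tta, toy_classes = 3, 2, 5\ntoy_preds = np.arange(toy_size * toy_tta * toy_classes, dtype=float).reshape(toy_size * toy_tta, toy_classes)\ntoy_mean = np.mean(toy_preds.reshape(toy_size, toy_tta, toy_classes, order='F'), axis=1)\nprint(toy_mean.shape)  # (3, 5): one averaged prediction per test image",
"_____no_output_____"
],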
[
"submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})\nsubmission.to_csv('submission.csv', index=False)\ndisplay(submission.head())",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d053338355d4e68be4d41e4c66977ac3328f8414 | 17,253 | ipynb | Jupyter Notebook | 02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb | alineAssuncao/Python_Fundamentos_Analise_Dados | 872781f2cec24487b0f29f62afeb60650a451bfd | [
"MIT"
] | 1 | 2019-02-03T10:53:55.000Z | 2019-02-03T10:53:55.000Z | 02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb | alineAssuncao/Python_Fundamentos_Analise_Dados | 872781f2cec24487b0f29f62afeb60650a451bfd | [
"MIT"
] | null | null | null | 02-Variaveis_Tipo_Estrutura_Dados/03-Strings.ipynb | alineAssuncao/Python_Fundamentos_Analise_Dados | 872781f2cec24487b0f29f62afeb60650a451bfd | [
"MIT"
] | null | null | null | 17.304915 | 480 | 0.452965 | [
[
[
"# String\n\n## Criando uma String\n\n#### Para criar uma string em python você pode usar aspas simples ou duplas",
"_____no_output_____"
]
],
[
[
"# Uma única palavra\n'Olá'",
"_____no_output_____"
],
[
"# uma frase\n'isto é uma string em pyton'",
"_____no_output_____"
],
[
"# usando aspas duplas\n\"teste aspa duplas\"",
"_____no_output_____"
],
[
"# combinação\n\"podemos utilizas as duas aspas ou uma no 'python'\"",
"_____no_output_____"
]
],
[
[
"## Imprimindo uma String",
"_____no_output_____"
]
],
[
[
"print ('imprimindo uma String')",
"imprimindo uma String\n"
],
[
"print ('testando \\nString \\nem \\nPython')",
"testando \nString \nem \nPython\n"
],
[
"print ('\\n')",
"\n\n"
]
],
[
[
"## Indexando Strings",
"_____no_output_____"
]
],
[
[
"# Atribuindo uma string\ns = 'Data Science Academy'",
"_____no_output_____"
],
[
"print (s)",
"Data Science Academy\n"
],
[
"# primeiro elemento da string\ns[0]",
"_____no_output_____"
],
[
"s[1]",
"_____no_output_____"
],
[
"s[2]",
"_____no_output_____"
]
],
[
[
"#### Podemos usar : para executar um slicing que faz a leitura de tudo até um ponto designado",
"_____no_output_____"
]
],
[
[
"# retorna os elementos sa string, começando em uma posição\ns[1:]",
"_____no_output_____"
],
[
"# a string continua inalterada\ns",
"_____no_output_____"
],
[
"# retorna tudo até uma posição anterior da informada\ns[:3]",
"_____no_output_____"
],
[
"# retorna uma determinada cadeia de caracter\ns[2:6]",
"_____no_output_____"
],
[
"s[:]",
"_____no_output_____"
],
[
"# indexação negativa para ler de trás para frente \n# busca apenas a posição informada\ns[-2]",
"_____no_output_____"
],
[
"# retorna tudo, exceto a última letra\ns[:-1]",
"_____no_output_____"
]
],
[
[
"#### Podemos usar a notação de índice e fatiar a string em pedaços especificos",
"_____no_output_____"
]
],
[
[
"s[::1]",
"_____no_output_____"
],
[
"s[::2]",
"_____no_output_____"
],
[
"s[::-1]",
"_____no_output_____"
]
],
[
[
"## Propriedades de string",
"_____no_output_____"
]
],
[
[
"s",
"_____no_output_____"
],
[
"# Alterando um caracter (não permite a alteração - imutaveis)\ns[0] = 'x'",
"_____no_output_____"
],
[
"# concatenando strings\ns + ' é a melhor'",
"_____no_output_____"
],
[
"print (s)",
"Data Science Academy\n"
],
[
"s = s + ' é a melhor'",
"_____no_output_____"
],
[
"print(s)",
"Data Science Academy é a melhor\n"
],
[
"# podemos usar o símbolo de multiplicação para criar repetição\nletra = 'W'",
"_____no_output_____"
],
[
"letra * 3",
"_____no_output_____"
]
],
[
[
"## Funções Built-in de strings",
"_____no_output_____"
]
],
[
[
"s",
"_____no_output_____"
],
[
"# upper case\ns.upper()",
"_____no_output_____"
],
[
"#lower case\ns.lower()",
"_____no_output_____"
],
[
"# dividir uma string por espaços em branco(padrão)\ns.split()",
"_____no_output_____"
],
[
"# dividindo com um elemento especifico\ns.split('y')",
"_____no_output_____"
]
],
[
[
"## Funções de string",
"_____no_output_____"
]
],
[
[
"s = 'olá! Seja bem vindo ao universo Python'",
"_____no_output_____"
],
[
"s.capitalize()",
"_____no_output_____"
],
[
"s.count('a')",
"_____no_output_____"
],
[
"s.find('p')",
"_____no_output_____"
],
[
"s.center(20, 'z')",
"_____no_output_____"
],
[
"s.isalnum()",
"_____no_output_____"
],
[
"s.islower()",
"_____no_output_____"
],
[
"s.isspace()",
"_____no_output_____"
],
[
"s.endswith('o')",
"_____no_output_____"
],
[
"s.partition('!')",
"_____no_output_____"
]
],
[
[
"## Comparando Strings",
"_____no_output_____"
]
],
[
[
"print (\"Python\" == \"R\")",
"False\n"
],
[
"print (\"Python\" == \"Python\")",
"True\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d053397790d6be95d29b1ea1639f5013ca360921 | 12,128 | ipynb | Jupyter Notebook | HT-banks_of_tubes.ipynb | CarlGriffinsteed/UVM-ME144-Heat-Transfer | 9c477449d6ba5d6a9ee7c57f1c0ed4aab0ce4cca | [
"CC-BY-3.0"
] | null | null | null | HT-banks_of_tubes.ipynb | CarlGriffinsteed/UVM-ME144-Heat-Transfer | 9c477449d6ba5d6a9ee7c57f1c0ed4aab0ce4cca | [
"CC-BY-3.0"
] | null | null | null | HT-banks_of_tubes.ipynb | CarlGriffinsteed/UVM-ME144-Heat-Transfer | 9c477449d6ba5d6a9ee7c57f1c0ed4aab0ce4cca | [
"CC-BY-3.0"
] | null | null | null | 40.697987 | 682 | 0.611478 | [
[
[
"%matplotlib inline \n",
"_____no_output_____"
]
],
[
[
"This notebook deals with banks of cylinders in a cross flow. Cylinder banks are common heat exchangers where the cylinders may be heated by electricity or a fluid may be flowing within the cylinder to cool or heat the flow around the cylinders. The advantage of cylinder banks is the increase mixing in the fluid, thus the temperature downstream of the bank is likely to be quite homogeneous.\n\n<img src='figures_Tube_Banks/fig_07_11.jpg' alt=\"my awesome sketch\" width=50% >",
"_____no_output_____"
],
[
"The arrangement of cylinders may be aligned or staggered as shown in the following figures. The flow and geometrical parameters will be used in the derivation of temperature equations and Nusselt number correlation.\n<img src='figures_Tube_Banks/fig_07_12.jpg' alt=\"my awesome sketch\" width=50% >",
"_____no_output_____"
],
[
"This notebook should cover a wide variety of problems, providing that the assumption of isothermal boundary conditions on the tubes is (approximately) valid. The tube surface temperature is $T_s$. \nThe flow and geometrical parameters of importance to solve this problem are:\n\n* Arithmetic mean of temperature between inlet $T_i$ and outlet $T_o$ of the bank. \n$$\nT_m = \\frac{T_i+T_o}{2}\n$$\n* Reynolds number based on the max velocity within the bank $V_\\text{max}$, the density and viscosity based on $T_m$:\n$$\nRe=\\frac{\\rho V_\\text{max}D}{\\mu}\n$$\n**Question: At what temperature should you estimate $\\rho$ and $\\mu$?** The energy of the flow comes from the inlet and the velocity $V_\\mathrm{max}$ is calculated from the inlet velocity. The density should therefore be estimated at $T_i$. The viscous forces however occur throughout the domain, so $\\mu$ should be estimated at $T_m$. In some cases $T_o$ is the quantity to be found. It is acceptable to use $\\mu(T_i)$, but you must verify that the temperature difference $\\Delta T=\\vert T_i-T_o\\vert$ is not too large. If it is, you must repeat the calculation iteratively with $\\mu(T_i)$ until $T_o$ converges.\n\n* Prandtl number $Pr$ based on $T_m$ \n* Surface Prandtl number $Pr_s$ based on $T_s$\n* Number of tubes in the transversal direction $N_T$, longitudinal direction $N_L$ and total $N=N_T\\times N_L$\n* The transversal $S_T$ and longitudinal $S_L$ separations between tubes in a row and between rows.\n* The type of tube arrangement: \n * Aligned\n$$\nV_\\text{max}=\\frac{S_T}{S_T-D}V_i\n$$\n * Staggered\n$$\nV_\\text{max}=\\frac{S_T}{2(S_D-D)}V_i\\text{ with }S_D=\\sqrt{S_L^2+\\left(\\frac{S_T}{2}\\right)^2}\n$$\n",
"_____no_output_____"
],
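As a quick illustration of the $V_\text{max}$ and Reynolds-number bookkeeping described in the cell above, here is a minimal Python sketch; the air properties are placeholder assumptions and the helper function is not part of the notebook's library:

```python
# Minimal sketch of V_max and Re for a tube bank (illustrative values only).
import numpy as np

def v_max(V_i, S_T, S_L, D, arrangement='aligned'):
    """Maximum inter-tube velocity for aligned or staggered arrangements."""
    if arrangement == 'aligned':
        return S_T / (S_T - D) * V_i
    S_D = np.sqrt(S_L**2 + (S_T / 2.0)**2)        # staggered diagonal pitch
    return S_T / (2.0 * (S_D - D)) * V_i

rho, mu = 1.1, 2.0e-5                             # assumed air properties (kg/m^3, Pa.s)
V_i, D, S_T, S_L = 5.0, 10e-3, 15e-3, 15e-3       # geometry similar to Problem 1 below
V_m = v_max(V_i, S_T, S_L, D, 'aligned')
Re = rho * V_m * D / mu
print(V_m, Re)
```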
[
"The Nusselt number correlation for a bank of tubes is a variation of the Zukauskas correlation:\n$$\nNu = C_2C_1Re^mPr^{0.36}\\left(\\frac{Pr}{Pr_s}\\right)^{1/4}\n$$\nwhere $C_2$ depends on $N_L$. In the library, the function for this correlation is\n<FONT FACE=\"courier\" style=\"color:red\">Nu_tube_banks(Re,Pr,Pr_s,S_L,S_T,N_L,arrangement) </FONT>.\n\nThe heat rate per unit length across the tube bank is\n$$\nq'=N\\overline{h}\\pi D \\Delta T_\\text{lm}\n$$\nwhere the temperature drop is the log-mean temperature difference\n$$\n\\Delta T_\\text{lm}=\\cfrac{(T_s-T_i)-(T_s-T_o)}{\\ln\\left(\\cfrac{T_s-T_i}{T_s-T_o}\\right)}\n$$\nwhich accounts for the exponential variation of temperature across the bank\n$$\n\\cfrac{T_s-T_o}{T_s-T_i}=\\exp\\left(-\\cfrac{\\pi D N \\overline{h}}{\\rho V_i N_T S_T C_p}\\right)\n$$\nwhere $\\rho$, $C_p$ and $V_i$ are inlet quantities if $T_o$ is unknown of the arthimetic mean temperature if available. Note that $N=N_L\\times N_T$ thus \n$$\n\\cfrac{T_s-T_o}{T_s-T_i}=\\exp\\left(-\\cfrac{\\pi D N_L \\overline{h}}{\\rho V_i S_T C_p}\\right)\n$$\nOne may want to determine the number of tubes necessary to achieve a given $T_o$. The number of tubes in the transverse directions is typically dictated by the geometry of the system, so we are looking for $N_L$:\n$$\nN_L = \\cfrac{\\rho V_i S_T C_p}{\\pi D \\overline{h}} \\log\\left(\\cfrac{T_s-T_i}{T_s-T_o}\\right)\n$$\n",
"_____no_output_____"
],
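A direct Python transcription of the outlet-temperature and row-count relations above can be handy for checking hand calculations; here $\overline{h}$ and the fluid properties are assumed inputs (in the notebook they come from the Zukauskas-type correlation in the library):

```python
# Sketch of the T_o and N_L relations above; h_bar, rho, c_p are assumed inputs.
import numpy as np

def outlet_temperature(T_s, T_i, h_bar, rho, V_i, c_p, D, S_T, N_L):
    """T_o from the exponential temperature variation across N_L rows."""
    return T_s - (T_s - T_i) * np.exp(-np.pi * D * N_L * h_bar / (rho * V_i * S_T * c_p))

def rows_needed(T_s, T_i, T_o, h_bar, rho, V_i, c_p, D, S_T):
    """Minimum (unrounded) N_L needed to reach a target outlet temperature T_o."""
    return rho * V_i * S_T * c_p / (np.pi * D * h_bar) * np.log((T_s - T_i) / (T_s - T_o))
```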
[
"The pressure loss through the tube bank is a critical component of the heat exchanger design. The presence of obstacles in the flow requires an increase in the mechanical energy necessary to drive the flow at a given flow rate. The pressure loss, given all parameters above, is\n$$\n\\Delta p = N_L\\,\\chi\\, f\\,\\frac{\\rho V_\\text{max}^2}{2}\n$$\nwhere the friction factor $f$ and the parameter $\\chi$ are given by the graphs below for the aligned (top) and staggered (bottom) arrangements. These graphs use two new quantities, the longitudnal and transverse pitches:\n$$\nP_L=\\frac{S_L}{D}\\text{ and } P_T=\\frac{S_T}{D}\n$$\n<img src='figures_Tube_Banks/fig_07_14.jpg' alt=\"my awesome sketch\" width=100% >\n<img src='figures_Tube_Banks/fig_07_15.jpg' alt=\"my awesome sketch\" width=100% >",
"_____no_output_____"
],
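The pressure-drop expression is simple enough to transcribe directly; the friction factor $f$ and correction factor $\chi$ still have to be read from the charts referenced above:

```python
def pressure_drop(N_L, chi, f, rho, V_max):
    """Delta p across N_L rows; chi and f come from the friction-factor charts."""
    return N_L * chi * f * rho * V_max**2 / 2.0
```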
[
"## Problem1\nA preheater involves the use of condensing steam on the inside of a bank of tubes to heat air that enters at $P_i=1 \\text{ atm}$ and $T_i=25^\\circ\\text{C}$. The air moves at $V_i=5\\text{ m/s}$ in cross flow over the tubes. Each tube is $L=1\\text{ m}$ long and has an outside diameter of $D=10 \\text{ mm}$. The bank consists of columns of 14 tubes in the transversal direction $N_T=14$ and $N_L$ rows in the direction of flow. The arrangement of tubes is aligned array for which $S_T=S_L=15\\text{ mm}$. What is the minimum value of $N_L$ needed to achieve an outlet temperature of $T_o=75^\\circ\\text{C}$? What is the corresponding pressure drop across the tube bank?\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom Libraries import thermodynamics as thermo\nfrom Libraries import HT_external_convection as extconv\n\nT_i = 25 #C\nT_o = 75 #C\nT_s = 100 #C\nV_i = 5 #m/s\nL = 1 #m\nD = 10e-3 #mm\nN_L = 14\nS_T = S_L = 15e-3 #m\n\n# ?extconv.BankofTubes\nbank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o,\"C\",V_i,D,S_L,S_T,N_L)\n\nprint(\"The number of rows required to reach T_o=%.0f C is %.2f\" %(bank.T_o,bank.N_L_for_given_To))\n",
"The number of rows required to reach T_o=75 C is 15.26\n"
]
],
[
[
"If the outlet temperature can be slightly below $75^\\circ\\mathrm{C}$, then the number of rows is 15.\n\nIf the outlet temperature has to be at least $75^\\circ\\mathrm{C}$, then the number of rows is 16.",
"_____no_output_____"
]
],
[
[
"N_L = 15\nbank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o,\"C\",V_i,D,S_L,S_T,N_L)\nN_T = 14\nbank.temperature_outlet_tube_banks(N_T,N_L)\nprint(\"With N_L=%.0f, T_o=%.2f\" %(bank.N_L,bank.T_o))\nprint(\"Re=%.0f, P_L = %.2f\" %(bank.Re,bank.S_T/bank.D))\nbank.pressure_drop(N_L,3.2,1)\nprint(\"Pressure drop is %.2f Pa\" %(bank.Delta_p))",
"With N_L=15, T_o=74.54\nRe=9052, P_L = 1.50\nPressure drop is 6401.70\n"
]
],
[
[
"## Problem 2\n\nA preheater involves the use of condensing steam at $100^\\circ\\text{C}$ on the inside of a bank of tubes to heat air that enters at $1 \\text{ atm}$ and $25^\\circ\\text{C}$. The air moves at $5\\text{ m/s}$ in cross flow over the tubes. Each tube is $1\\text{ m}$ long and has an outside diameter of $10 \\text{ mm}$. The bank consists of 196 tubes in a square, aligned array for which $S_T=S_L=15\\text{ mm}$. What is the total rate of heat transfer to the air? What is the pressure drop associated with the airflow?",
"_____no_output_____"
]
],
[
[
"N_L = N_T = 14\n# T_o = 50.\n# bank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o,\"C\",V_i,D,S_L,S_T,N_L)\n# bank.temperature_outlet_tube_banks(N_T,N_L)\n# print(bank.T_o)\n# print(bank.Re)\n# print(bank.Nu)\nT_o = 72.6\nbank = extconv.BankofTubes('aligned','air',T_i,T_s,T_o,\"C\",V_i,D,S_L,S_T,N_L)\nbank.temperature_outlet_tube_banks(N_T,N_L)\nprint(bank.T_o)\nprint(bank.Re)\nprint(bank.Nu)\nbank.heat_rate(N_T,N_L,L)\nprint(bank.q)",
"72.60620496012206\n9080.451003545966\n73.95776478607291\n59665.2457253688\n"
]
],
[
[
"## Problem 3",
"_____no_output_____"
],
[
"<img src='figures_Tube_Banks/probun_07_34.jpg' alt=\"my awesome sketch\" width=100% >\nAn air duct heater consists of an aligned array of electrical heating elements in which the longitudinal and transverse pitches are $S_L=S_T= 24\\text{ mm}$. There are 3 rows of elements in the flow direction ($N_L=3$) and 4 elements per row ($N_T=4$). Atmospheric air with an upstream velocity of $12\\text{ m/s}$ and a temperature of $25^\\circ\\text{C}$ moves in cross flow over the elements, which have a diameter of $12\\text{ mm}$, a length of $250\\text{ mm}$, and are maintained at a surface temperature of $350^\\circ\\text{C}$.\n<ol>\n<li>\nDetermine the total heat transfer to the air and the temperature of the air leaving the duct heater.\n</li>\n<li>\nDetermine the pressure drop across the element bank and the fan power requirement.\n</li>\n<li>\nCompare the average convection coefficient obtained in your analysis with the value for an isolated (single) element. Explain the difference between the results.\n</li>\n<li>\nWhat effect would increasing the longitudinal and transverse pitches to 30 mm have on the exit temperature of the air, the total heat rate, and the pressure drop?\n</li>\n</ol>",
"_____no_output_____"
]
],
[
[
"\n\n",
"_____no_output_____"
]
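The solution cell for Problem 3 above is empty. Purely as a starting sketch, the setup below reuses the `BankofTubes` helper with the same argument order as the Problem 1 and 2 cells; the first guess for `T_o` is an assumption and any chart-derived values still need to be read from the figures:

```python
# Starting sketch only - argument order copied from the Problem 1/2 cells above.
T_i, T_s = 25, 350            # inlet and element surface temperatures, C
V_i = 12                      # upstream velocity, m/s
D, L = 12e-3, 250e-3          # element diameter and length, m
S_T = S_L = 24e-3             # transverse and longitudinal pitches, m
N_T, N_L = 4, 3               # elements per row, rows in the flow direction

T_o_guess = 50                # assumed first guess for the outlet temperature, C
duct = extconv.BankofTubes('aligned', 'air', T_i, T_s, T_o_guess, "C",
                           V_i, D, S_L, S_T, N_L)
duct.temperature_outlet_tube_banks(N_T, N_L)
duct.heat_rate(N_T, N_L, L)
print(duct.T_o, duct.q)
```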
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0534aed4803a65f4cc6a88d15ed5e9a8c0fdb4b | 21,188 | ipynb | Jupyter Notebook | Custom_Script/02_CustomScript_Training_Pipeline.ipynb | ben-chin-unify/solution-accelerator-many-models | 99e016f62f69052c43515291e82df3034c08f58a | [
"MIT"
] | 137 | 2020-05-18T07:19:27.000Z | 2022-03-31T00:40:01.000Z | Custom_Script/02_CustomScript_Training_Pipeline.ipynb | ben-chin-unify/solution-accelerator-many-models | 99e016f62f69052c43515291e82df3034c08f58a | [
"MIT"
] | 44 | 2020-05-18T07:15:03.000Z | 2022-03-10T14:03:19.000Z | Custom_Script/02_CustomScript_Training_Pipeline.ipynb | ben-chin-unify/solution-accelerator-many-models | 99e016f62f69052c43515291e82df3034c08f58a | [
"MIT"
] | 69 | 2020-06-01T16:32:15.000Z | 2022-03-29T18:15:46.000Z | 35.431438 | 918 | 0.620068 | [
[
[
"Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.",
"_____no_output_____"
],
[
"# Training Pipeline - Custom Script\n_**Training many models using a custom script**_\n\n----\n\nThis notebook demonstrates how to create a pipeline that trains and registers many models using a custom script. We utilize the [ParallelRunStep](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-parallel-run-step) to parallelize the process of training the models to make the process more efficient. For this solution accelerator we are using the [OJ Sales Dataset](https://azure.microsoft.com/en-us/services/open-datasets/catalog/sample-oj-sales-simulated/) to train individual models that predict sales for each store and brand of orange juice.\n\nThe model we use here is a simple, regression-based forecaster built on scikit-learn and pandas utilities. See the [training script](scripts/train.py) to see how the forecaster is constructed. This forecaster is intended for demonstration purposes, so it does not handle the large variety of special cases that one encounters in time-series modeling. For instance, the model here assumes that all time-series are comprised of regularly sampled observations on a contiguous interval with no missing values. The model does not include any handling of categorical variables. For a more general-use forecaster that handles missing data, advanced featurization, and automatic model selection, see the [AutoML Forecasting task](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-forecast). Also, see the notebooks demonstrating [AutoML forecasting in a many models scenario](../Automated_ML).\n\n### Prerequisites\nAt this point, you should have already:\n\n1. Created your AML Workspace using the [00_Setup_AML_Workspace notebook](../00_Setup_AML_Workspace.ipynb)\n2. Run [01_Data_Preparation.ipynb](../01_Data_Preparation.ipynb) to setup your compute and create the dataset",
"_____no_output_____"
],
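The entry script itself lives in `scripts/train.py` and is not reproduced in this notebook. Purely to illustrate what a "simple, regression-based forecaster built on scikit-learn and pandas" can look like for a single store/brand series, here is a hedged sketch; the lag count is an assumption and this is not the accelerator's actual training code:

```python
# Illustrative sketch only - not the accelerator's scripts/train.py.
import pandas as pd
from sklearn.linear_model import LinearRegression

def fit_simple_forecaster(df, target='Quantity', timestamp='WeekStarting', n_lags=4):
    """Fit a linear regression on lagged target values for one time series."""
    df = df.sort_values(timestamp).copy()
    lag_cols = []
    for k in range(1, n_lags + 1):
        col = 'lag_{}'.format(k)
        df[col] = df[target].shift(k)   # lagged copies of the target as features
        lag_cols.append(col)
    df = df.dropna(subset=lag_cols)     # drop rows without a full lag window
    return LinearRegression().fit(df[lag_cols], df[target])
```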
[
"#### Please ensure you have the latest version of the Azure ML SDK and also install Pipeline Steps Package",
"_____no_output_____"
]
],
[
[
"#!pip install --upgrade azureml-sdk",
"_____no_output_____"
],
[
"# !pip install azureml-pipeline-steps",
"_____no_output_____"
]
],
[
[
"## 1.0 Connect to workspace and datastore",
"_____no_output_____"
]
],
[
[
"from azureml.core import Workspace\n\n# set up workspace\nws = Workspace.from_config()\n\n# set up datastores\ndstore = ws.get_default_datastore()\n\nprint('Workspace Name: ' + ws.name, \n 'Azure Region: ' + ws.location, \n 'Subscription Id: ' + ws.subscription_id, \n 'Resource Group: ' + ws.resource_group, \n sep = '\\n')",
"_____no_output_____"
]
],
[
[
"## 2.0 Create an experiment",
"_____no_output_____"
]
],
[
[
"from azureml.core import Experiment\n\nexperiment = Experiment(ws, 'oj_training_pipeline')\n\nprint('Experiment name: ' + experiment.name)",
"_____no_output_____"
]
],
[
[
"## 3.0 Get the training Dataset\n\nNext, we get the training Dataset using the [Dataset.get_by_name()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset#get-by-name-workspace--name--version--latest--) method.\n\nThis is the training dataset we created and registered in the [data preparation notebook](../01_Data_Preparation.ipynb). If you chose to use only a subset of the files, the training dataset name will be `oj_data_small_train`. Otherwise, the name you'll have to use is `oj_data_train`. \n\nWe recommend to start with the small dataset and make sure everything runs successfully, then scale up to the full dataset.",
"_____no_output_____"
]
],
[
[
"dataset_name = 'oj_data_small_train'",
"_____no_output_____"
],
[
"from azureml.core.dataset import Dataset\n\ndataset = Dataset.get_by_name(ws, name=dataset_name)\ndataset_input = dataset.as_named_input(dataset_name)",
"_____no_output_____"
]
],
[
[
"## 4.0 Create the training pipeline\nNow that the workspace, experiment, and dataset are set up, we can put together a pipeline for training.\n\n### 4.1 Configure environment for ParallelRunStep\nAn [environment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-environments) defines a collection of resources that we will need to run our pipelines. We configure a reproducible Python environment for our training script including the [scikit-learn](https://scikit-learn.org/stable/index.html) python library.",
"_____no_output_____"
]
],
[
[
"from azureml.core import Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\n\ntrain_env = Environment(name=\"many_models_environment\")\ntrain_conda_deps = CondaDependencies.create(pip_packages=['sklearn', 'pandas', 'joblib', 'azureml-defaults', 'azureml-core', 'azureml-dataprep[fuse]'])\ntrain_env.python.conda_dependencies = train_conda_deps",
"_____no_output_____"
]
],
[
[
"### 4.2 Choose a compute target ",
"_____no_output_____"
],
[
"Currently ParallelRunConfig only supports AMLCompute. This is the compute cluster you created in the [setup notebook](../00_Setup_AML_Workspace.ipynb#3.0-Create-compute-cluster).",
"_____no_output_____"
]
],
[
[
"cpu_cluster_name = \"cpucluster\"",
"_____no_output_____"
],
[
"from azureml.core.compute import AmlCompute\n\ncompute = AmlCompute(ws, cpu_cluster_name)",
"_____no_output_____"
]
],
[
[
"### 4.3 Set up ParallelRunConfig\n\n[ParallelRunConfig](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_config.parallelrunconfig?view=azure-ml-py) provides the configuration for the ParallelRunStep we'll be creating next. Here we specify the environment and compute target we created above along with the entry script that will be for each batch.\n\nThere's a number of important parameters to configure including:\n- **mini_batch_size**: The number of files per batch. If you have 500 files and mini_batch_size is 10, 50 batches would be created containing 10 files each. Batches are split across the various nodes. \n\n- **node_count**: The number of compute nodes to be used for running the user script. For the small sample of OJ datasets, we only need a single node, but you will likely need to increase this number for larger datasets composed of more files. If you increase the node count beyond five here, you may need to increase the max_nodes for the compute cluster as well.\n\n- **process_count_per_node**: The number of processes per node. The compute cluster we are using has 8 cores so we set this parameter to 8.\n\n- **run_invocation_timeout**: The run() method invocation timeout in seconds. The timeout should be set to be higher than the maximum training time of one model (in seconds), by default it's 60. Since the batches that takes the longest to train are about 120 seconds, we set it to be 180 to ensure the method has adequate time to run.\n\n\nWe also added tags to preserve the information about our training cluster's node count, process count per node, and dataset name. You can find the 'Tags' column in Azure Machine Learning Studio.",
"_____no_output_____"
]
],
[
[
"from azureml.pipeline.steps import ParallelRunConfig\n\nprocesses_per_node = 8\nnode_count = 1\ntimeout = 180\n\nparallel_run_config = ParallelRunConfig(\n source_directory='./scripts',\n entry_script='train.py',\n mini_batch_size=\"1\",\n run_invocation_timeout=timeout,\n error_threshold=-1,\n output_action=\"append_row\",\n environment=train_env,\n process_count_per_node=processes_per_node,\n compute_target=compute,\n node_count=node_count)",
"_____no_output_____"
]
],
[
[
"### 4.4 Set up ParallelRunStep\n\nThis [ParallelRunStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py) is the main step in our training pipeline. \n\nFirst, we set up the output directory and define the pipeline's output name. The datastore that stores the pipeline's output data is Workspace's default datastore.",
"_____no_output_____"
]
],
[
[
"from azureml.pipeline.core import PipelineData\n\noutput_dir = PipelineData(name=\"training_output\", datastore=dstore)",
"_____no_output_____"
]
],
[
[
"We provide our ParallelRunStep with a name, the ParallelRunConfig created above and several other parameters:\n\n- **inputs**: A list of input datasets. Here we'll use the dataset created in the previous notebook. The number of files in that path determines the number of models will be trained in the ParallelRunStep.\n\n- **output**: A PipelineData object that corresponds to the output directory. We'll use the output directory we just defined. \n\n- **arguments**: A list of arguments required for the train.py entry script. Here, we provide the schema for the timeseries data - i.e. the names of target, timestamp, and id columns - as well as columns that should be dropped prior to modeling, a string identifying the model type, and the number of observations we want to leave aside for testing.",
"_____no_output_____"
]
],
[
[
"from azureml.pipeline.steps import ParallelRunStep\n\nparallel_run_step = ParallelRunStep(\n name=\"many-models-training\",\n parallel_run_config=parallel_run_config,\n inputs=[dataset_input],\n output=output_dir,\n allow_reuse=False,\n arguments=['--target_column', 'Quantity', \n '--timestamp_column', 'WeekStarting', \n '--timeseries_id_columns', 'Store', 'Brand',\n '--drop_columns', 'Revenue', 'Store', 'Brand',\n '--model_type', 'lr',\n '--test_size', 20]\n)",
"_____no_output_____"
]
],
[
[
"## 5.0 Run the pipeline\nNext, we submit our pipeline to run. The run will train models for each dataset using a train set, compute accuracy metrics for the fits using a test set, and finally re-train models with all the data available. With 10 files, this should only take a few minutes but with the full dataset this can take over an hour.",
"_____no_output_____"
]
],
[
[
"from azureml.pipeline.core import Pipeline\n\npipeline = Pipeline(workspace=ws, steps=[parallel_run_step])\nrun = experiment.submit(pipeline)",
"_____no_output_____"
],
[
"#Wait for the run to complete\nrun.wait_for_completion(show_output=False, raise_on_error=True)",
"_____no_output_____"
]
],
[
[
"## 6.0 View results of training pipeline\nThe dataframe we return in the run method of train.py is outputted to *parallel_run_step.txt*. To see the results of our training pipeline, we'll download that file, read in the data to a DataFrame, and then visualize the results, including the in-sample metrics.\nThe run submitted to the Azure Machine Learning Training Compute Cluster may take a while. The output is not generated until the run is complete. You can monitor the status of the run in Azure Portal https://ml.azure.com\n\n### 6.1 Download parallel_run_step.txt locally",
"_____no_output_____"
]
],
[
[
"import os\n\ndef download_results(run, target_dir=None, step_name='many-models-training', output_name='training_output'):\n stitch_run = run.find_step_run(step_name)[0]\n port_data = stitch_run.get_output_data(output_name)\n port_data.download(target_dir, show_progress=True)\n return os.path.join(target_dir, 'azureml', stitch_run.id, output_name)\n\nfile_path = download_results(run, 'output')\nfile_path",
"_____no_output_____"
]
],
[
[
"### 6.2 Convert the file to a dataframe",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndf = pd.read_csv(file_path + '/parallel_run_step.txt', sep=\" \", header=None)\ndf.columns = ['Store', 'Brand', 'Model', 'File Name', 'ModelName', 'StartTime', 'EndTime', 'Duration',\n 'MSE', 'RMSE', 'MAE', 'MAPE', 'Index', 'Number of Models', 'Status']\n\ndf['StartTime'] = pd.to_datetime(df['StartTime'])\ndf['EndTime'] = pd.to_datetime(df['EndTime'])\ndf['Duration'] = df['EndTime'] - df['StartTime']\ndf.head()",
"_____no_output_____"
]
],
[
[
"### 6.3 Review Results",
"_____no_output_____"
]
],
[
[
"total = df['EndTime'].max() - df['StartTime'].min()\n\nprint('Number of Models: ' + str(len(df)))\nprint('Total Duration: ' + str(total)[6:])",
"_____no_output_____"
],
[
"print('Average MAPE: ' + str(round(df['MAPE'].mean(), 5)))\nprint('Average MSE: ' + str(round(df['MSE'].mean(), 5)))\nprint('Average RMSE: ' + str(round(df['RMSE'].mean(), 5)))\nprint('Average MAE: '+ str(round(df['MAE'].mean(), 5)))",
"_____no_output_____"
],
[
"print('Maximum Duration: '+ str(df['Duration'].max())[7:])\nprint('Minimum Duration: ' + str(df['Duration'].min())[7:])\nprint('Average Duration: ' + str(df['Duration'].mean())[7:])",
"_____no_output_____"
]
],
[
[
"### 6.4 Visualize Performance across models\n\nHere, we produce some charts from the errors metrics calculated during the run using a subset put aside for testing.\n\nFirst, we examine the distribution of mean absolute percentage error (MAPE) over all the models:",
"_____no_output_____"
]
],
[
[
"import seaborn as sns \nimport matplotlib.pyplot as plt\n\nfig = sns.boxplot(y='MAPE', data=df)\nfig.set_title('MAPE across all models')",
"_____no_output_____"
]
],
[
[
"Next, we can break that down by Brand or Store to see variations in error across our models",
"_____no_output_____"
]
],
[
[
"fig = sns.boxplot(x='Brand', y='MAPE', data=df)\nfig.set_title('MAPE by Brand')",
"_____no_output_____"
]
],
[
[
"We can also look at how long models for different brands took to train",
"_____no_output_____"
]
],
[
[
"brand = df.groupby('Brand')\nbrand = brand['Duration'].sum()\nbrand = pd.DataFrame(brand)\nbrand['time_in_seconds'] = [time.total_seconds() for time in brand['Duration']]\n\nbrand.drop(columns=['Duration']).plot(kind='bar')\nplt.xlabel('Brand')\nplt.ylabel('Seconds')\nplt.title('Total Training Time by Brand')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 7.0 Publish and schedule the pipeline (Optional)\n\n\n### 7.1 Publish the pipeline\nOnce you have a pipeline you're happy with, you can publish a pipeline so you can call it programatically later on. See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines.",
"_____no_output_____"
]
],
[
[
"# published_pipeline = pipeline.publish(name = 'train_many_models',\n# description = 'train many models',\n# version = '1',\n# continue_on_step_failure = False)",
"_____no_output_____"
]
],
[
[
"### 7.2 Schedule the pipeline\nYou can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain models every month or based on another trigger such as data drift.",
"_____no_output_____"
]
],
[
[
"# from azureml.pipeline.core import Schedule, ScheduleRecurrence\n \n# training_pipeline_id = published_pipeline.id\n\n# recurrence = ScheduleRecurrence(frequency=\"Month\", interval=1, start_time=\"2020-01-01T09:00:00\")\n# recurring_schedule = Schedule.create(ws, name=\"training_pipeline_recurring_schedule\", \n# description=\"Schedule Training Pipeline to run on the first day of every month\",\n# pipeline_id=training_pipeline_id, \n# experiment_name=experiment.name, \n# recurrence=recurrence)",
"_____no_output_____"
]
],
[
[
"## Next Steps\n\nNow that you've trained and scored the models, move on to [03_CustomScript_Forecasting_Pipeline.ipynb](03_CustomScript_Forecasting_Pipeline.ipynb) to make forecasts with your models.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0534cf28b1754c6f39a12051131833db542675a | 9,563 | ipynb | Jupyter Notebook | Chapter 6 TD(0) - Random Walk Example (Figure 6.2 Experiment Replication).ipynb | ahlusar1989/ma797_supplement | 648385683b1c734e97c008813821f897c540eeed | [
"MIT"
] | null | null | null | Chapter 6 TD(0) - Random Walk Example (Figure 6.2 Experiment Replication).ipynb | ahlusar1989/ma797_supplement | 648385683b1c734e97c008813821f897c540eeed | [
"MIT"
] | null | null | null | Chapter 6 TD(0) - Random Walk Example (Figure 6.2 Experiment Replication).ipynb | ahlusar1989/ma797_supplement | 648385683b1c734e97c008813821f897c540eeed | [
"MIT"
] | null | null | null | 33.554386 | 145 | 0.476942 | [
[
[
"import numpy as np\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\nfrom tqdm import tqdm\n\n# Pg. 125: http://incompleteideas.net/book/bookdraft2018mar21.pdf",
"_____no_output_____"
],
[
"#0 is the left terminal state\n# 6 is the right terminal state\n# 1 ... 5 represents A ... E\nVALUES = np.zeros(7)\nVALUES[1:6] = 0.5\n# For convenience, we assume all rewards are 0\n# and the left terminal state has value 0, the right terminal state has value 1\n# This trick has been used in the Gambler's Problem\nVALUES[6] = 1\n\n# set up true state values\nTRUE_VALUE = np.zeros(7)\nTRUE_VALUE[1:6] = np.arange(1, 6) / 6.0\nTRUE_VALUE[6] = 1\n\nACTION_LEFT = 0\nACTION_RIGHT = 1",
"_____no_output_____"
],
[
"def temporal_difference(values, alpha = 0.6, batch = False):\n state = 3\n trajectory = [state]\n rewards = [0]\n while True:\n prior_state = state\n if np.random.binomial(1, 0.5) == ACTION_LEFT:\n state -= 1\n else:\n state += 1\n reward = 0\n trajectory.append(state)\n # TD Update\n if not batch:\n values[prior_state] += alpha * (reward + values[state] - values[prior_state])\n if state == 6 or state == 0:\n break\n rewards.append(reward)\n return trajectory, rewards",
"_____no_output_____"
],
[
"# @values: current states value, will be updated if @batch is False\n# @alpha: step size\n# @batch: whether to update @values\ndef monte_carlo(values, alpha=0.1, batch=False):\n state = 3\n trajectory = [3]\n\n # if end up with left terminal state, all returns are 0\n # if end up with right terminal state, all returns are 1\n while True:\n if np.random.binomial(1, 0.5) == ACTION_LEFT:\n state -= 1\n else:\n state += 1\n trajectory.append(state)\n if state == 6:\n returns = 1.0\n break\n elif state == 0:\n returns = 0.0\n break\n\n if not batch:\n # traverse backwards\n for state_ in trajectory[:-1]:\n # MC update\n values[state_] += alpha * (returns - values[state_])\n return trajectory, [returns] * (len(trajectory) - 1)",
"_____no_output_____"
],
[
"def compute_state_value():\n episodes = [0, 1, 10, 100]\n current_values = np.copy(VALUES)\n plt.figure(1)\n for i in range(episodes[-1] + 1):\n if i in episodes:\n plt.plot(current_values, label=str(i) + ' episodes')\n temporal_difference(current_values)\n plt.plot(TRUE_VALUE, label='true values')\n plt.xlabel('State')\n plt.ylabel('Estimated Value')\n plt.legend()\n\n# Example 6.2 right\ndef rms_error():\n # Same alpha value can appear in both arrays\n td_alphas = [0.15, 0.1, 0.05]\n mc_alphas = [0.01, 0.02, 0.03, 0.04]\n episodes = 100 + 1\n runs = 100\n for i, alpha in enumerate(td_alphas + mc_alphas):\n total_errors = np.zeros(episodes)\n if i < len(td_alphas):\n method = 'TD'\n linestyle = 'solid'\n else:\n method = 'MC'\n linestyle = 'dashdot'\n for r in tqdm(range(runs)):\n errors = []\n current_values = np.copy(VALUES)\n for i in range(0, episodes):\n errors.append(np.sqrt(np.sum(np.power(TRUE_VALUE - current_values, 2)) / 5.0))\n if method == 'TD':\n temporal_difference(current_values, alpha=alpha)\n else:\n monte_carlo(current_values, alpha=alpha)\n total_errors += np.asarray(errors)\n total_errors /= runs\n plt.plot(total_errors, linestyle=linestyle, label=method + ', alpha = %.02f' % (alpha))\n plt.xlabel('episodes')\n plt.ylabel('RMS')\n plt.legend()",
"_____no_output_____"
],
[
"# Figure 6.2\n# @method: 'TD' or 'MC'\ndef batch_updating(method, episodes, alpha=0.001):\n # perform 100 independent runs\n runs = 100\n total_errors = np.zeros(episodes)\n for r in tqdm(range(0, runs)):\n current_values = np.copy(VALUES)\n errors = []\n # track shown trajectories and reward/return sequences\n trajectories = []\n rewards = []\n for ep in range(episodes):\n if method == 'TD':\n trajectory_, rewards_ = temporal_difference(current_values, batch=True)\n else:\n trajectory_, rewards_ = monte_carlo(current_values, batch=True)\n trajectories.append(trajectory_)\n rewards.append(rewards_)\n while True:\n # keep feeding our algorithm with trajectories seen so far until state value function converges\n updates = np.zeros(7)\n for trajectory_, rewards_ in zip(trajectories, rewards):\n for i in range(0, len(trajectory_) - 1):\n if method == 'TD':\n updates[trajectory_[i]] += rewards_[i] + current_values[trajectory_[i + 1]] - current_values[trajectory_[i]]\n else:\n updates[trajectory_[i]] += rewards_[i] - current_values[trajectory_[i]]\n updates *= alpha\n if np.sum(np.abs(updates)) < 1e-3:\n break\n # perform batch updating\n current_values += updates\n # calculate rms error\n errors.append(np.sqrt(np.sum(np.power(current_values - TRUE_VALUE, 2)) / 5.0))\n total_errors += np.asarray(errors)\n total_errors /= runs\n return total_errors\n\ndef example_6_2():\n plt.figure(figsize=(10, 20))\n plt.subplot(2, 1, 1)\n compute_state_value()\n\n plt.subplot(2, 1, 2)\n rms_error()\n plt.tight_layout()\n\n plt.savefig('./images/example_6_2.png')\n plt.close()\n\ndef figure_6_2():\n episodes = 100 + 1\n td_erros = batch_updating('TD', episodes)\n mc_erros = batch_updating('MC', episodes)\n\n plt.plot(td_erros, label='TD')\n plt.plot(mc_erros, label='MC')\n plt.xlabel('episodes')\n plt.ylabel('RMS error')\n plt.legend()\n\n plt.savefig('./images/figure_6_2.png')\n plt.close()",
"_____no_output_____"
],
[
"example_6_2()\nfigure_6_2()",
"100%|██████████| 100/100 [00:00<00:00, 207.83it/s]\n100%|██████████| 100/100 [00:00<00:00, 219.76it/s]\n100%|██████████| 100/100 [00:00<00:00, 214.69it/s]\n100%|██████████| 100/100 [00:00<00:00, 240.16it/s]\n100%|██████████| 100/100 [00:00<00:00, 235.45it/s]\n100%|██████████| 100/100 [00:00<00:00, 247.14it/s]\n100%|██████████| 100/100 [00:00<00:00, 257.44it/s]\n100%|██████████| 100/100 [00:49<00:00, 1.94it/s]\n100%|██████████| 100/100 [00:42<00:00, 2.25it/s]\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0535fe5ebfe0c823dcd4818741d7c53f2bba24d | 342,508 | ipynb | Jupyter Notebook | examples/notebooks/plot_quiver_curly.ipynb | teresaupdyke/codar_processing | d73abcbb68149c32281979a57637abf1734f50e3 | [
"MIT"
] | null | null | null | examples/notebooks/plot_quiver_curly.ipynb | teresaupdyke/codar_processing | d73abcbb68149c32281979a57637abf1734f50e3 | [
"MIT"
] | 1 | 2020-05-20T17:14:56.000Z | 2020-05-20T17:14:56.000Z | examples/notebooks/plot_quiver_curly.ipynb | teresaupdyke/codar_processing | d73abcbb68149c32281979a57637abf1734f50e3 | [
"MIT"
] | 6 | 2018-10-03T19:09:08.000Z | 2020-06-08T17:56:08.000Z | 159.157993 | 183,744 | 0.825452 | [
[
[
"import xarray as xr\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n# file = '/Users/mikesmith/Downloads/5MHz_6km_realtime-agg_2f30_fcd6_a21e.nc'\nfile = '/Users/mikesmith/Downloads/5MHz_6km_realtime-agg_a667_a2f2_f11b.nc'\n\nds = xr.open_dataset(file).mean('time')\nds",
"_____no_output_____"
],
[
"tds = ds.coarsen(longitude=2, latitude=2, boundary='pad').mean()\ntds",
"_____no_output_____"
],
[
"import cartopy.crs as ccrs\nimport matplotlib.ticker as mticker\nfrom cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER\nimport cartopy.feature as cfeature\n\nprojection = ccrs.Mercator()\n\nlon = tds.longitude\nlat = tds.latitude\n\nextent = [\n lon.min() - 1,\n lon.max() + 1,\n lat.min() - 1,\n lat.max() + 1\n]\n\nLAND = cfeature.NaturalEarthFeature(\n 'physical', 'land', '10m',\n edgecolor='face',\n facecolor='tan'\n)\n\nstate_lines = cfeature.NaturalEarthFeature(\n category='cultural',\n name='admin_1_states_provinces_lines',\n scale='50m',\n facecolor='none'\n)",
"_____no_output_____"
]
],
[
[
"#### Let's turn the mapping features into a function",
"_____no_output_____"
]
],
[
[
"def get_ticks(bounds, dirs, otherbounds):\n dirs = dirs.lower()\n l0 = np.float(bounds[0])\n l1 = np.float(bounds[1])\n r = np.max([l1 - l0, np.float(otherbounds[1]) - np.float(otherbounds[0])])\n if r <= 1.5:\n # <1.5 degrees: 15' major ticks, 5' minor ticks\n minor_int = 1.0 / 12.0\n major_int = 1.0 / 4.0\n elif r <= 3.0:\n # <3 degrees: 30' major ticks, 10' minor ticks\n minor_int = 1.0 / 6.0\n major_int = 0.5\n elif r <= 7.0:\n # <7 degrees: 1d major ticks, 15' minor ticks\n minor_int = 0.25\n major_int = np.float(1)\n elif r <= 15:\n # <15 degrees: 2d major ticks, 30' minor ticks\n minor_int = 0.5\n major_int = np.float(2)\n elif r <= 30:\n # <30 degrees: 3d major ticks, 1d minor ticks\n minor_int = np.float(1)\n major_int = np.float(3)\n else:\n # >=30 degrees: 5d major ticks, 1d minor ticks\n minor_int = np.float(1)\n major_int = np.float(5)\n\n minor_ticks = np.arange(np.ceil(l0 / minor_int) * minor_int, np.ceil(l1 / minor_int) * minor_int + minor_int,\n minor_int)\n minor_ticks = minor_ticks[minor_ticks <= l1]\n major_ticks = np.arange(np.ceil(l0 / major_int) * major_int, np.ceil(l1 / major_int) * major_int + major_int,\n major_int)\n major_ticks = major_ticks[major_ticks <= l1]\n\n if major_int < 1:\n d, m, s = dd2dms(np.array(major_ticks))\n if dirs == 'we' or dirs == 'ew' or dirs == 'lon' or dirs == 'long' or dirs == 'longitude':\n n = 'W' * sum(d < 0)\n p = 'E' * sum(d >= 0)\n dir = n + p\n major_tick_labels = [str(np.abs(int(d[i]))) + u\"\\N{DEGREE SIGN}\" + str(int(m[i])) + \"'\" + dir[i] for i in\n range(len(d))]\n elif dirs == 'sn' or dirs == 'ns' or dirs == 'lat' or dirs == 'latitude':\n n = 'S' * sum(d < 0)\n p = 'N' * sum(d >= 0)\n dir = n + p\n major_tick_labels = [str(np.abs(int(d[i]))) + u\"\\N{DEGREE SIGN}\" + str(int(m[i])) + \"'\" + dir[i] for i in\n range(len(d))]\n else:\n major_tick_labels = [str(int(d[i])) + u\"\\N{DEGREE SIGN}\" + str(int(m[i])) + \"'\" for i in range(len(d))]\n else:\n d = major_ticks\n if dirs == 'we' or dirs == 'ew' or dirs == 'lon' or dirs == 'long' or dirs == 'longitude':\n n = 'W' * sum(d < 0)\n p = 'E' * sum(d >= 0)\n dir = n + p\n major_tick_labels = [str(np.abs(int(d[i]))) + u\"\\N{DEGREE SIGN}\" + dir[i] for i in range(len(d))]\n elif dirs == 'sn' or dirs == 'ns' or dirs == 'lat' or dirs == 'latitude':\n n = 'S' * sum(d < 0)\n p = 'N' * sum(d >= 0)\n dir = n + p\n major_tick_labels = [str(np.abs(int(d[i]))) + u\"\\N{DEGREE SIGN}\" + dir[i] for i in range(len(d))]\n else:\n major_tick_labels = [str(int(d[i])) + u\"\\N{DEGREE SIGN}\" for i in range(len(d))]\n\n return minor_ticks, major_ticks, major_tick_labels",
"_____no_output_____"
],
[
"def add_map_features(ax, extent):\n# # Gridlines and grid labels\n# gl = ax.gridlines(\n# draw_labels=True,\n# linewidth=.5,\n# color='black',\n# alpha=0.25,\n# linestyle='--',\n# )\n\n# gl.xlabels_top = gl.ylabels_right = False\n# gl.xlabel_style = {'size': 16, 'color': 'black'}\n# gl.ylabel_style = {'size': 16, 'color': 'black'}\n\n# gl.xformatter = LONGITUDE_FORMATTER\n# gl.yformatter = LATITUDE_FORMATTER\n\n xl = [extent[0], extent[1]]\n yl = [extent[2], extent[3]]\n\n tick0x, tick1, ticklab = get_ticks(xl, 'we', yl)\n ax.set_xticks(tick0x, minor=True, crs=ccrs.PlateCarree())\n ax.set_xticks(tick1, crs=ccrs.PlateCarree())\n ax.set_xticklabels(ticklab, fontsize=14)\n\n # get and add latitude ticks/labels\n tick0y, tick1, ticklab = get_ticks(yl, 'sn', xl)\n ax.set_yticks(tick0y, minor=True, crs=ccrs.PlateCarree())\n ax.set_yticks(tick1, crs=ccrs.PlateCarree())\n ax.set_yticklabels(ticklab, fontsize=14)\n\n gl = ax.gridlines(draw_labels=False, linewidth=.5, color='gray', alpha=0.75, linestyle='--', crs=ccrs.PlateCarree())\n gl.xlocator = mticker.FixedLocator(tick0x)\n gl.ylocator = mticker.FixedLocator(tick0y)\n\n ax.tick_params(which='major',\n direction='out',\n bottom=True, top=True,\n labelbottom=True, labeltop=False,\n left=True, right=True,\n labelleft=True, labelright=False,\n length=5, width=2)\n\n ax.tick_params(which='minor',\n direction='out',\n bottom=True, top=True,\n labelbottom=True, labeltop=False,\n left=True, right=True,\n labelleft=True, labelright=False,\n width=1)\n\n # Axes properties and features\n ax.set_extent(extent)\n ax.add_feature(LAND, zorder=0, edgecolor='black')\n ax.add_feature(cfeature.LAKES)\n ax.add_feature(cfeature.BORDERS)\n ax.add_feature(state_lines, edgecolor='black')\n return ax",
"_____no_output_____"
]
],
[
[
"### Let's change the arrows",
"_____no_output_____"
]
],
[
[
"# velocity_min = np.int32(np.nanmin(speed)) # Get the minimum speed from the data\n# velocity_max =np.int32(np.nanmax(speed)) # Get the maximum speed from the data\n\n# velocity_min = 0 # Get the minimum speed from the data\n# velocity_max = 40 # Get the maximum speed from the data\n\n# Setup a keyword argument, kwargs, dictionary to pass optional arguments to the quiver plot\nkwargs = dict(\n transform=ccrs.PlateCarree(),\n scale=65, # Number of data units per arrow length unit, e.g., m/s per plot width; a smaller scale parameter makes the arrow longer. Default is None.\n headwidth=2.75, # Head width as multiple of shaft width.\n headlength=2.75, #Head length as multiple of shaft width.\n headaxislength=2.5, # Head length at shaft intersection.\n minshaft=1,\n minlength=1\n)\n\n# Clip the colors \n# color_clipped = np.clip(speed, velocity_min, velocity_max).squeeze(),\n\n# Set the colorbar ticks to correspond to the velocity minimum and maximum of the data with a step of 20... Append the max velocity \n# ticks = np.append(np.arange(velocity_min, velocity_max, 5), velocity_max)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.interpolate import griddata\n\nlon, lat = np.meshgrid(tds.longitude, tds.latitude)\nu = tds.u.data\nv = tds.v.data\n\n# \n# resample onto a 50x50 grid\nnx, ny = 50, 50\n\n# (N, 2) arrays of input x,y coords and u,v values\npts = np.vstack((lon.ravel(), lat.ravel())).T\nvals = np.vstack((u.ravel(), v.ravel())).T\n\n# the new x and y coordinates for the grid, which will correspond to the\n# columns and rows of u and v respectively\nxi = np.linspace(lon.min(), lon.max(), nx)\nyi = np.linspace(lat.min(), lat.max(), ny)\n\n# an (nx * ny, 2) array of x,y coordinates to interpolate at\nipts = np.vstack(a.ravel() for a in np.meshgrid(yi, xi)[::-1]).T\n\n# an (nx * ny, 2) array of interpolated u, v values\nivals = griddata(pts, vals, ipts, method='linear') # Only works with nearest\n\n# reshape interpolated u,v values into (ny, nx) arrays\nui, vi = ivals.T\nui.shape = vi.shape = (ny, nx)",
"/Users/mikesmith/miniconda3/envs/hfradar/lib/python3.7/site-packages/ipykernel_launcher.py:23: FutureWarning: arrays to stack must be passed as a \"sequence\" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.\n"
],
[
"np.nanmax(yi)",
"_____no_output_____"
],
[
"# Initialize blank plot with a mercator projection\nfig, ax = plt.subplots(\n figsize=(22, 16),\n subplot_kw=dict(projection=ccrs.Mercator())\n)\n\nnorm = np.sqrt(ui**2 + vi**2)\nnorm_flat = norm.flatten()\n\nstart_points = np.array([xi.flatten(), yi.flatten()]).T\nscale = .2/np.nanmax(norm)\n\nfor i in range(start_points.shape[0]):\n plt.streamplot(xi, yi, ui, vi, \n color='k',\n start_points=np.array([start_points[i,:]]),\n minlength=.95*norm_flat[i]*scale,\n maxlength=1.0*norm_flat[i]*scale,\n integration_direction='backward', \n density=10, \n arrowsize=0.0,\n transform=ccrs.PlateCarree()\n )\n\n# Add map features to the axes\nadd_map_features(ax, extent)\n\n# plt.quiver(xi, yi, ui/norm, vi/norm, scale=30, transform=ccrs.PlateCarree())",
"/Users/mikesmith/miniconda3/envs/hfradar/lib/python3.7/site-packages/ipykernel_launcher.py:3: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n This is separate from the ipykernel package so we can avoid doing imports until\n/Users/mikesmith/miniconda3/envs/hfradar/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n after removing the cwd from sys.path.\n/Users/mikesmith/miniconda3/envs/hfradar/lib/python3.7/site-packages/ipykernel_launcher.py:5: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n \"\"\"\n/Users/mikesmith/miniconda3/envs/hfradar/lib/python3.7/site-packages/ipykernel_launcher.py:21: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\nw = 3\nY, X = np.mgrid[-w:w:8j, -w:w:8j]\n\nU = -Y\nV = X\nnorm = np.sqrt(U**2 + V**2)\nnorm_flat = norm.flatten()\n\nstart_points = np.array([X.flatten(),Y.flatten()]).T\n\nplt.clf()\nscale = .2/np.max(norm)\n\nplt.subplot(121)\nplt.title('scaling only the length')\nfor i in range(start_points.shape[0]):\n plt.streamplot(X,Y,U,V, color='k', start_points=np.array([start_points[i,:]]),minlength=.95*norm_flat[i]*scale, maxlength=1.0*norm_flat[i]*scale,\n integration_direction='backward', density=10, arrowsize=0.0)\nplt.quiver(X,Y,U/norm, V/norm,scale=30)\nplt.axis('square')\n\n\n\nplt.subplot(122)\nplt.title('scaling length, arrowhead and linewidth')\nfor i in range(start_points.shape[0]):\n plt.streamplot(X,Y,U,V, color='k', start_points=np.array([start_points[i,:]]),minlength=.95*norm_flat[i]*scale, maxlength=1.0*norm_flat[i]*scale,\n integration_direction='backward', density=10, arrowsize=0.0, linewidth=.5*norm_flat[i])\nplt.quiver(X,Y,U/np.max(norm), V/np.max(norm),scale=30)\n\nplt.axis('square')",
"/Users/mikesmith/miniconda3/envs/hfradar/lib/python3.7/site-packages/matplotlib/patches.py:3027: RuntimeWarning: invalid value encountered in double_scalars\n cos_t, sin_t = head_length / head_dist, head_width / head_dist\n"
],
[
"\"\"\"\nStreamline plotting for 2D vector fields.\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\n\nimport six\nfrom six.moves import xrange\nfrom scipy.interpolate import interp1d\n\nimport numpy as np\nimport matplotlib\nimport matplotlib.cm as cm\nimport matplotlib.colors as mcolors\nimport matplotlib.collections as mcollections\nimport matplotlib.lines as mlines\nimport matplotlib.patches as patches\n\n\ndef velovect(axes, x, y, u, v, linewidth=None, color=None,\n cmap=None, norm=None, arrowsize=1, arrowstyle='-|>',\n transform=None, zorder=None, start_points=None,\n scale=1.0, grains=15):\n \"\"\"Draws streamlines of a vector flow.\n *x*, *y* : 1d arrays\n an *evenly spaced* grid.\n *u*, *v* : 2d arrays\n x and y-velocities. Number of rows should match length of y, and\n the number of columns should match x.\n *density* : float or 2-tuple\n Controls the closeness of streamlines. When `density = 1`, the domain\n is divided into a 30x30 grid---*density* linearly scales this grid.\n Each cell in the grid can have, at most, one traversing streamline.\n For different densities in each direction, use [density_x, density_y].\n *linewidth* : numeric or 2d array\n vary linewidth when given a 2d array with the same shape as velocities.\n *color* : matplotlib color code, or 2d array\n Streamline color. When given an array with the same shape as\n velocities, *color* values are converted to colors using *cmap*.\n *cmap* : :class:`~matplotlib.colors.Colormap`\n Colormap used to plot streamlines and arrows. Only necessary when using\n an array input for *color*.\n *norm* : :class:`~matplotlib.colors.Normalize`\n Normalize object used to scale luminance data to 0, 1. If None, stretch\n (min, max) to (0, 1). Only necessary when *color* is an array.\n *arrowsize* : float\n Factor scale arrow size.\n *arrowstyle* : str\n Arrow style specification.\n See :class:`~matplotlib.patches.FancyArrowPatch`.\n *minlength* : float\n Minimum length of streamline in axes coordinates.\n *start_points*: Nx2 array\n Coordinates of starting points for the streamlines.\n In data coordinates, the same as the ``x`` and ``y`` arrays.\n *zorder* : int\n any number\n *scale* : float\n Maximum length of streamline in axes coordinates.\n Returns:\n *stream_container* : StreamplotSet\n Container object with attributes\n - lines: `matplotlib.collections.LineCollection` of streamlines\n - arrows: collection of `matplotlib.patches.FancyArrowPatch`\n objects representing arrows half-way along stream\n lines.\n This container will probably change in the future to allow changes\n to the colormap, alpha, etc. 
for both lines and arrows, but these\n changes should be backward compatible.\n \"\"\"\n grid = Grid(x, y)\n mask = StreamMask(10)\n dmap = DomainMap(grid, mask)\n\n if zorder is None:\n zorder = mlines.Line2D.zorder\n\n # default to data coordinates\n if transform is None:\n transform = axes.transData\n\n if color is None:\n color = axes._get_lines.get_next_color()\n\n if linewidth is None:\n linewidth = matplotlib.rcParams['lines.linewidth']\n\n line_kw = {}\n arrow_kw = dict(arrowstyle=arrowstyle, mutation_scale=10 * arrowsize)\n\n use_multicolor_lines = isinstance(color, np.ndarray)\n if use_multicolor_lines:\n if color.shape != grid.shape:\n raise ValueError(\n \"If 'color' is given, must have the shape of 'Grid(x,y)'\")\n line_colors = []\n color = np.ma.masked_invalid(color)\n else:\n line_kw['color'] = color\n arrow_kw['color'] = color\n\n if isinstance(linewidth, np.ndarray):\n if linewidth.shape != grid.shape:\n raise ValueError(\n \"If 'linewidth' is given, must have the shape of 'Grid(x,y)'\")\n line_kw['linewidth'] = []\n else:\n line_kw['linewidth'] = linewidth\n arrow_kw['linewidth'] = linewidth\n\n line_kw['zorder'] = zorder\n arrow_kw['zorder'] = zorder\n\n ## Sanity checks.\n if u.shape != grid.shape or v.shape != grid.shape:\n raise ValueError(\"'u' and 'v' must be of shape 'Grid(x,y)'\")\n\n u = np.ma.masked_invalid(u)\n v = np.ma.masked_invalid(v)\n magnitude = np.sqrt(u**2 + v**2)\n magnitude/=np.max(magnitude)\n\n resolution = scale/grains\n minlength = .9*resolution\n integrate = get_integrator(u, v, dmap, minlength, resolution, magnitude)\n\n trajectories = []\n edges = []\n \n if start_points is None:\n start_points=_gen_starting_points(x,y,grains)\n \n sp2 = np.asanyarray(start_points, dtype=float).copy()\n\n # Check if start_points are outside the data boundaries\n for xs, ys in sp2:\n if not (grid.x_origin <= xs <= grid.x_origin + grid.width\n and grid.y_origin <= ys <= grid.y_origin + grid.height):\n raise ValueError(\"Starting point ({}, {}) outside of data \"\n \"boundaries\".format(xs, ys))\n\n # Convert start_points from data to array coords\n # Shift the seed points from the bottom left of the data so that\n # data2grid works properly.\n sp2[:, 0] -= grid.x_origin\n sp2[:, 1] -= grid.y_origin\n\n for xs, ys in sp2:\n xg, yg = dmap.data2grid(xs, ys)\n t = integrate(xg, yg)\n if t is not None:\n trajectories.append(t[0])\n edges.append(t[1])\n\n if use_multicolor_lines:\n if norm is None:\n norm = mcolors.Normalize(color.min(), color.max())\n if cmap is None:\n cmap = cm.get_cmap(matplotlib.rcParams['image.cmap'])\n else:\n cmap = cm.get_cmap(cmap)\n\n streamlines = []\n arrows = []\n for t, edge in zip(trajectories,edges):\n tgx = np.array(t[0])\n tgy = np.array(t[1])\n \n # Rescale from grid-coordinates to data-coordinates.\n tx, ty = dmap.grid2data(*np.array(t))\n tx += grid.x_origin\n ty += grid.y_origin\n\n \n points = np.transpose([tx, ty]).reshape(-1, 1, 2)\n streamlines.extend(np.hstack([points[:-1], points[1:]]))\n\n # Add arrows half way along each trajectory.\n s = np.cumsum(np.sqrt(np.diff(tx) ** 2 + np.diff(ty) ** 2))\n n = np.searchsorted(s, s[-1])\n arrow_tail = (tx[n], ty[n])\n arrow_head = (np.mean(tx[n:n + 2]), np.mean(ty[n:n + 2]))\n\n if isinstance(linewidth, np.ndarray):\n line_widths = interpgrid(linewidth, tgx, tgy)[:-1]\n line_kw['linewidth'].extend(line_widths)\n arrow_kw['linewidth'] = line_widths[n]\n\n if use_multicolor_lines:\n color_values = interpgrid(color, tgx, tgy)[:-1]\n line_colors.append(color_values)\n arrow_kw['color'] 
= cmap(norm(color_values[n]))\n \n if not edge:\n p = patches.FancyArrowPatch(\n arrow_tail, arrow_head, transform=transform, **arrow_kw)\n else:\n continue\n \n ds = np.sqrt((arrow_tail[0]-arrow_head[0])**2+(arrow_tail[1]-arrow_head[1])**2)\n \n if ds<1e-15: continue #remove vanishingly short arrows that cause Patch to fail\n \n axes.add_patch(p)\n arrows.append(p) \n\n lc = mcollections.LineCollection(\n streamlines, transform=transform, **line_kw)\n lc.sticky_edges.x[:] = [grid.x_origin, grid.x_origin + grid.width]\n lc.sticky_edges.y[:] = [grid.y_origin, grid.y_origin + grid.height]\n if use_multicolor_lines:\n lc.set_array(np.ma.hstack(line_colors))\n lc.set_cmap(cmap)\n lc.set_norm(norm)\n axes.add_collection(lc)\n axes.autoscale_view()\n\n ac = matplotlib.collections.PatchCollection(arrows)\n stream_container = StreamplotSet(lc, ac)\n return stream_container\n\nclass StreamplotSet(object):\n\n def __init__(self, lines, arrows, **kwargs):\n self.lines = lines\n self.arrows = arrows\n\n\n# Coordinate definitions\n# ========================\n\nclass DomainMap(object):\n \"\"\"Map representing different coordinate systems.\n Coordinate definitions:\n * axes-coordinates goes from 0 to 1 in the domain.\n * data-coordinates are specified by the input x-y coordinates.\n * grid-coordinates goes from 0 to N and 0 to M for an N x M grid,\n where N and M match the shape of the input data.\n * mask-coordinates goes from 0 to N and 0 to M for an N x M mask,\n where N and M are user-specified to control the density of streamlines.\n This class also has methods for adding trajectories to the StreamMask.\n Before adding a trajectory, run `start_trajectory` to keep track of regions\n crossed by a given trajectory. Later, if you decide the trajectory is bad\n (e.g., if the trajectory is very short) just call `undo_trajectory`.\n \"\"\"\n\n def __init__(self, grid, mask):\n self.grid = grid\n self.mask = mask\n # Constants for conversion between grid- and mask-coordinates\n self.x_grid2mask = (mask.nx - 1) / grid.nx\n self.y_grid2mask = (mask.ny - 1) / grid.ny\n\n self.x_mask2grid = 1. / self.x_grid2mask\n self.y_mask2grid = 1. / self.y_grid2mask\n\n self.x_data2grid = 1. / grid.dx\n self.y_data2grid = 1. 
/ grid.dy\n\n def grid2mask(self, xi, yi):\n \"\"\"Return nearest space in mask-coords from given grid-coords.\"\"\"\n return (int((xi * self.x_grid2mask) + 0.5),\n int((yi * self.y_grid2mask) + 0.5))\n\n def mask2grid(self, xm, ym):\n return xm * self.x_mask2grid, ym * self.y_mask2grid\n\n def data2grid(self, xd, yd):\n return xd * self.x_data2grid, yd * self.y_data2grid\n\n def grid2data(self, xg, yg):\n return xg / self.x_data2grid, yg / self.y_data2grid\n\n def start_trajectory(self, xg, yg):\n xm, ym = self.grid2mask(xg, yg)\n self.mask._start_trajectory(xm, ym)\n\n def reset_start_point(self, xg, yg):\n xm, ym = self.grid2mask(xg, yg)\n self.mask._current_xy = (xm, ym)\n\n def update_trajectory(self, xg, yg):\n \n xm, ym = self.grid2mask(xg, yg)\n #self.mask._update_trajectory(xm, ym)\n\n def undo_trajectory(self):\n self.mask._undo_trajectory()\n \n\n\nclass Grid(object):\n \"\"\"Grid of data.\"\"\"\n def __init__(self, x, y):\n\n if x.ndim == 1:\n pass\n elif x.ndim == 2:\n x_row = x[0, :]\n if not np.allclose(x_row, x):\n raise ValueError(\"The rows of 'x' must be equal\")\n x = x_row\n else:\n raise ValueError(\"'x' can have at maximum 2 dimensions\")\n\n if y.ndim == 1:\n pass\n elif y.ndim == 2:\n y_col = y[:, 0]\n if not np.allclose(y_col, y.T):\n raise ValueError(\"The columns of 'y' must be equal\")\n y = y_col\n else:\n raise ValueError(\"'y' can have at maximum 2 dimensions\")\n\n self.nx = len(x)\n self.ny = len(y)\n\n self.dx = x[1] - x[0]\n self.dy = y[1] - y[0]\n\n self.x_origin = x[0]\n self.y_origin = y[0]\n\n self.width = x[-1] - x[0]\n self.height = y[-1] - y[0]\n\n @property\n def shape(self):\n return self.ny, self.nx\n\n def within_grid(self, xi, yi):\n \"\"\"Return True if point is a valid index of grid.\"\"\"\n # Note that xi/yi can be floats; so, for example, we can't simply check\n # `xi < self.nx` since `xi` can be `self.nx - 1 < xi < self.nx`\n return xi >= 0 and xi <= self.nx - 1 and yi >= 0 and yi <= self.ny - 1\n\n\nclass StreamMask(object):\n \"\"\"Mask to keep track of discrete regions crossed by streamlines.\n The resolution of this grid determines the approximate spacing between\n trajectories. 
Streamlines are only allowed to pass through zeroed cells:\n When a streamline enters a cell, that cell is set to 1, and no new\n streamlines are allowed to enter.\n \"\"\"\n\n def __init__(self, density):\n if np.isscalar(density):\n if density <= 0:\n raise ValueError(\"If a scalar, 'density' must be positive\")\n self.nx = self.ny = int(30 * density)\n else:\n if len(density) != 2:\n raise ValueError(\"'density' can have at maximum 2 dimensions\")\n self.nx = int(30 * density[0])\n self.ny = int(30 * density[1])\n self._mask = np.zeros((self.ny, self.nx))\n self.shape = self._mask.shape\n\n self._current_xy = None\n\n def __getitem__(self, *args):\n return self._mask.__getitem__(*args)\n\n def _start_trajectory(self, xm, ym):\n \"\"\"Start recording streamline trajectory\"\"\"\n self._traj = []\n self._update_trajectory(xm, ym)\n\n def _undo_trajectory(self):\n \"\"\"Remove current trajectory from mask\"\"\"\n for t in self._traj:\n self._mask.__setitem__(t, 0)\n\n def _update_trajectory(self, xm, ym):\n \"\"\"Update current trajectory position in mask.\n If the new position has already been filled, raise `InvalidIndexError`.\n \"\"\"\n #if self._current_xy != (xm, ym):\n # if self[ym, xm] == 0:\n self._traj.append((ym, xm))\n self._mask[ym, xm] = 1\n self._current_xy = (xm, ym)\n # else:\n # raise InvalidIndexError\n\n\n\n\n# Integrator definitions\n#========================\n\ndef get_integrator(u, v, dmap, minlength, resolution, magnitude):\n\n # rescale velocity onto grid-coordinates for integrations.\n u, v = dmap.data2grid(u, v)\n\n # speed (path length) will be in axes-coordinates\n u_ax = u / dmap.grid.nx\n v_ax = v / dmap.grid.ny\n speed = np.ma.sqrt(u_ax ** 2 + v_ax ** 2)\n\n def forward_time(xi, yi):\n ds_dt = interpgrid(speed, xi, yi)\n if ds_dt == 0:\n raise TerminateTrajectory()\n dt_ds = 1. / ds_dt\n ui = interpgrid(u, xi, yi)\n vi = interpgrid(v, xi, yi)\n return ui * dt_ds, vi * dt_ds\n\n\n def integrate(x0, y0):\n \"\"\"Return x, y grid-coordinates of trajectory based on starting point.\n Integrate both forward and backward in time from starting point in\n grid coordinates.\n Integration is terminated when a trajectory reaches a domain boundary\n or when it crosses into an already occupied cell in the StreamMask. The\n resulting trajectory is None if it is shorter than `minlength`.\n \"\"\"\n\n stotal, x_traj, y_traj = 0., [], []\n\n \n dmap.start_trajectory(x0, y0)\n\n dmap.reset_start_point(x0, y0)\n stotal, x_traj, y_traj, m_total, hit_edge = _integrate_rk12(x0, y0, dmap, forward_time, resolution, magnitude)\n\n \n if len(x_traj)>1:\n return (x_traj, y_traj), hit_edge\n else: # reject short trajectories\n dmap.undo_trajectory()\n return None\n\n return integrate\n\n\ndef _integrate_rk12(x0, y0, dmap, f, resolution, magnitude):\n \"\"\"2nd-order Runge-Kutta algorithm with adaptive step size.\n This method is also referred to as the improved Euler's method, or Heun's\n method. This method is favored over higher-order methods because:\n 1. To get decent looking trajectories and to sample every mask cell\n on the trajectory we need a small timestep, so a lower order\n solver doesn't hurt us unless the data is *very* high resolution.\n In fact, for cases where the user inputs\n data smaller or of similar grid size to the mask grid, the higher\n order corrections are negligible because of the very fast linear\n interpolation used in `interpgrid`.\n 2. For high resolution input data (i.e. beyond the mask\n resolution), we must reduce the timestep. 
Therefore, an adaptive\n timestep is more suited to the problem as this would be very hard\n to judge automatically otherwise.\n This integrator is about 1.5 - 2x as fast as both the RK4 and RK45\n solvers in most setups on my machine. I would recommend removing the\n other two to keep things simple.\n \"\"\"\n # This error is below that needed to match the RK4 integrator. It\n # is set for visual reasons -- too low and corners start\n # appearing ugly and jagged. Can be tuned.\n maxerror = 0.003\n\n # This limit is important (for all integrators) to avoid the\n # trajectory skipping some mask cells. We could relax this\n # condition if we use the code which is commented out below to\n # increment the location gradually. However, due to the efficient\n # nature of the interpolation, this doesn't boost speed by much\n # for quite a bit of complexity.\n maxds = min(1. / dmap.mask.nx, 1. / dmap.mask.ny, 0.1)\n\n ds = maxds\n stotal = 0\n xi = x0\n yi = y0\n xf_traj = []\n yf_traj = []\n m_total = []\n hit_edge = False\n \n while dmap.grid.within_grid(xi, yi):\n xf_traj.append(xi)\n yf_traj.append(yi)\n m_total.append(interpgrid(magnitude, xi, yi))\n try:\n k1x, k1y = f(xi, yi)\n k2x, k2y = f(xi + ds * k1x,\n yi + ds * k1y)\n except IndexError:\n # Out of the domain on one of the intermediate integration steps.\n # Take an Euler step to the boundary to improve neatness.\n ds, xf_traj, yf_traj = _euler_step(xf_traj, yf_traj, dmap, f)\n stotal += ds\n hit_edge = True\n break\n except TerminateTrajectory:\n break\n\n dx1 = ds * k1x\n dy1 = ds * k1y\n dx2 = ds * 0.5 * (k1x + k2x)\n dy2 = ds * 0.5 * (k1y + k2y)\n\n nx, ny = dmap.grid.shape\n # Error is normalized to the axes coordinates\n error = np.sqrt(((dx2 - dx1) / nx) ** 2 + ((dy2 - dy1) / ny) ** 2)\n\n # Only save step if within error tolerance\n if error < maxerror:\n xi += dx2\n yi += dy2\n \n dmap.update_trajectory(xi, yi)\n \n if not dmap.grid.within_grid(xi, yi):\n hit_edge=True\n \n if (stotal + ds) > resolution*np.mean(m_total):\n break\n stotal += ds\n\n # recalculate stepsize based on step error\n if error == 0:\n ds = maxds\n else:\n ds = min(maxds, 0.85 * ds * (maxerror / error) ** 0.5)\n\n return stotal, xf_traj, yf_traj, m_total, hit_edge\n\n\ndef _euler_step(xf_traj, yf_traj, dmap, f):\n \"\"\"Simple Euler integration step that extends streamline to boundary.\"\"\"\n ny, nx = dmap.grid.shape\n xi = xf_traj[-1]\n yi = yf_traj[-1]\n cx, cy = f(xi, yi)\n if cx == 0:\n dsx = np.inf\n elif cx < 0:\n dsx = xi / -cx\n else:\n dsx = (nx - 1 - xi) / cx\n if cy == 0:\n dsy = np.inf\n elif cy < 0:\n dsy = yi / -cy\n else:\n dsy = (ny - 1 - yi) / cy\n ds = min(dsx, dsy)\n xf_traj.append(xi + cx * ds)\n yf_traj.append(yi + cy * ds)\n return ds, xf_traj, yf_traj\n\n\n# Utility functions\n# ========================\n\ndef interpgrid(a, xi, yi):\n \"\"\"Fast 2D, linear interpolation on an integer grid\"\"\"\n\n Ny, Nx = np.shape(a)\n if isinstance(xi, np.ndarray):\n x = xi.astype(int)\n y = yi.astype(int)\n # Check that xn, yn don't exceed max index\n xn = np.clip(x + 1, 0, Nx - 1)\n yn = np.clip(y + 1, 0, Ny - 1)\n else:\n x = int(xi)\n y = int(yi)\n # conditional is faster than clipping for integers\n if x == (Nx - 2):\n xn = x\n else:\n xn = x + 1\n if y == (Ny - 2):\n yn = y\n else:\n yn = y + 1\n\n a00 = a[y, x]\n a01 = a[y, xn]\n a10 = a[yn, x]\n a11 = a[yn, xn]\n xt = xi - x\n yt = yi - y\n a0 = a00 * (1 - xt) + a01 * xt\n a1 = a10 * (1 - xt) + a11 * xt\n ai = a0 * (1 - yt) + a1 * yt\n\n if not isinstance(xi, np.ndarray):\n if 
np.ma.is_masked(ai):\n raise TerminateTrajectory\n\n return ai\n\n\ndef _gen_starting_points(x,y,grains):\n \n eps = np.finfo(np.float32).eps\n \n tmp_x = np.linspace(x.min()+eps, x.max()-eps, grains)\n tmp_y = np.linspace(y.min()+eps, y.max()-eps, grains)\n \n xs = np.tile(tmp_x, grains)\n ys = np.repeat(tmp_y, grains)\n\n seed_points = np.array([list(xs), list(ys)])\n \n return seed_points.T",
"_____no_output_____"
],
[
"f, ax = plt.subplots(figsize=(15,4))\n\n\ngrains = 15\ntmp = np.linspace(-3, 3, grains)\nxs = np.tile(tmp, grains)\nys = np.repeat(tmp, grains)\n\nseed_points = np.array([list(xs), list(ys)])\n\nscale=2.\n\nvelovect(ax, xi, yi, ui, vi, arrowstyle='fancy', scale = 1.5, grains = 15, color='k')\n\n\n# cs = ax.contourf(xi,yi, W, cmap=plt.cm.viridis, alpha=0.5, zorder=-1)\n\n\n# ax1.set_title(\"Quiver\")\n# ax2.set_title(\"Streamplot\")\n# ax3.set_title(\"Curved quivers\")\n\n\n# plt.colorbar(cs, ax=[ax1,ax2,ax3]) \nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d053770dbcc459070a7bb2318e61a1a4584d63eb | 52,698 | ipynb | Jupyter Notebook | introduction_to_amazon_algorithms/object_detection_birds/object_detection_birds.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 1 | 2021-06-21T12:48:16.000Z | 2021-06-21T12:48:16.000Z | introduction_to_amazon_algorithms/object_detection_birds/object_detection_birds.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 1 | 2019-07-01T23:54:20.000Z | 2019-07-01T23:55:29.000Z | introduction_to_amazon_algorithms/object_detection_birds/object_detection_birds.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 2 | 2021-06-24T11:49:58.000Z | 2021-06-24T11:54:01.000Z | 41.138173 | 799 | 0.60767 | [
[
[
"# Amazon SageMaker Object Detection for Bird Species\n\n1. [Introduction](#Introduction)\n2. [Setup](#Setup)\n3. [Data Preparation](#Data-Preparation)\n 1. [Download and unpack the dataset](#Download-and-unpack-the-dataset)\n 2. [Understand the dataset](#Understand-the-dataset)\n 3. [Generate RecordIO files](#Generate-RecordIO-files)\n4. [Train the model](#Train-the-model)\n5. [Host the model](#Host-the-model)\n6. [Test the model](#Test-the-model)\n7. [Clean up](#Clean-up)\n8. [Improve the model](#Improve-the-model)\n9. [Final cleanup](#Final-cleanup)",
"_____no_output_____"
],
[
"## Introduction\n\nObject detection is the process of identifying and localizing objects in an image. A typical object detection solution takes an image as input and provides a bounding box on the image where an object of interest is found. It also identifies what type of object the box encapsulates. To create such a solution, we need to acquire and process a traning dataset, create and setup a training job for the alorithm so that it can learn about the dataset. Finally, we can then host the trained model in an endpoint, to which we can supply images.\n\nThis notebook is an end-to-end example showing how the Amazon SageMaker Object Detection algorithm can be used with a publicly available dataset of bird images. We demonstrate how to train and to host an object detection model based on the [Caltech Birds (CUB 200 2011)](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset. Amazon SageMaker's object detection algorithm uses the Single Shot multibox Detector ([SSD](https://arxiv.org/abs/1512.02325)) algorithm, and this notebook uses a [ResNet](https://arxiv.org/pdf/1603.05027.pdf) base network with that algorithm.\n\n\n\nWe will also demonstrate how to construct a training dataset using the RecordIO format, as this is the format that the training job consumes. This notebook is similar to the [Object Detection using the RecordIO format](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb) notebook, with the following key differences:\n\n- We provide an example of how to translate bounding box specifications when providing images to SageMaker's algorithm. You will see code for generating the train.lst and val.lst files used to create [recordIO](https://mxnet.incubator.apache.org/architecture/note_data_loading.html) files.\n- We demonstrate how to improve an object detection model by adding training images that are flipped horizontally (mirror images).\n- We give you a notebook for experimenting with object detection challenges with an order of magnitude more classes (200 bird species, as opposed to the 20 categories used by [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/)).\n- We show how to chart the accuracy improvements that occur across the epochs of the training job.\n\nNote that Amazon SageMaker Object Detection also allows training with the image and JSON format, which is illustrated in the [image and JSON Notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_image_json_format.ipynb).",
"_____no_output_____"
],
[
"## Setup\n\nBefore preparing the data, there are some initial steps required for setup.\n",
"_____no_output_____"
],
[
"This notebook requires two additional Python packages:\n* **OpenCV** is required for gathering image sizes and flipping of images horizontally.\n* The **MXNet** runtime is required for using the im2rec tool.",
"_____no_output_____"
]
],
[
[
"import sys\n\n!{sys.executable} -m pip install opencv-python\n!{sys.executable} -m pip install mxnet",
"_____no_output_____"
]
],
[
[
"We need to identify the S3 bucket that you want to use for providing training and validation datasets. It will also be used to store the tranied model artifacts. In this notebook, we use a custom bucket. You could alternatively use a default bucket for the session. We use an object prefix to help organize the bucket content.",
"_____no_output_____"
]
],
[
[
"bucket = \"<your_s3_bucket_name_here>\" # custom bucket name.\nprefix = \"DEMO-ObjectDetection-birds\"",
"_____no_output_____"
]
],
[
[
"To train the Object Detection algorithm on Amazon SageMaker, we need to setup and authenticate the use of AWS services. To begin with, we need an AWS account role with SageMaker access. Here we will use the execution role the current notebook instance was given when it was created. This role has necessary permissions, including access to your data in S3.",
"_____no_output_____"
]
],
[
[
"import sagemaker\nfrom sagemaker import get_execution_role\n\nrole = get_execution_role()\nprint(role)\nsess = sagemaker.Session()",
"_____no_output_____"
]
],
[
[
"# Data Preparation\n\nThe [Caltech Birds (CUB 200 2011)](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html) dataset contains 11,788 images across 200 bird species (the original technical report can be found [here](http://www.vision.caltech.edu/visipedia/papers/CUB_200_2011.pdf)). Each species comes with around 60 images, with a typical size of about 350 pixels by 500 pixels. Bounding boxes are provided, as are annotations of bird parts. A recommended train/test split is given, but image size data is not.\n\n\n\nThe dataset can be downloaded [here](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html).\n\n## Download and unpack the dataset\n\nHere we download the birds dataset from CalTech.",
"_____no_output_____"
]
],
[
[
"import os\nimport urllib.request\n\n\ndef download(url):\n filename = url.split(\"/\")[-1]\n if not os.path.exists(filename):\n urllib.request.urlretrieve(url, filename)",
"_____no_output_____"
],
[
"%%time\n# download('http://www.vision.caltech.edu/visipedia-data/CUB-200-2011/CUB_200_2011.tgz')\n# CalTech's download is (at least temporarily) unavailable since August 2020.\n\n# Can now use one made available by fast.ai .\ndownload(\"https://s3.amazonaws.com/fast-ai-imageclas/CUB_200_2011.tgz\")",
"_____no_output_____"
]
],
[
[
"Now we unpack the dataset into its own directory structure.",
"_____no_output_____"
]
],
[
[
"%%time\n# Clean up prior version of the downloaded dataset if you are running this again\n!rm -rf CUB_200_2011\n\n# Unpack and then remove the downloaded compressed tar file\n!gunzip -c ./CUB_200_2011.tgz | tar xopf -\n!rm CUB_200_2011.tgz",
"_____no_output_____"
]
],
[
[
"# Understand the dataset",
"_____no_output_____"
],
[
"## Set some parameters for the rest of the notebook to use",
"_____no_output_____"
],
[
"Here we define a few parameters that help drive the rest of the notebook. For example, `SAMPLE_ONLY` is defaulted to `True`. This will force the notebook to train on only a handful of species. Setting to false will make the notebook work with the entire dataset of 200 bird species. This makes the training a more difficult challenge, and you will need many more epochs to complete.\n\nThe file parameters define names and locations of metadata files for the dataset.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport cv2\nimport boto3\nimport json\n\nruntime = boto3.client(service_name=\"runtime.sagemaker\")\n\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nRANDOM_SPLIT = False\nSAMPLE_ONLY = True\nFLIP = False\n\n# To speed up training and experimenting, you can use a small handful of species.\n# To see the full list of the classes available, look at the content of CLASSES_FILE.\nCLASSES = [17, 36, 47, 68, 73]\n\n# Otherwise, you can use the full set of species\nif not SAMPLE_ONLY:\n CLASSES = []\n for c in range(200):\n CLASSES += [c + 1]\n\nRESIZE_SIZE = 256\n\nBASE_DIR = \"CUB_200_2011/\"\nIMAGES_DIR = BASE_DIR + \"images/\"\n\nCLASSES_FILE = BASE_DIR + \"classes.txt\"\nBBOX_FILE = BASE_DIR + \"bounding_boxes.txt\"\nIMAGE_FILE = BASE_DIR + \"images.txt\"\nLABEL_FILE = BASE_DIR + \"image_class_labels.txt\"\nSIZE_FILE = BASE_DIR + \"sizes.txt\"\nSPLIT_FILE = BASE_DIR + \"train_test_split.txt\"\n\nTRAIN_LST_FILE = \"birds_ssd_train.lst\"\nVAL_LST_FILE = \"birds_ssd_val.lst\"\n\nif SAMPLE_ONLY:\n TRAIN_LST_FILE = \"birds_ssd_sample_train.lst\"\n VAL_LST_FILE = \"birds_ssd_sample_val.lst\"\n\nTRAIN_RATIO = 0.8\nCLASS_COLS = [\"class_number\", \"class_id\"]\nIM2REC_SSD_COLS = [\n \"header_cols\",\n \"label_width\",\n \"zero_based_id\",\n \"xmin\",\n \"ymin\",\n \"xmax\",\n \"ymax\",\n \"image_file_name\",\n]",
"_____no_output_____"
]
],
[
[
"## Explore the dataset images\n\nFor each species, there are dozens of images of various shapes and sizes. By dividing the entire dataset into individual named (numbered) folders, the images are in effect labelled for supervised learning using image classification and object detection algorithms. \n\nThe following function displays a grid of thumbnail images for all the image files for a given species.",
"_____no_output_____"
]
],
[
[
"def show_species(species_id):\n _im_list = !ls $IMAGES_DIR/$species_id\n\n NUM_COLS = 6\n IM_COUNT = len(_im_list)\n\n print('Species ' + species_id + ' has ' + str(IM_COUNT) + ' images.')\n \n NUM_ROWS = int(IM_COUNT / NUM_COLS)\n if ((IM_COUNT % NUM_COLS) > 0):\n NUM_ROWS += 1\n\n fig, axarr = plt.subplots(NUM_ROWS, NUM_COLS)\n fig.set_size_inches(8.0, 16.0, forward=True)\n\n curr_row = 0\n for curr_img in range(IM_COUNT):\n # fetch the url as a file type object, then read the image\n f = IMAGES_DIR + species_id + '/' + _im_list[curr_img]\n a = plt.imread(f)\n\n # find the column by taking the current index modulo 3\n col = curr_img % NUM_ROWS\n # plot on relevant subplot\n axarr[col, curr_row].imshow(a)\n if col == (NUM_ROWS - 1):\n # we have finished the current row, so increment row counter\n curr_row += 1\n\n fig.tight_layout() \n plt.show()\n \n # Clean up\n plt.clf()\n plt.cla()\n plt.close()",
"_____no_output_____"
]
],
[
[
"Show the list of bird species or dataset classes.",
"_____no_output_____"
]
],
[
[
"classes_df = pd.read_csv(CLASSES_FILE, sep=\" \", names=CLASS_COLS, header=None)\ncriteria = classes_df[\"class_number\"].isin(CLASSES)\nclasses_df = classes_df[criteria]\nprint(classes_df.to_csv(columns=[\"class_id\"], sep=\"\\t\", index=False, header=False))",
"_____no_output_____"
]
],
[
[
"Now for any given species, display thumbnail images of each of the images provided for training and testing.",
"_____no_output_____"
]
],
[
[
"show_species(\"017.Cardinal\")",
"_____no_output_____"
]
],
[
[
"# Generate RecordIO files",
"_____no_output_____"
],
[
"## Step 1. Gather image sizes\n\nFor this particular dataset, bounding box annotations are specified in absolute terms. RecordIO format requires them to be defined in terms relative to the image size. The following code visits each image, extracts the height and width, and saves this information into a file for subsequent use. Some other publicly available datasets provide such a file for exactly this purpose. ",
"_____no_output_____"
]
],
[
[
"%%time\nSIZE_COLS = [\"idx\", \"width\", \"height\"]\n\n\ndef gen_image_size_file():\n print(\"Generating a file containing image sizes...\")\n images_df = pd.read_csv(\n IMAGE_FILE, sep=\" \", names=[\"image_pretty_name\", \"image_file_name\"], header=None\n )\n rows_list = []\n idx = 0\n for i in images_df[\"image_file_name\"]:\n # TODO: add progress bar\n idx += 1\n img = cv2.imread(IMAGES_DIR + i)\n dimensions = img.shape\n height = img.shape[0]\n width = img.shape[1]\n image_dict = {\"idx\": idx, \"width\": width, \"height\": height}\n rows_list.append(image_dict)\n\n sizes_df = pd.DataFrame(rows_list)\n print(\"Image sizes:\\n\" + str(sizes_df.head()))\n\n sizes_df[SIZE_COLS].to_csv(SIZE_FILE, sep=\" \", index=False, header=None)\n\n\ngen_image_size_file()",
"_____no_output_____"
]
],
[
[
"## Step 2. Generate list files for producing RecordIO files \n\n[RecordIO](https://mxnet.incubator.apache.org/architecture/note_data_loading.html) files can be created using the [im2rec tool](https://mxnet.incubator.apache.org/faq/recordio.html) (images to RecordIO), which takes as input a pair of list files, one for training images and the other for validation images. Each list file has one row for each image. For object detection, each row must contain bounding box data and a class label.\n\nFor the CalTech birds dataset, we need to convert absolute bounding box dimensions to relative dimensions based on image size. We also need to adjust class id's to be zero-based (instead of 1 to 200, they need to be 0 to 199). This dataset comes with recommended train/test split information (\"is_training_image\" flag). This notebook is built flexibly to either leverage this suggestion, or to create a random train/test split with a specific train/test ratio. The `RAMDOM_SPLIT` variable defined earlier controls whether or not the split happens randomly.",
"_____no_output_____"
]
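To make the conversion concrete, here is a minimal sketch using made-up numbers (none of these values come from the actual CUB metadata): absolute pixel coordinates become ratios of the image size, and the class id is shifted to a zero-based index.

```python
# Illustrative values only -- real rows come from the merged metadata dataframe built below.
img_width, img_height = 500, 375            # image size in pixels
x_abs, y_abs = 60.0, 27.0                   # top-left corner of the CUB bounding box
bbox_width, bbox_height = 325.0, 304.0      # CUB bounding box size in pixels
class_id = 17                               # 1-based CUB species id

xmin = x_abs / img_width                    # 0.12
ymin = y_abs / img_height                   # 0.072
xmax = (x_abs + bbox_width) / img_width     # 0.77
ymax = (y_abs + bbox_height) / img_height   # ~0.8827
zero_based_id = class_id - 1                # only valid when all 200 classes are kept;
                                            # with SAMPLE_ONLY the notebook instead maps the
                                            # sorted class ids to consecutive zero-based values

print(zero_based_id, round(xmin, 4), round(ymin, 4), round(xmax, 4), round(ymax, 4))
```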
],
[
[
"def split_to_train_test(df, label_column, train_frac=0.8):\n train_df, test_df = pd.DataFrame(), pd.DataFrame()\n labels = df[label_column].unique()\n for lbl in labels:\n lbl_df = df[df[label_column] == lbl]\n lbl_train_df = lbl_df.sample(frac=train_frac)\n lbl_test_df = lbl_df.drop(lbl_train_df.index)\n print(\n \"\\n{}:\\n---------\\ntotal:{}\\ntrain_df:{}\\ntest_df:{}\".format(\n lbl, len(lbl_df), len(lbl_train_df), len(lbl_test_df)\n )\n )\n train_df = train_df.append(lbl_train_df)\n test_df = test_df.append(lbl_test_df)\n return train_df, test_df\n\n\ndef gen_list_files():\n # use generated sizes file\n sizes_df = pd.read_csv(\n SIZE_FILE, sep=\" \", names=[\"image_pretty_name\", \"width\", \"height\"], header=None\n )\n bboxes_df = pd.read_csv(\n BBOX_FILE,\n sep=\" \",\n names=[\"image_pretty_name\", \"x_abs\", \"y_abs\", \"bbox_width\", \"bbox_height\"],\n header=None,\n )\n split_df = pd.read_csv(\n SPLIT_FILE, sep=\" \", names=[\"image_pretty_name\", \"is_training_image\"], header=None\n )\n print(IMAGE_FILE)\n images_df = pd.read_csv(\n IMAGE_FILE, sep=\" \", names=[\"image_pretty_name\", \"image_file_name\"], header=None\n )\n print(\"num images total: \" + str(images_df.shape[0]))\n image_class_labels_df = pd.read_csv(\n LABEL_FILE, sep=\" \", names=[\"image_pretty_name\", \"class_id\"], header=None\n )\n\n # Merge the metadata into a single flat dataframe for easier processing\n full_df = pd.DataFrame(images_df)\n full_df.reset_index(inplace=True)\n full_df = pd.merge(full_df, image_class_labels_df, on=\"image_pretty_name\")\n full_df = pd.merge(full_df, sizes_df, on=\"image_pretty_name\")\n full_df = pd.merge(full_df, bboxes_df, on=\"image_pretty_name\")\n full_df = pd.merge(full_df, split_df, on=\"image_pretty_name\")\n full_df.sort_values(by=[\"index\"], inplace=True)\n\n # Define the bounding boxes in the format required by SageMaker's built in Object Detection algorithm.\n # the xmin/ymin/xmax/ymax parameters are specified as ratios to the total image pixel size\n full_df[\"header_cols\"] = 2 # one col for the number of header cols, one for the label width\n full_df[\"label_width\"] = 5 # number of cols for each label: class, xmin, ymin, xmax, ymax\n full_df[\"xmin\"] = full_df[\"x_abs\"] / full_df[\"width\"]\n full_df[\"xmax\"] = (full_df[\"x_abs\"] + full_df[\"bbox_width\"]) / full_df[\"width\"]\n full_df[\"ymin\"] = full_df[\"y_abs\"] / full_df[\"height\"]\n full_df[\"ymax\"] = (full_df[\"y_abs\"] + full_df[\"bbox_height\"]) / full_df[\"height\"]\n\n # object detection class id's must be zero based. 
map from\n # class_id's given by CUB to zero-based (1 is 0, and 200 is 199).\n\n if SAMPLE_ONLY:\n # grab a small subset of species for testing\n criteria = full_df[\"class_id\"].isin(CLASSES)\n full_df = full_df[criteria]\n\n unique_classes = full_df[\"class_id\"].drop_duplicates()\n sorted_unique_classes = sorted(unique_classes)\n\n id_to_zero = {}\n i = 0.0\n for c in sorted_unique_classes:\n id_to_zero[c] = i\n i += 1.0\n\n full_df[\"zero_based_id\"] = full_df[\"class_id\"].map(id_to_zero)\n\n full_df.reset_index(inplace=True)\n\n # use 4 decimal places, as it seems to be required by the Object Detection algorithm\n pd.set_option(\"display.precision\", 4)\n\n train_df = []\n val_df = []\n\n if RANDOM_SPLIT:\n # split into training and validation sets\n train_df, val_df = split_to_train_test(full_df, \"class_id\", TRAIN_RATIO)\n\n train_df[IM2REC_SSD_COLS].to_csv(TRAIN_LST_FILE, sep=\"\\t\", float_format=\"%.4f\", header=None)\n val_df[IM2REC_SSD_COLS].to_csv(VAL_LST_FILE, sep=\"\\t\", float_format=\"%.4f\", header=None)\n else:\n train_df = full_df[(full_df.is_training_image == 1)]\n train_df[IM2REC_SSD_COLS].to_csv(TRAIN_LST_FILE, sep=\"\\t\", float_format=\"%.4f\", header=None)\n\n val_df = full_df[(full_df.is_training_image == 0)]\n val_df[IM2REC_SSD_COLS].to_csv(VAL_LST_FILE, sep=\"\\t\", float_format=\"%.4f\", header=None)\n\n print(\"num train: \" + str(train_df.shape[0]))\n print(\"num val: \" + str(val_df.shape[0]))\n return train_df, val_df",
"_____no_output_____"
],
[
"train_df, val_df = gen_list_files()",
"_____no_output_____"
]
],
[
[
"Here we take a look at a few records from the training list file to understand better what is being fed to the RecordIO files.\n\nThe first column is the image number or index. The second column indicates that the label is made up of 2 columns (column 2 and column 3). The third column specifies the label width of a single object. In our case, the value 5 indicates each image has 5 numbers to describe its label information: the class index, and the 4 bounding box coordinates. If there are multiple objects within one image, all the label information should be listed in one line. Our dataset contains only one bounding box per image.\n\nThe fourth column is the class label. This identifies the bird species using a zero-based class id. Columns 4 through 7 represent the bounding box for where the bird is found in this image.\n\nThe classes should be labeled with successive numbers and start with 0. The bounding box coordinates are ratios of its top-left (xmin, ymin) and bottom-right (xmax, ymax) corner indices to the overall image size. Note that the top-left corner of the entire image is the origin (0, 0). The last column specifies the relative path of the image file within the images directory.",
"_____no_output_____"
]
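As a quick illustration of that layout, the following sketch assembles one such row by hand; all of the values are invented for the example and the file name is a plausible-looking placeholder rather than a real dataset entry.

```python
# Hypothetical single-object row in the tab-separated .lst layout described above.
index = 42                                   # image index
header_cols = 2                              # number of header columns (this field and label_width)
label_width = 5                              # numbers per object: class, xmin, ymin, xmax, ymax
zero_based_id = 3                            # zero-based class label
xmin, ymin, xmax, ymax = 0.12, 0.072, 0.77, 0.8827
image_file_name = "017.Cardinal/Cardinal_0001.jpg"   # placeholder relative path

row = "\t".join(str(v) for v in
                [index, header_cols, label_width, zero_based_id,
                 xmin, ymin, xmax, ymax, image_file_name])
print(row)
```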
],
[
[
"!tail -3 $TRAIN_LST_FILE",
"_____no_output_____"
]
],
[
[
"## Step 2. Convert data into RecordIO format\n\nNow we create im2rec databases (.rec files) for training and validation based on the list files created earlier.",
"_____no_output_____"
]
],
[
[
"!python tools/im2rec.py --resize $RESIZE_SIZE --pack-label birds_ssd_sample $BASE_DIR/images/",
"_____no_output_____"
]
],
[
[
"## Step 3. Upload RecordIO files to S3\nUpload the training and validation data to the S3 bucket. We do this in multiple channels. Channels are simply directories in the bucket that differentiate the types of data provided to the algorithm. For the object detection algorithm, we call these directories `train` and `validation`.",
"_____no_output_____"
]
],
[
[
"# Upload the RecordIO files to train and validation channels\ntrain_channel = prefix + \"/train\"\nvalidation_channel = prefix + \"/validation\"\n\nsess.upload_data(path=\"birds_ssd_sample_train.rec\", bucket=bucket, key_prefix=train_channel)\nsess.upload_data(path=\"birds_ssd_sample_val.rec\", bucket=bucket, key_prefix=validation_channel)\n\ns3_train_data = \"s3://{}/{}\".format(bucket, train_channel)\ns3_validation_data = \"s3://{}/{}\".format(bucket, validation_channel)",
"_____no_output_____"
]
],
[
[
"# Train the model",
"_____no_output_____"
],
[
"Next we define an output location in S3, where the model artifacts will be placed on completion of the training. These artifacts are the output of the algorithm's traning job. We also get the URI to the Amazon SageMaker Object Detection docker image. This ensures the estimator uses the correct algorithm from the current region.",
"_____no_output_____"
]
],
[
[
"from sagemaker.amazon.amazon_estimator import get_image_uri\n\ntraining_image = get_image_uri(sess.boto_region_name, \"object-detection\", repo_version=\"latest\")\nprint(training_image)",
"_____no_output_____"
],
[
"s3_output_location = \"s3://{}/{}/output\".format(bucket, prefix)",
"_____no_output_____"
],
[
"od_model = sagemaker.estimator.Estimator(\n training_image,\n role,\n train_instance_count=1,\n train_instance_type=\"ml.p3.2xlarge\",\n train_volume_size=50,\n train_max_run=360000,\n input_mode=\"File\",\n output_path=s3_output_location,\n sagemaker_session=sess,\n)",
"_____no_output_____"
]
],
[
[
"## Define hyperparameters",
"_____no_output_____"
],
[
"The object detection algorithm at its core is the [Single-Shot Multi-Box detection algorithm (SSD)](https://arxiv.org/abs/1512.02325). This algorithm uses a `base_network`, which is typically a [VGG](https://arxiv.org/abs/1409.1556) or a [ResNet](https://arxiv.org/abs/1512.03385). The Amazon SageMaker object detection algorithm supports VGG-16 and ResNet-50. It also has a number of hyperparameters that help configure the training job. The next step in our training, is to setup these hyperparameters and data channels for training the model. See the SageMaker Object Detection [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection.html) for more details on its specific hyperparameters.\n\nOne of the hyperparameters here for example is `epochs`. This defines how many passes of the dataset we iterate over and drives the training time of the algorithm. Based on our tests, we can achieve 70% accuracy on a sample mix of 5 species with 100 epochs. When using the full 200 species, we can achieve 52% accuracy with 1,200 epochs.\n\nNote that Amazon SageMaker also provides [Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html). Automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose. When [tuning an Object Detection](https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection-tuning.html) algorithm for example, the tuning job could find the best `validation:mAP` score by trying out various values for certain hyperparameters such as `mini_batch_size`, `weight_decay`, and `momentum`.",
"_____no_output_____"
]
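As a sketch of what automatic model tuning could look like for this estimator (not run in this notebook), one might wrap `od_model` in a `HyperparameterTuner`; the ranges and job counts below are illustrative assumptions, not recommended values.

```python
# Sketch only: assumes the same SageMaker Python SDK generation used elsewhere in this notebook.
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter

hyperparameter_ranges = {
    "learning_rate": ContinuousParameter(0.0001, 0.01),
    "momentum": ContinuousParameter(0.8, 0.99),
    "weight_decay": ContinuousParameter(0.0001, 0.001),
}

tuner = HyperparameterTuner(
    estimator=od_model,
    objective_metric_name="validation:mAP",   # the object detection validation metric
    objective_type="Maximize",
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=10,                               # illustrative tuning budget
    max_parallel_jobs=2,
)
# tuner.fit(inputs=data_channels)              # would launch the tuning job
```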
],
[
[
"def set_hyperparameters(num_epochs, lr_steps):\n num_classes = classes_df.shape[0]\n num_training_samples = train_df.shape[0]\n print(\"num classes: {}, num training images: {}\".format(num_classes, num_training_samples))\n\n od_model.set_hyperparameters(\n base_network=\"resnet-50\",\n use_pretrained_model=1,\n num_classes=num_classes,\n mini_batch_size=16,\n epochs=num_epochs,\n learning_rate=0.001,\n lr_scheduler_step=lr_steps,\n lr_scheduler_factor=0.1,\n optimizer=\"sgd\",\n momentum=0.9,\n weight_decay=0.0005,\n overlap_threshold=0.5,\n nms_threshold=0.45,\n image_shape=512,\n label_width=350,\n num_training_samples=num_training_samples,\n )",
"_____no_output_____"
],
[
"set_hyperparameters(100, \"33,67\")",
"_____no_output_____"
]
],
[
[
"Now that the hyperparameters are setup, we define the data channels to be passed to the algorithm. To do this, we need to create the `sagemaker.session.s3_input` objects from our data channels. These objects are then put in a simple dictionary, which the algorithm consumes. Note that you could add a third channel named `model` to perform incremental training (continue training from where you had left off with a prior model).",
"_____no_output_____"
]
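For the optional `model` channel mentioned above, a rough sketch might look like the following; the S3 URI is a placeholder for a previously trained model artifact, and the content type is the one documented for incremental training of built-in algorithms.

```python
# Sketch: a third channel pointing at a prior model artifact enables incremental training.
# The S3 URI below is a placeholder, not an artifact produced by this notebook.
model_data = sagemaker.session.s3_input(
    "s3://{}/{}/output/previous-training-job/output/model.tar.gz".format(bucket, prefix),
    distribution="FullyReplicated",
    content_type="application/x-sagemaker-model",
    s3_data_type="S3Prefix",
)
data_channels_incremental = {"train": train_data, "validation": validation_data, "model": model_data}
```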
],
[
[
"train_data = sagemaker.session.s3_input(\n s3_train_data,\n distribution=\"FullyReplicated\",\n content_type=\"application/x-recordio\",\n s3_data_type=\"S3Prefix\",\n)\nvalidation_data = sagemaker.session.s3_input(\n s3_validation_data,\n distribution=\"FullyReplicated\",\n content_type=\"application/x-recordio\",\n s3_data_type=\"S3Prefix\",\n)\ndata_channels = {\"train\": train_data, \"validation\": validation_data}",
"_____no_output_____"
]
],
[
[
"## Submit training job",
"_____no_output_____"
],
[
"We have our `Estimator` object, we have set the hyperparameters for this object, and we have our data channels linked with the algorithm. The only remaining thing to do is to train the algorithm using the `fit` method. This will take more than 10 minutes in our example.\n\nThe training process involves a few steps. First, the instances that we requested while creating the `Estimator` classes are provisioned and setup with the appropriate libraries. Then, the data from our channels are downloaded into the instance. Once this is done, the actual training begins. The provisioning and data downloading will take time, depending on the size of the data. Therefore it might be a few minutes before our training job logs show up in CloudWatch. The logs will also print out Mean Average Precision (mAP) on the validation data, among other losses, for every run of the dataset (once per epoch). This metric is a proxy for the accuracy of the model.\n\nOnce the job has finished, a `Job complete` message will be printed. The trained model artifacts can be found in the S3 bucket that was setup as `output_path` in the estimator.",
"_____no_output_____"
]
],
[
[
"%%time\nod_model.fit(inputs=data_channels, logs=True)",
"_____no_output_____"
]
],
[
[
"Now that the training job is complete, you can also see the job listed in the `Training jobs` section of your SageMaker console. Note that the job name is uniquely identified by the name of the algorithm concatenated with the date and time stamp. You can click on the job to see the details including the hyperparameters, the data channel definitions, and the full path to the resulting model artifacts. You could even clone the job from the console, and tweak some of the parameters to generate a new training job.",
"_____no_output_____"
],
[
"Without having to go to the CloudWatch console, you can see how the job progressed in terms of the key object detection algorithm metric, mean average precision (mAP). This function below prepares a simple chart of that metric against the epochs.",
"_____no_output_____"
]
],
[
[
"import boto3\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\n\n%matplotlib inline\n\nclient = boto3.client(\"logs\")\nBASE_LOG_NAME = \"/aws/sagemaker/TrainingJobs\"\n\n\ndef plot_object_detection_log(model, title):\n logs = client.describe_log_streams(\n logGroupName=BASE_LOG_NAME, logStreamNamePrefix=model._current_job_name\n )\n cw_log = client.get_log_events(\n logGroupName=BASE_LOG_NAME, logStreamName=logs[\"logStreams\"][0][\"logStreamName\"]\n )\n\n mAP_accs = []\n for e in cw_log[\"events\"]:\n msg = e[\"message\"]\n if \"validation mAP <score>=\" in msg:\n num_start = msg.find(\"(\")\n num_end = msg.find(\")\")\n mAP = msg[num_start + 1 : num_end]\n mAP_accs.append(float(mAP))\n\n print(title)\n print(\"Maximum mAP: %f \" % max(mAP_accs))\n\n fig, ax = plt.subplots()\n plt.xlabel(\"Epochs\")\n plt.ylabel(\"Mean Avg Precision (mAP)\")\n (val_plot,) = ax.plot(range(len(mAP_accs)), mAP_accs, label=\"mAP\")\n plt.legend(handles=[val_plot])\n ax.yaxis.set_ticks(np.arange(0.0, 1.05, 0.1))\n ax.yaxis.set_major_formatter(ticker.FormatStrFormatter(\"%0.2f\"))\n plt.show()",
"_____no_output_____"
],
[
"plot_object_detection_log(od_model, \"mAP tracking for job: \" + od_model._current_job_name)",
"_____no_output_____"
]
],
[
[
"# Host the model",
"_____no_output_____"
],
[
"Once the training is done, we can deploy the trained model as an Amazon SageMaker real-time hosted endpoint. This lets us make predictions (or inferences) from the model. Note that we don't have to host using the same type of instance that we used to train. Training is a prolonged and compute heavy job with different compute and memory requirements that hosting typically does not. In our case we chose the `ml.p3.2xlarge` instance to train, but we choose to host the model on the less expensive cpu instance, `ml.m4.xlarge`. The endpoint deployment takes several minutes, and can be accomplished with a single line of code calling the `deploy` method.\n\nNote that some use cases require large sets of inferences on a predefined body of images. In those cases, you do not need to make the inferences in real time. Instead, you could use SageMaker's [batch transform jobs](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html).",
"_____no_output_____"
]
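For the batch alternative mentioned above, a minimal sketch (not executed here) could create a transformer from the trained estimator; the input and output S3 prefixes are placeholders.

```python
# Sketch of batch inference over a folder of JPEGs stored in S3 (placeholder paths).
transformer = od_model.transformer(
    instance_count=1,
    instance_type="ml.m4.xlarge",
    output_path="s3://{}/{}/batch-output".format(bucket, prefix),
)
transformer.transform(
    "s3://{}/{}/batch-input".format(bucket, prefix),   # prefix containing the input images
    content_type="image/jpeg",
)
# transformer.wait()   # block until the batch transform job completes
```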
],
[
[
"%%time\nobject_detector = od_model.deploy(initial_instance_count=1, instance_type=\"ml.m4.xlarge\")",
"_____no_output_____"
]
],
[
[
"# Test the model",
"_____no_output_____"
],
[
"Now that the trained model is deployed at an endpoint that is up-and-running, we can use this endpoint for inference. The results of a call to the inference endpoint are in a format that is similar to the .lst format, with the addition of a confidence score for each detected object. The format of the output can be represented as `[class_index, confidence_score, xmin, ymin, xmax, ymax]`. Typically, we don't visualize low-confidence predictions.\n\nWe have provided a script to easily visualize the detection outputs. You can visulize the high-confidence preditions with bounding box by filtering out low-confidence detections using the script below:",
"_____no_output_____"
]
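To make the response format concrete before plotting anything, here is a small sketch that parses the endpoint response and keeps only confident rows; it assumes `results` holds the raw body returned by `invoke_endpoint`, as in the helper defined further below.

```python
# Sketch: each prediction row is [class_index, confidence_score, xmin, ymin, xmax, ymax].
detections = json.loads(results)["prediction"]          # `results` is assumed to be the response body
confident = [d for d in detections if d[1] >= 0.40]     # keep rows above an example threshold
for class_index, score, xmin, ymin, xmax, ymax in confident:
    print(int(class_index), round(score, 3), xmin, ymin, xmax, ymax)
```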
],
[
[
"def visualize_detection(img_file, dets, classes=[], thresh=0.6):\n \"\"\"\n visualize detections in one image\n Parameters:\n ----------\n img : numpy.array\n image, in bgr format\n dets : numpy.array\n ssd detections, numpy.array([[id, score, x1, y1, x2, y2]...])\n each row is one object\n classes : tuple or list of str\n class names\n thresh : float\n score threshold\n \"\"\"\n import random\n import matplotlib.pyplot as plt\n import matplotlib.image as mpimg\n\n img = mpimg.imread(img_file)\n plt.imshow(img)\n height = img.shape[0]\n width = img.shape[1]\n colors = dict()\n num_detections = 0\n for det in dets:\n (klass, score, x0, y0, x1, y1) = det\n if score < thresh:\n continue\n num_detections += 1\n cls_id = int(klass)\n if cls_id not in colors:\n colors[cls_id] = (random.random(), random.random(), random.random())\n xmin = int(x0 * width)\n ymin = int(y0 * height)\n xmax = int(x1 * width)\n ymax = int(y1 * height)\n rect = plt.Rectangle(\n (xmin, ymin),\n xmax - xmin,\n ymax - ymin,\n fill=False,\n edgecolor=colors[cls_id],\n linewidth=3.5,\n )\n plt.gca().add_patch(rect)\n class_name = str(cls_id)\n if classes and len(classes) > cls_id:\n class_name = classes[cls_id]\n print(\"{},{}\".format(class_name, score))\n plt.gca().text(\n xmin,\n ymin - 2,\n \"{:s} {:.3f}\".format(class_name, score),\n bbox=dict(facecolor=colors[cls_id], alpha=0.5),\n fontsize=12,\n color=\"white\",\n )\n\n print(\"Number of detections: \" + str(num_detections))\n plt.show()",
"_____no_output_____"
]
],
[
[
"Now we use our endpoint to try to detect objects within an image. Since the image is a jpeg, we use the appropriate content_type to run the prediction. The endpoint returns a JSON object that we can simply load and peek into. We have packaged the prediction code into a function to make it easier to test other images. Note that we are defaulting the confidence threshold to 30% in our example, as a couple of the birds in our sample images were not being detected as clearly. Defining an appropriate threshold is entirely dependent on your use case.",
"_____no_output_____"
]
],
[
[
"OBJECT_CATEGORIES = classes_df[\"class_id\"].values.tolist()\n\n\ndef show_bird_prediction(filename, ep, thresh=0.40):\n b = \"\"\n with open(filename, \"rb\") as image:\n f = image.read()\n b = bytearray(f)\n endpoint_response = runtime.invoke_endpoint(EndpointName=ep, ContentType=\"image/jpeg\", Body=b)\n results = endpoint_response[\"Body\"].read()\n detections = json.loads(results)\n visualize_detection(filename, detections[\"prediction\"], OBJECT_CATEGORIES, thresh)",
"_____no_output_____"
]
],
[
[
"Here we download images that the algorithm has not yet seen.",
"_____no_output_____"
]
],
[
[
"!wget -q -O multi-goldfinch-1.jpg https://t3.ftcdn.net/jpg/01/44/64/36/500_F_144643697_GJRUBtGc55KYSMpyg1Kucb9yJzvMQooW.jpg\n!wget -q -O northern-flicker-1.jpg https://upload.wikimedia.org/wikipedia/commons/5/5c/Northern_Flicker_%28Red-shafted%29.jpg\n!wget -q -O northern-cardinal-1.jpg https://cdn.pixabay.com/photo/2013/03/19/04/42/bird-94957_960_720.jpg\n!wget -q -O blue-jay-1.jpg https://cdn12.picryl.com/photo/2016/12/31/blue-jay-bird-feather-animals-b8ee04-1024.jpg\n!wget -q -O hummingbird-1.jpg http://res.freestockphotos.biz/pictures/17/17875-hummingbird-close-up-pv.jpg",
"_____no_output_____"
],
[
"def test_model():\n show_bird_prediction(\"hummingbird-1.jpg\", object_detector.endpoint)\n show_bird_prediction(\"blue-jay-1.jpg\", object_detector.endpoint)\n show_bird_prediction(\"multi-goldfinch-1.jpg\", object_detector.endpoint)\n show_bird_prediction(\"northern-flicker-1.jpg\", object_detector.endpoint)\n show_bird_prediction(\"northern-cardinal-1.jpg\", object_detector.endpoint)\n\n\ntest_model()",
"_____no_output_____"
]
],
[
[
"# Clean up\nHere we delete the SageMaker endpoint, as we will no longer be performing any inferences. This is an important step, as your account is billed for the amount of time an endpoint is running, even when it is idle.",
"_____no_output_____"
]
],
[
[
"sagemaker.Session().delete_endpoint(object_detector.endpoint)",
"_____no_output_____"
]
],
[
[
"# Improve the model",
"_____no_output_____"
],
[
"## Define Function to Flip the Images Horizontally (on the X Axis)",
"_____no_output_____"
]
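Before the full implementation below, a tiny worked example of how a bounding box moves under a horizontal flip (made-up numbers): the y coordinate and box size are unchanged, and the new left edge is the image width minus the old right edge.

```python
# Hypothetical numbers: a 500-pixel-wide image with a box starting at x=60 and 325 pixels wide.
width = 500
x_abs, bbox_width = 60, 325
flipped_x_abs = width - bbox_width - x_abs   # 500 - 325 - 60 = 115
print(flipped_x_abs)                         # the mirrored box now starts 115 pixels from the left
```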
],
[
[
"from PIL import Image\n\n\ndef flip_images():\n print(\"Flipping images...\")\n\n SIZE_COLS = [\"idx\", \"width\", \"height\"]\n IMAGE_COLS = [\"image_pretty_name\", \"image_file_name\"]\n LABEL_COLS = [\"image_pretty_name\", \"class_id\"]\n BBOX_COLS = [\"image_pretty_name\", \"x_abs\", \"y_abs\", \"bbox_width\", \"bbox_height\"]\n SPLIT_COLS = [\"image_pretty_name\", \"is_training_image\"]\n\n images_df = pd.read_csv(BASE_DIR + \"images.txt\", sep=\" \", names=IMAGE_COLS, header=None)\n image_class_labels_df = pd.read_csv(\n BASE_DIR + \"image_class_labels.txt\", sep=\" \", names=LABEL_COLS, header=None\n )\n bboxes_df = pd.read_csv(BASE_DIR + \"bounding_boxes.txt\", sep=\" \", names=BBOX_COLS, header=None)\n split_df = pd.read_csv(\n BASE_DIR + \"train_test_split.txt\", sep=\" \", names=SPLIT_COLS, header=None\n )\n\n NUM_ORIGINAL_IMAGES = images_df.shape[0]\n\n rows_list = []\n bbox_rows_list = []\n size_rows_list = []\n label_rows_list = []\n split_rows_list = []\n\n idx = 0\n\n full_df = images_df.copy()\n full_df.reset_index(inplace=True)\n full_df = pd.merge(full_df, image_class_labels_df, on=\"image_pretty_name\")\n full_df = pd.merge(full_df, bboxes_df, on=\"image_pretty_name\")\n full_df = pd.merge(full_df, split_df, on=\"image_pretty_name\")\n full_df.sort_values(by=[\"index\"], inplace=True)\n\n if SAMPLE_ONLY:\n # grab a small subset of species for testing\n criteria = full_df[\"class_id\"].isin(CLASSES)\n full_df = full_df[criteria]\n\n for rel_image_fn in full_df[\"image_file_name\"]:\n idx += 1\n full_img_content = full_df[(full_df.image_file_name == rel_image_fn)]\n\n class_id = full_img_content.iloc[0].class_id\n\n img = Image.open(IMAGES_DIR + rel_image_fn)\n\n width, height = img.size\n\n new_idx = idx + NUM_ORIGINAL_IMAGES\n\n flip_core_file_name = rel_image_fn[:-4] + \"_flip.jpg\"\n flip_full_file_name = IMAGES_DIR + flip_core_file_name\n\n img_flip = img.transpose(Image.FLIP_LEFT_RIGHT)\n img_flip.save(flip_full_file_name)\n\n # append a new image\n dict = {\"image_pretty_name\": new_idx, \"image_file_name\": flip_core_file_name}\n rows_list.append(dict)\n\n # append a new split, use same flag for flipped image from original image\n is_training_image = full_img_content.iloc[0].is_training_image\n split_dict = {\"image_pretty_name\": new_idx, \"is_training_image\": is_training_image}\n split_rows_list.append(split_dict)\n\n # append a new image class label\n label_dict = {\"image_pretty_name\": new_idx, \"class_id\": class_id}\n label_rows_list.append(label_dict)\n\n # add a size row for the original and the flipped image, same height and width\n size_dict = {\"idx\": idx, \"width\": width, \"height\": height}\n size_rows_list.append(size_dict)\n\n size_dict = {\"idx\": new_idx, \"width\": width, \"height\": height}\n size_rows_list.append(size_dict)\n\n # append bounding box for flipped image\n\n x_abs = full_img_content.iloc[0].x_abs\n y_abs = full_img_content.iloc[0].y_abs\n bbox_width = full_img_content.iloc[0].bbox_width\n bbox_height = full_img_content.iloc[0].bbox_height\n flipped_x_abs = width - bbox_width - x_abs\n\n bbox_dict = {\n \"image_pretty_name\": new_idx,\n \"x_abs\": flipped_x_abs,\n \"y_abs\": y_abs,\n \"bbox_width\": bbox_width,\n \"bbox_height\": bbox_height,\n }\n bbox_rows_list.append(bbox_dict)\n\n print(\"Done looping through original images\")\n\n images_df = images_df.append(rows_list)\n images_df[IMAGE_COLS].to_csv(IMAGE_FILE, sep=\" \", index=False, header=None)\n bboxes_df = bboxes_df.append(bbox_rows_list)\n 
bboxes_df[BBOX_COLS].to_csv(BBOX_FILE, sep=\" \", index=False, header=None)\n split_df = split_df.append(split_rows_list)\n split_df[SPLIT_COLS].to_csv(SPLIT_FILE, sep=\" \", index=False, header=None)\n sizes_df = pd.DataFrame(size_rows_list)\n sizes_df[SIZE_COLS].to_csv(SIZE_FILE, sep=\" \", index=False, header=None)\n image_class_labels_df = image_class_labels_df.append(label_rows_list)\n image_class_labels_df[LABEL_COLS].to_csv(LABEL_FILE, sep=\" \", index=False, header=None)\n\n print(\"Done saving metadata in text files\")",
"_____no_output_____"
]
],
[
[
"## Re-train the model with the expanded dataset",
"_____no_output_____"
]
],
[
[
"%%time\n\nBBOX_FILE = BASE_DIR + \"bounding_boxes_with_flip.txt\"\nIMAGE_FILE = BASE_DIR + \"images_with_flip.txt\"\nLABEL_FILE = BASE_DIR + \"image_class_labels_with_flip.txt\"\nSIZE_FILE = BASE_DIR + \"sizes_with_flip.txt\"\nSPLIT_FILE = BASE_DIR + \"train_test_split_with_flip.txt\"\n\n# add a set of flipped images\nflip_images()\n\n# show the new full set of images for a species\nshow_species(\"017.Cardinal\")\n\n# create new sizes file\ngen_image_size_file()\n\n# re-create and re-deploy the RecordIO files with the updated set of images\ntrain_df, val_df = gen_list_files()\n!python tools/im2rec.py --resize $RESIZE_SIZE --pack-label birds_ssd_sample $BASE_DIR/images/\nsess.upload_data(path=\"birds_ssd_sample_train.rec\", bucket=bucket, key_prefix=train_channel)\nsess.upload_data(path=\"birds_ssd_sample_val.rec\", bucket=bucket, key_prefix=validation_channel)\n\n# account for the new number of training images\nset_hyperparameters(100, \"33,67\")\n\n# re-train\nod_model.fit(inputs=data_channels, logs=True)\n\n# check out the new accuracy\nplot_object_detection_log(od_model, \"mAP tracking for job: \" + od_model._current_job_name)",
"_____no_output_____"
]
],
[
[
"## Re-deploy and test",
"_____no_output_____"
]
],
[
[
"# host the updated model\nobject_detector = od_model.deploy(initial_instance_count=1, instance_type=\"ml.m4.xlarge\")\n\n# test the new model\ntest_model()",
"_____no_output_____"
]
],
[
[
"## Final cleanup\nHere we delete the SageMaker endpoint, as we will no longer be performing any inferences. This is an important step, as your account is billed for the amount of time an endpoint is running, even when it is idle.",
"_____no_output_____"
]
],
[
[
"# delete the new endpoint\nsagemaker.Session().delete_endpoint(object_detector.endpoint)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d05380d7ac43a1f64343d6fc1d1cd89511c4f668 | 6,371 | ipynb | Jupyter Notebook | Interactive.ipynb | MichaMucha/awdlstm-integrated-gradients | c61f3ece75b612b95026f171a62ab109b1109551 | [
"Apache-2.0"
] | 8 | 2020-06-03T01:55:41.000Z | 2022-03-08T00:28:01.000Z | Interactive.ipynb | MichaMucha/awdlstm-integrated-gradients | c61f3ece75b612b95026f171a62ab109b1109551 | [
"Apache-2.0"
] | null | null | null | Interactive.ipynb | MichaMucha/awdlstm-integrated-gradients | c61f3ece75b612b95026f171a62ab109b1109551 | [
"Apache-2.0"
] | 3 | 2020-07-24T20:06:30.000Z | 2021-11-23T04:34:02.000Z | 28.959091 | 404 | 0.536337 | [
[
[
"# Test",
"_____no_output_____"
]
],
[
[
"import fastai.train\nimport pandas as pd\nimport torch\nimport torch.nn as nn\nfrom captum.attr import LayerIntegratedGradients\n\n# --- Model Setup ---\n\n# Load a fast.ai `Learner` trained to predict IMDB review category `[negative, positive]`\nawd = fastai.train.load_learner(\".\", \"imdb_fastai_trained_lm_clf.pth\")\nawd.model[0].bptt = 200\n\n# getting to the actual layer that holds embeddings\nembedding_layer = awd.model[0]._modules[\"module\"]._modules[\"encoder_dp\"]\n\n# working around the model prediction - first output only, apply softmax\nforward_func = lambda x: torch.softmax(awd.model(x)[0], dim=-1)\n\n# make integrated gradients instance\nlig = LayerIntegratedGradients(forward_func, embedding_layer)\n\n# Explainer logic\n\n\ndef get_attributions_for_sentence(\n sentence,\n awd_model=awd,\n lig_instance=lig,\n target=None,\n lig_n_steps=200,\n baseline_token=\"\\n \\n \",\n):\n awd = awd_model\n lig = lig_instance\n vocab = awd.data.x.vocab\n sentence_tokens = awd.data.one_item(sentence)[0]\n reversed_tokens = [vocab.itos[w] for w in sentence_tokens[0]]\n baseline = (\n torch.ones_like(sentence_tokens) * vocab.stoi[baseline_token]\n ) # see \"how to choose a good baseline\"\n baseline[0, 0] = vocab.stoi[\"xxbos\"] # beginning of sentence is always #1\n y = awd.predict(sentence)\n if target is None:\n target = y[1].item()\n attrs = lig.attribute(sentence_tokens, baseline, target, n_steps=lig_n_steps)\n a = attrs.sum(-1)\n a = a / torch.norm(a)\n return (pd.Series(a.numpy()[0], index=reversed_tokens), y)",
"_____no_output_____"
],
[
"# https://www.imdb.com/review/rw5384922/?ref_=tt_urv\nreview_1917 = \"\"\"I sat in a packed yet silent theater this morning and watched, what I believe to be, the next Academy Award winner for the Best Picture.\"\"\"\n\"\"\"I'm not at all a fan of war movies but I am a fan of great movies... and 1917 is a great movie. I have never been so mesmerized by set design and direction, the mass human emotion of this film is astonishingly captured and embedded magically in the audience. It keeps running through my mind...the poetry and beauty intertwined with the raw misery of war. Treat yourself... see this movie!\n\"\"\";",
"_____no_output_____"
],
[
"import ipyvuetify as v\nimport ipywidgets as w",
"_____no_output_____"
],
[
"class Chip(v.Chip):\n positive = \"0, 255, 0\"\n negative = \"255, 0, 0\"\n\n def __init__(self, word, attribution):\n direction = self.positive if attribution >= 0 else self.negative\n color = f\"rgba({direction}, {abs(attribution):.2f})\"\n super().__init__(\n class_=\"mx-0 px-1\",\n children=[word],\n color=color,\n value=attribution,\n label=True,\n small=True,\n )\n\n\ndef saliency_chips(attributions: pd.Series) -> v.ChipGroup:\n children = [Chip(w, a) for w, a in attributions.iteritems()]\n return v.ChipGroup(column=True, children=children)",
"_____no_output_____"
],
[
"@w.interact_manual(\n sentence=w.Textarea(review_1917),\n target=[None, 0, 1],\n baseline_token=[\"\\n \\n\", \".\", \"<BOS>\"],\n)\ndef display_attributions(sentence=\"Great film\", target=None, baseline_token=\"\\n \\n \"):\n \n attributions, prediction = get_attributions_for_sentence(sentence)\n \n return saliency_chips(attributions)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0538609b8d48429e81d28c1499bb8baede4040e | 458,752 | ipynb | Jupyter Notebook | 2. data_check.ipynb | DUYONGBEAK/Insurance-fraud-detection-model | 630bcf40f1ab91c6b220e05a47357eb9ec129548 | [
"BSD-Source-Code"
] | null | null | null | 2. data_check.ipynb | DUYONGBEAK/Insurance-fraud-detection-model | 630bcf40f1ab91c6b220e05a47357eb9ec129548 | [
"BSD-Source-Code"
] | null | null | null | 2. data_check.ipynb | DUYONGBEAK/Insurance-fraud-detection-model | 630bcf40f1ab91c6b220e05a47357eb9ec129548 | [
"BSD-Source-Code"
] | null | null | null | 414.036101 | 99,276 | 0.932713 | [
[
[
"## 데이터 불러오기",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport datetime as dt\nimport warnings\nwarnings.filterwarnings('ignore')\n%matplotlib inline",
"_____no_output_____"
],
[
"import matplotlib as mat\nimport matplotlib.font_manager as fonm\n\nfont_list = [font.name for font in fonm.fontManager.ttflist]\n# for f in font_list:\n# print(f\"{f}.ttf\")\n\nmat.rcParams['font.family'] = 'Hancom Gothic'",
"_____no_output_____"
],
[
"def str_col(df):\n col = []\n for i in range(0,len(df.dtypes)):\n if str(df.dtypes[i]) == 'object':\n col.append(df.dtypes.index[i])\n print(col) \n return col\n\n\ndef int_col(df):\n col = []\n for i in range(0,len(df.dtypes)):\n if str(df.dtypes[i]) != 'object':\n col.append(df.dtypes.index[i])\n print(col) \n return col \n\ndef p_100(a, b):\n print( round( (a/(a+b))*100,2), \"%\" )\n \ndef extraction_func(df, col_name, num_list):\n temp = pd.DataFrame()\n for i in num_list:\n temp = pd.concat([ temp, df.loc[df[col_name] == i ] ],axis=0)\n return temp\ndef unique_check(df):\n \n for i in range(0,len(df.columns)):\n if df[df.columns[i]].isnull().sum() > 0:\n print(\"Impossible if there are None : \",df.columns[i])\n \n col_1 = []\n col_2 = []\n for i in range(0,len(df.columns)):\n if type(df[df.columns[i]][0]) == str:\n col_1.append(df.columns[i])\n \n if df[df.columns[i]].nunique() > 5:\n col_2.append(df.columns[i])\n print(df.columns[i],\"컬럼의 unique 개수는 \",df[df.columns[i]].nunique(),\"개\")\n \n return col_1, col_2 ",
"_____no_output_____"
],
[
"insurance = pd.read_csv('./temp_data/insurance.csv',encoding='utf-8')\nprint(insurance.shape)\nprint(insurance.dtypes)\nprint(insurance.isnull().sum())\ninsurance.tail(5)",
"(20585, 30)\nCUST_ID int64\nSIU_CUST_YN object\nSEX object\nAGE int64\nRESI_COST float64\nRESI_TYPE_CODE float64\nFP_CAREER object\nCUST_RGST float64\nCTPR object\nOCCP_GRP_1 object\nOCCP_GRP_2 object\nTOTALPREM float64\nMINCRDT float64\nMAXCRDT float64\nWEDD_YN object\nCHLD_CNT float64\nLTBN_CHLD_AGE float64\nCUST_INCM float64\nRCBASE_HSHD_INCM int64\nJPBASE_HSHD_INCM float64\nCLAIM_NUM float64\nACCI_DVSN int64\nHOUSE_HOSP_DIST float64\nDMND_RESN_CODE int64\nHEED_HOSP_YN object\nSUM_ORIG_PREM float64\nDISTANCE float64\nRESN_DATE_NUM float64\nCUST_ROLE int64\nPAYM_AMT float64\ndtype: object\nCUST_ID 0\nSIU_CUST_YN 0\nSEX 0\nAGE 0\nRESI_COST 0\nRESI_TYPE_CODE 0\nFP_CAREER 0\nCUST_RGST 0\nCTPR 0\nOCCP_GRP_1 0\nOCCP_GRP_2 0\nTOTALPREM 0\nMINCRDT 0\nMAXCRDT 0\nWEDD_YN 0\nCHLD_CNT 0\nLTBN_CHLD_AGE 0\nCUST_INCM 0\nRCBASE_HSHD_INCM 0\nJPBASE_HSHD_INCM 0\nCLAIM_NUM 0\nACCI_DVSN 0\nHOUSE_HOSP_DIST 0\nDMND_RESN_CODE 0\nHEED_HOSP_YN 0\nSUM_ORIG_PREM 0\nDISTANCE 0\nRESN_DATE_NUM 0\nCUST_ROLE 0\nPAYM_AMT 0\ndtype: int64\n"
],
[
"insurance = insurance.astype({'RESI_TYPE_CODE': str,\n 'MINCRDT':str,\n 'MAXCRDT':str,\n 'ACCI_DVSN':str,\n 'DMND_RESN_CODE':str,\n 'CUST_ROLE':str})",
"_____no_output_____"
]
],
[
[
"## 데이터 복사",
"_____no_output_____"
]
],
[
[
"copy_insurance = insurance.copy()",
"_____no_output_____"
]
],
[
[
"## 비식별화 및 고유값이 많은 컬럼 삭제 \n - unique한 값이 많으면 인코딩이 어려움으로 해당하는 컬럼들 삭제 \n - 실제로 컬럼삭제를 진행하지 않은 결과 인코딩 시 차원이 60000여개로 늘어나는 문제 발생",
"_____no_output_____"
]
],
[
[
"col_1, col_2 = unique_check(copy_insurance)",
"RESI_TYPE_CODE 컬럼의 unique 개수는 10 개\nCTPR 컬럼의 unique 개수는 17 개\nOCCP_GRP_1 컬럼의 unique 개수는 8 개\nOCCP_GRP_2 컬럼의 unique 개수는 25 개\nMINCRDT 컬럼의 unique 개수는 10 개\nMAXCRDT 컬럼의 unique 개수는 10 개\nDMND_RESN_CODE 컬럼의 unique 개수는 8 개\nCUST_ROLE 컬럼의 unique 개수는 7 개\n"
],
[
"col_2.remove('RESI_TYPE_CODE')\ncol_2.remove('OCCP_GRP_1')\ncol_2.remove('MINCRDT')\ncol_2.remove('MAXCRDT')\ncol_2.remove('DMND_RESN_CODE')\ncol_2.remove('CUST_ROLE')\n\n# index를 CUST_ID로 변경\ncopy_insurance.set_index('CUST_ID', inplace=True)\n\ncopy_insurance.drop(col_2, axis=1, inplace=True)",
"_____no_output_____"
]
],
[
[
"## 데이터 파악하기",
"_____no_output_____"
],
[
"#### 변수간 상관관계 확인",
"_____no_output_____"
]
],
[
[
"### 필요한 모듈 불러오기\n#%matplotlib inline\t# 시각화 결과를 Jupyter Notebook에서 바로 보기\n# import matplotlib.pyplot as plt # 모듈 불러오기\n\n### 상관계수 테이블\ncorr = copy_insurance.corr() # 'df'라는 데이터셋을 'corr'라는 이름의 상관계수 테이블로 저장 \n\n### 상관계수 히트맵 그리기\n\n# 히트맵 사이즈 설정\nplt.figure(figsize = (20, 15))\t\n\n# 히트맵 형태 정의. 여기서는 삼각형 형태(위 쪽 삼각형에 True, 아래 삼각형에 False)\nmask = np.zeros_like(corr, dtype=np.bool) \nmask[np.triu_indices_from(mask)] = True\n\n# 히트맵 그리기\nsns.heatmap(data = corr, # 'corr' = 상관계수 테이블\n annot = True, # 히트맵에 값 표시\n mask=mask, # 히트맵 형태. 여기서는 위에서 정의한 삼각형 형태\n fmt = '.2f', # 값 표시 방식. 소숫점 2번째자리까지 \n linewidths = 1., # 경계면 실선 구분 여부\n cmap = 'RdYlBu_r') # 사용할 색 지정 ('python colormap 검색')\nplt.title('상관계수 히트맵')\nplt.show()",
"_____no_output_____"
]
],
[
[
"##### 연관성이 높은 컬럼 제거",
"_____no_output_____"
]
],
[
[
"copy_insurance = copy_insurance[copy_insurance.columns.difference(['LTBN_CHLD_AGE','JPBASE_HSHD_INCM'])]",
"_____no_output_____"
]
],
[
[
"#### 데이터가 정규분포를 이루는지 확인하기\n - 최소 최대 정규화: 모든 feature들의 스케일이 동일하지만, 이상치(outlier)를 잘 처리하지 못한다. (X - MIN) / (MAX-MIN) \n - Z-점수 정규화(표준화) : 이상치(outlier)를 잘 처리하지만, 정확히 동일한 척도로 정규화 된 데이터를 생성하지는 않는다. (X - 평균) / 표준편차",
"_____no_output_____"
]
],
[
[
"plot_target = int_col(copy_insurance)",
"['AGE', 'CHLD_CNT', 'CLAIM_NUM', 'CUST_INCM', 'CUST_RGST', 'DISTANCE', 'HOUSE_HOSP_DIST', 'PAYM_AMT', 'RCBASE_HSHD_INCM', 'RESI_COST', 'RESN_DATE_NUM', 'SUM_ORIG_PREM', 'TOTALPREM']\n"
],
[
"import scipy.stats as stats\n\nfor i in plot_target:\n print(i,\"의 가우시안 분포 확인\")\n fig = plt.figure(figsize=(15,3))\n ax1 = fig.add_subplot(1,2,1)\n ax2 = fig.add_subplot(1,2,2)\n\n stats.probplot(copy_insurance[i], dist=stats.norm,plot=ax1)\n\n mu = copy_insurance[i].mean()\n variance = copy_insurance[i].var()\n sigma = variance ** 0.5\n x=np.linspace(mu - 3*sigma, mu + 3*sigma, 100)\n ax2.plot(x, stats.norm.pdf(x,mu,sigma), color=\"blue\",label=\"theoretical\")\n\n sns.distplot(ax=ax2, a=copy_insurance[i], bins=100, color=\"red\", label=\"observed\")\n ax2.legend()\n plt.show()\n print()",
"AGE 의 가우시안 분포 확인\n"
]
],
[
[
"#### stats.kstest으로 가설검증하기\n - 귀무가설은 '정규분포를 따른다' 이다.",
"_____no_output_____"
]
],
[
[
"for i in plot_target:\n print(i,\"귀무가설의 기각 여부 확인\")\n test_state, p_val = stats.kstest(copy_insurance[i],'norm',args=(copy_insurance[i].mean(), copy_insurance[i].var()**0.5) )\n print(\"Test-statistics : {:.5f}, p-value : {:.5f}\".format(test_state, p_val))\n print()",
"AGE 귀무가설의 기각 여부 확인\nTest-statistics : 0.05453, p-value : 0.00000\n\nCHLD_CNT 귀무가설의 기각 여부 확인\nTest-statistics : 0.36416, p-value : 0.00000\n\nCLAIM_NUM 귀무가설의 기각 여부 확인\nTest-statistics : 0.25656, p-value : 0.00000\n\nCUST_INCM 귀무가설의 기각 여부 확인\nTest-statistics : 0.29274, p-value : 0.00000\n\nCUST_RGST 귀무가설의 기각 여부 확인\nTest-statistics : 0.30788, p-value : 0.00000\n\nDISTANCE 귀무가설의 기각 여부 확인\nTest-statistics : 0.27584, p-value : 0.00000\n\nHOUSE_HOSP_DIST 귀무가설의 기각 여부 확인\nTest-statistics : 0.32806, p-value : 0.00000\n\nPAYM_AMT 귀무가설의 기각 여부 확인\nTest-statistics : 0.42303, p-value : 0.00000\n\nRCBASE_HSHD_INCM 귀무가설의 기각 여부 확인\nTest-statistics : 0.11014, p-value : 0.00000\n\nRESI_COST 귀무가설의 기각 여부 확인\nTest-statistics : 0.13425, p-value : 0.00000\n\nRESN_DATE_NUM 귀무가설의 기각 여부 확인\nTest-statistics : 0.25856, p-value : 0.00000\n\nSUM_ORIG_PREM 귀무가설의 기각 여부 확인\nTest-statistics : 0.44341, p-value : 0.00000\n\nTOTALPREM 귀무가설의 기각 여부 확인\nTest-statistics : 0.27630, p-value : 0.00000\n\n"
]
],
[
[
"##### AGE를 제외한 모든 컬럼이 정규분포를 따르지 않으므로 MinMaxScaler를 이용해 정규화 적용",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import MinMaxScaler\n\nint_data = copy_insurance[plot_target]\n\n# 인덱스 빼두기 \nindex = int_data.index\n\n# MinMaxcaler 객체 생성\nscaler = MinMaxScaler()\n\n# MinMaxcaler로 데이터 셋 변환 .fit( ) 과 .transform( ) 호출\nscaler.fit(int_data)\n\ndata_scaled = scaler.transform(int_data)\n\n# int_data.loc[:,:] = data_scaled\n\n# transform( )시 scale 변환된 데이터 셋이 numpy ndarry로 반환되어 이를 DataFrame으로 변환\ndata_scaled = pd.DataFrame(data=data_scaled, columns=int_data.columns, index=index)\n\nprint('feature 들의 정규화 최소 값')\nprint(data_scaled.min())\nprint('\\nfeature 들의 정규화 최대 값')\nprint(data_scaled.max())",
"feature 들의 정규화 최소 값\nAGE 0.0\nCHLD_CNT 0.0\nCLAIM_NUM 0.0\nCUST_INCM 0.0\nCUST_RGST 0.0\nDISTANCE 0.0\nHOUSE_HOSP_DIST 0.0\nPAYM_AMT 0.0\nRCBASE_HSHD_INCM 0.0\nRESI_COST 0.0\nRESN_DATE_NUM 0.0\nSUM_ORIG_PREM 0.0\nTOTALPREM 0.0\ndtype: float64\n\nfeature 들의 정규화 최대 값\nAGE 1.0\nCHLD_CNT 1.0\nCLAIM_NUM 1.0\nCUST_INCM 1.0\nCUST_RGST 1.0\nDISTANCE 1.0\nHOUSE_HOSP_DIST 1.0\nPAYM_AMT 1.0\nRCBASE_HSHD_INCM 1.0\nRESI_COST 1.0\nRESN_DATE_NUM 1.0\nSUM_ORIG_PREM 1.0\nTOTALPREM 1.0\ndtype: float64\n"
]
],
[
[
"##### label컬럼을 제외한 나머지 카테고리 데이터들은 원핫 인코딩을 진행",
"_____no_output_____"
]
],
[
[
"onehot_target = str_col(copy_insurance)\n\nonehot_target.remove('SIU_CUST_YN')\n\nstr_data = copy_insurance[onehot_target]\n\nonehot_data = pd.get_dummies(str_data)",
"['ACCI_DVSN', 'CUST_ROLE', 'DMND_RESN_CODE', 'FP_CAREER', 'HEED_HOSP_YN', 'MAXCRDT', 'MINCRDT', 'OCCP_GRP_1', 'RESI_TYPE_CODE', 'SEX', 'SIU_CUST_YN', 'WEDD_YN']\n"
]
],
[
[
"#### 인코딩과 스케일링 데이터, 라벨을 합쳐서 저장",
"_____no_output_____"
]
],
[
[
"concat_data = pd.concat([data_scaled, onehot_data, copy_insurance['SIU_CUST_YN']], axis=1)\n\nconcat_data.to_csv('./temp_data/save_scaled_insurance.csv',index = True)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d05389960c75a2c2d12e3104a9ba39468758aa5e | 4,925 | ipynb | Jupyter Notebook | data_processing/05_repertoire-classification-subsampling.ipynb | Linda-Lan/grp_paper | 4383d036fcd38b5a3c1d5911f4939e5810bd6330 | [
"MIT"
] | 22 | 2018-07-12T18:12:48.000Z | 2022-03-31T05:55:34.000Z | data_processing/05_repertoire-classification-subsampling.ipynb | Linda-Lan/grp_paper | 4383d036fcd38b5a3c1d5911f4939e5810bd6330 | [
"MIT"
] | 6 | 2019-01-24T09:14:03.000Z | 2021-06-14T17:21:03.000Z | data_processing/05_repertoire-classification-subsampling.ipynb | Linda-Lan/grp_paper | 4383d036fcd38b5a3c1d5911f4939e5810bd6330 | [
"MIT"
] | 7 | 2019-02-23T17:13:26.000Z | 2021-11-09T17:27:05.000Z | 36.481481 | 582 | 0.60264 | [
[
[
"# Repertoire classification subsampling\n\nWhen training a classifier to assign repertoires to the subject from which they were obtained, we need a set of subsampled sequences. The sequences have been condensed to just the V- and J-gene assignments and the CDR3 length (VJ-CDR3len). Subsample sizes range from 10 to 10,000 sequences per biological replicate.\n\nThe [`abutils`](https://www.github.com/briney/abutils) Python package is required for this notebook, and can be installed by running `pip install abutils`.\n\n*NOTE: this notebook requires the use of the Unix command line tool `shuf`. Thus, it requires a Unix-based operating system to run correctly (MacOS and most flavors of Linux should be fine). Running this notebook on Windows 10 may be possible using the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about) but we have not tested this.*",
"_____no_output_____"
]
],
[
[
"from __future__ import print_function, division\n\nfrom collections import Counter\nimport os\nimport subprocess as sp\nimport sys\nimport tempfile\n\nfrom abutils.utils.pipeline import make_dir",
"_____no_output_____"
]
],
[
[
"## Subjects, subsample sizes, and directories\n\nThe `input_dir` should contain deduplicated clonotype sequences. The datafiles are too large to be included in the Github repository, but may be downloaded [**here**](http://burtonlab.s3.amazonaws.com/GRP_github_data/techrep-merged_vj-cdr3len_no-header.tar.gz). If downloading the data (which will be downloaded as a compressed archive), decompress the archive in the `data` directory (in the same parent directory as this notebook) and you should be ready to go. If you want to store the downloaded data in some other location, adjust the `input_dir` path below as needed.\n\nBy default, subsample sizes increase by 10 from 10 to 100, by 100 from 100 to 1,000, and by 1,000 from 1,000 to 10,000.",
"_____no_output_____"
]
],
[
[
"with open('./data/subjects.txt') as f:\n subjects = sorted(f.read().split())\n\nsubsample_sizes = list(range(10, 100, 10)) + list(range(100, 1000, 100)) + list(range(1000, 11000, 1000))\n\ninput_dir = './data/techrep-merged_vj-cdr3len_no-header/'\nsubsample_dir = './data/repertoire_classification/user-created_subsamples_vj-cdr3len'\nmake_dir(subsample_dir)",
"_____no_output_____"
]
],
[
[
"## Subsampling",
"_____no_output_____"
]
],
[
[
"def subsample(infile, outfile, n_seqs, iterations):\n with open(outfile, 'w') as f:\n f.write('')\n shuf_cmd = 'shuf -n {} {}'.format(n_seqs, infile)\n p = sp.Popen(shuf_cmd, stdout=sp.PIPE, stderr=sp.PIPE, shell=True)\n stdout, stderr = p.communicate()\n with open(outfile, 'a') as f:\n for iteration in range(iterations):\n seqs = ['_'.join(s.strip().split()) for s in stdout.strip().split('\\n') if s.strip()]\n counts = Counter(seqs)\n count_strings = []\n for k, v in counts.items():\n count_strings.append('{}:{}'.format(k, v))\n f.write(','.join(count_strings) + '\\n')",
"_____no_output_____"
],
[
"for subject in subjects:\n print(subject)\n files = list_files(os.path.join(input_dir, subject))\n for file_ in files:\n for subsample_size in subsample_sizes:\n num = os.path.basename(file_).split('_')[0]\n ofile = os.path.join(subsample_dir, '{}_{}-{}'.format(subject, subsample_size, num))\n subsample(file_, ofile, subsample_size, 50)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d05390fe30983a8869e2a97bdfd6864c0d0306de | 51,562 | ipynb | Jupyter Notebook | docs/examples/strata.ipynb | giocaizzi/mplStrater | a44179a528b78be38c8ee63c62d353476a24e84a | [
"MIT"
] | 2 | 2022-01-13T15:45:00.000Z | 2022-01-19T21:11:05.000Z | docs/examples/strata.ipynb | giocaizzi/mplStrater | a44179a528b78be38c8ee63c62d353476a24e84a | [
"MIT"
] | 3 | 2021-12-25T09:00:50.000Z | 2021-12-26T20:53:05.000Z | docs/examples/strata.ipynb | giocaizzi/mplStrater | a44179a528b78be38c8ee63c62d353476a24e84a | [
"MIT"
] | null | null | null | 111.365011 | 36,898 | 0.805787 | [
[
[
"# Strata objects: Legend and Column\n\nStrata is stratigraphic data.\n\nThe main object of `strata` submodule is `mplStrater.strata.Column` which represents the single stratigraphic column.\nThis example shows the structure of the class and how to use it.\n\nFirst, import all required packages and load the example dataset.",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\nfrom mplStrater.data import StrataFrame\nfrom mplStrater.strata import Column,Legend\n\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"df=pd.read_csv(\"../../../data/example.csv\")\ndf.head()",
"_____no_output_____"
]
],
[
[
"Then, initiate a `mpl.StrataFrame` providing a `pandas.DataFrame` and specifying its `epsg` code. ",
"_____no_output_____"
]
],
[
[
"sf=StrataFrame(\n df=df,\n epsg=32633)",
"_____no_output_____"
]
],
[
[
"## Define a `Legend`.\n\nThis is done providing a dictionary containing pairs of (value-specification) the `fill_dict` parameter and for the `hatch_fill` parameter.\n\nThe dictionary matches dataframe `fill` and `hatch` column values to either a *matplotlib encoded color* or *encoded hatch* string.\n\nThe example uses the following dictionaries.",
"_____no_output_____"
]
],
[
[
"fill_dict={\n 'Terreno conforme': 'lightgreen',\n 'Riporto conforme': 'darkgreen',\n 'Riporto non conforme': 'orange',\n 'Rifiuto': 'red',\n 'Assenza campione': 'white'\n }\n\nhatch_dict={\n 'Non pericoloso': '',\n 'Pericoloso': 'xxxxxxxxx',\n '_': ''\n }",
"_____no_output_____"
],
[
"l=Legend(\n fill_dict=fill_dict,\n hatch_dict=hatch_dict\n)",
"_____no_output_____"
]
],
[
[
"## Plot stand-alone `Column` objects\n\nImagine we would need to inspect closely a column. It's not sure that we would be able to clearly do it on the map with all other elements (labels, basemap...). Unless exporting the map in pdf with a high resolution, open the local file... would take sooo long! Therefore `Column` object has its own `plot()` method.\n\nLet's plot the first three columns of the strataframe.",
"_____no_output_____"
]
],
[
[
"sf.strataframe[:3]",
"_____no_output_____"
]
],
[
[
"Plot the first three columns contained in the `StrataFrame`.",
"_____no_output_____"
]
],
[
[
"#create figure\nf,axes=plt.subplots(1,4,figsize=(5,3),dpi=200,frameon=False)\nfor ax,i in zip(axes,range(4)):\n ax.axis('off')\n #instantiate class\n c=Column(\n #figure\n ax,l,\n #id\n sf.strataframe.loc[i,\"ID\"],\n #coords\n (0.9,0.9),\n #scale\n sf.strataframe.loc[i,\"scale\"],\n 3,\n #stratigraphic data\n sf.strataframe.loc[i,\"layers\"],\n sf.strataframe.loc[i,\"fill_list\"],\n sf.strataframe.loc[i,\"hatch_list\"],\n #labels\n sf.strataframe.loc[i,\"lbl1_list\"],\n sf.strataframe.loc[i,\"lbl2_list\"],\n sf.strataframe.loc[i,\"lbl3_list\"])\n ax.set_title(c.id)\n c.fill_column()\n c.set_inset_params()\n c.label_column(hardcoding=None)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d053a20196cf22aca69ada6738b97301fda30d77 | 7,588 | ipynb | Jupyter Notebook | notebooks/10/random_choice.ipynb | matthew-brett/cfd-uob | cc9233a26457f5e688ed6297ebbf410786cfd806 | [
"CC-BY-4.0"
] | 1 | 2019-09-30T13:31:41.000Z | 2019-09-30T13:31:41.000Z | notebooks/10/random_choice.ipynb | matthew-brett/cfd-uob | cc9233a26457f5e688ed6297ebbf410786cfd806 | [
"CC-BY-4.0"
] | 1 | 2020-08-14T11:16:11.000Z | 2020-08-14T11:16:11.000Z | notebooks/10/random_choice.ipynb | matthew-brett/cfd-uob | cc9233a26457f5e688ed6297ebbf410786cfd806 | [
"CC-BY-4.0"
] | 5 | 2019-12-03T00:54:39.000Z | 2020-09-21T14:30:43.000Z | 21.994203 | 234 | 0.52359 | [
[
[
"Sometimes it is useful to take a random choice between two or more options.\n\nNumpy has a function for that, called `random.choice`:",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"Say we want to choose randomly between 0 and 1. We want an equal probability of getting 0 and getting 1. We could do it like this:",
"_____no_output_____"
]
],
[
[
"np.random.randint(0, 2)",
"_____no_output_____"
]
],
[
[
"If we do that lots of times, we see that we have a roughly 50% chance of getting 0 (and therefore, a roughly 50% chance of getting 1).",
"_____no_output_____"
]
],
[
[
"# Make 10000 random numbers that can be 0 or 1, with equal probability.\nlots_of_0_1 = np.random.randint(0, 2, size=10000)\n# Count the proportion that are 1.\nnp.count_nonzero(lots_of_0_1) / 10000",
"_____no_output_____"
]
],
[
[
"Run the cell above a few times to confirm you get numbers very close to 0.5.",
"_____no_output_____"
],
[
"Another way of doing this is to use `np.random.choice`.\n\nAs usual, check the arguments that the function expects with `np.random.choice?` in a notebook cell.\n\nThe first argument is a sequence, like a list, with the options that Numpy should chose from.\n\nFor example, we can ask Numpy to choose randomly from the list `[0, 1]`:",
"_____no_output_____"
]
],
[
[
"np.random.choice([0, 1])",
"_____no_output_____"
]
],
[
[
"A second `size` argument to the function says how many items to choose:",
"_____no_output_____"
]
],
[
[
"# Ten numbers, where each has a 50% chance of 0 and 50% chance of 1.\nnp.random.choice([0, 1], size=10)",
"_____no_output_____"
]
],
[
[
"By default, Numpy will chose each item in the sequence with equal probability, In this case, Numpy will chose 0 with 50% probability, and 1 with 50% probability:",
"_____no_output_____"
]
],
[
[
"# Use choice to make another 10000 random numbers that can be 0 or 1,\n# with equal probability.\nmore_0_1 = np.random.choice([0, 1], size=10000)\n# Count the proportion that are 1.\nnp.count_nonzero(more_0_1) / 10000",
"_____no_output_____"
]
],
[
[
"If you want, you can change these proportions with the `p` argument:",
"_____no_output_____"
]
],
[
[
"# Use choice to make another 10000 random numbers that can be 0 or 1,\n# where 0 has probability 0.25, and 1 has probability 0.75.\nweighted_0_1 = np.random.choice([0, 1], size=10000, p=[0.25, 0.75])\n# Count the proportion that are 1.\nnp.count_nonzero(weighted_0_1) / 10000",
"_____no_output_____"
]
],
[
[
"There can be more than two choices:",
"_____no_output_____"
]
],
[
[
"# Use choice to make another 10000 random numbers that can be 0 or 10 or 20, or\n# 30, where each has probability 0.25.\nmulti_nos = np.random.choice([0, 10, 20, 30], size=10000)\nmulti_nos[:10]",
"_____no_output_____"
],
[
"np.count_nonzero(multi_nos == 30) / 10000",
"_____no_output_____"
]
],
[
[
"The choices don't have to be numbers:",
"_____no_output_____"
]
],
[
[
"np.random.choice(['Heads', 'Tails'], size=10)",
"_____no_output_____"
]
],
[
[
"You can also do choices *without replacement*, so once you have chosen an element, all subsequent choices cannot chose that element again. For example, this *must* return all the elements from the choices, but in random order:",
"_____no_output_____"
]
],
[
[
"np.random.choice([0, 10, 20, 30], size=4, replace=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d053a807915d27aa8d94dada0a07fdfda860375c | 55,086 | ipynb | Jupyter Notebook | resources/adult.ipynb | maropu/spark-data-repair-plugin | 17118ef431313a7d78d6a8de3c5c4cd2f98851d7 | [
"Apache-2.0"
] | 6 | 2021-04-07T20:23:45.000Z | 2022-01-28T08:12:00.000Z | resources/adult.ipynb | maropu/spark-data-repair-plugin | 17118ef431313a7d78d6a8de3c5c4cd2f98851d7 | [
"Apache-2.0"
] | null | null | null | resources/adult.ipynb | maropu/spark-data-repair-plugin | 17118ef431313a7d78d6a8de3c5c4cd2f98851d7 | [
"Apache-2.0"
] | null | null | null | 55.086 | 6,135 | 0.488454 | [
[
[
"package_jar = '../target/spark-data-repair-plugin_2.12_spark3.2_0.1.0-EXPERIMENTAL-with-dependencies.jar'",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nfrom pyspark.sql import *\nfrom pyspark.sql.types import *\nfrom pyspark.sql import functions as f\n\nspark = SparkSession.builder \\\n .config('spark.jars', package_jar) \\\n .config('spark.deriver.memory', '8g') \\\n .enableHiveSupport() \\\n .getOrCreate()\n\n# Suppresses user warinig messages in Python\nimport warnings\nwarnings.simplefilter(\"ignore\", UserWarning)\n\n# Suppresses `WARN` messages in JVM\nspark.sparkContext.setLogLevel(\"ERROR\")",
"NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes ahead of assembly.\n21/10/26 21:55:42 WARN Utils: Your hostname, maropus-MacBook-Pro.local resolves to a loopback address: 127.0.0.1; using 192.168.3.4 instead (on interface en0)\n21/10/26 21:55:42 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address\n21/10/26 21:55:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\nUsing Spark's default log4j profile: org/apache/spark/log4j-defaults.properties\nSetting default log level to \"WARN\".\nTo adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).\n"
],
[
"from repair.api import Scavenger\nScavenger().version()",
"_____no_output_____"
],
[
"spark.read.option(\"header\", True).csv(\"../testdata/adult.csv\").createOrReplaceTempView(\"adult\")\nspark.table('adult').printSchema()",
"root\n |-- tid: string (nullable = true)\n |-- Age: string (nullable = true)\n |-- Education: string (nullable = true)\n |-- Occupation: string (nullable = true)\n |-- Relationship: string (nullable = true)\n |-- Sex: string (nullable = true)\n |-- Country: string (nullable = true)\n |-- Income: string (nullable = true)\n\n"
],
[
"import altair as alt\n\ncharts = []\npdf = spark.table('adult').toPandas()\n\nfor c in [c for c in pdf.columns if c != 'tid']:\n charts.append(alt.Chart(pdf).mark_bar().encode(x=alt.X(c), y=alt.Y('count()', axis=alt.Axis(title='freq'))).properties(width=300, height=300))\n\nalt.hconcat(*charts)",
"_____no_output_____"
],
[
"from repair.detectors import NullErrorDetector, ConstraintErrorDetector\nerror_detectors = [ \n ConstraintErrorDetector(constraint_path=\"../testdata/adult_constraints.txt\"),\n NullErrorDetector()\n]\n\nfrom repair.model import RepairModel\nmodel = RepairModel().setTableName('adult').setRowId('tid')\nnoisy_cells_df, noisy_columns = model.setErrorDetectors(error_detectors)._detect_errors('adult', 8, 20)",
" \r"
],
[
"import altair as alt\n\npdf = noisy_cells_df.toPandas()\nalt.Chart(pdf).mark_bar().encode(x=alt.X('attribute'), y=alt.Y('count()', axis=alt.Axis(title='freq'))).properties(width=400, height=400)",
"_____no_output_____"
],
[
"discretized_table, discretized_columns, distinct_stats = model._discretize_attrs('adult')\ndiscretized_columns",
"_____no_output_____"
],
[
"target_columns = list(filter(lambda c: c in discretized_columns, noisy_columns))\ntarget_columns",
"_____no_output_____"
],
[
"cell_domain, pairwise_stats = model._analyze_error_cell_domain(noisy_cells_df, discretized_table, [], target_columns, discretized_columns, 20)",
" \r"
],
[
"import altair as alt\n\ncharts = []\n\nfor target, cols in pairwise_stats.items():\n pdf = pd.DataFrame(cols, columns=[target, 'cor'])\n pdf['cor'] = pdf['cor'].astype('float')\n charts.append(alt.Chart(pdf).mark_bar().encode(x=alt.X(target), y=alt.Y('cor')).properties(width=200, height=200))\n \nalt.hconcat(*charts)",
"_____no_output_____"
],
[
"error_cells_df, weak_labeled_cells_df_opt = model._extract_error_cells(noisy_cells_df, cell_domain, 20, 8)",
" \r"
],
[
"repair_base_df = model._prepare_repair_base_cells('adult', noisy_cells_df, target_columns, 20, 8)\nrepair_base_df = model._repair_attrs(weak_labeled_cells_df_opt, repair_base_df)",
"_____no_output_____"
],
[
"import altair as alt\n\ncharts = []\npdf = repair_base_df.toPandas()\n\nfor c in [c for c in pdf.columns if c != 'tid']:\n charts.append(alt.Chart(pdf).mark_bar().encode(x=alt.X(c), y=alt.Y('count()', axis=alt.Axis(title='freq'))).properties(width=300, height=300))\n\nalt.hconcat(*charts)",
" \r"
],
[
"target = 'Sex'",
"_____no_output_____"
],
[
"pdf = repair_base_df.toPandas()\npdf = pdf.dropna()\nX = pdf.drop(['tid', target], axis=1).reset_index(drop=True)\ny = pdf[target].reset_index(drop=True)",
"_____no_output_____"
],
[
"import category_encoders as ce\nse = ce.OrdinalEncoder(handle_unknown='impute')\nX = se.fit_transform(X)\nX",
"_____no_output_____"
],
[
"import altair as alt\n\npdf = pd.concat([X, y], axis=1)\n\nalt.Chart(pdf).mark_circle().encode(\n alt.X(alt.repeat(\"column\"), type='quantitative'),\n alt.Y(alt.repeat(\"row\"), type='quantitative'),\n color=f'{target}:N'\n).properties(width=200, height=200).repeat(row=X.columns.tolist(), column=X.columns.tolist())",
"_____no_output_____"
],
[
"# One of non-linear embedding in sklearn\nfrom sklearn.manifold import TSNE\n\ntsne = TSNE(n_components=2, random_state=0)\n_X = tsne.fit_transform(X)\ntsne.kl_divergence_",
"_____no_output_____"
],
[
"import altair as alt\n\n_X = pd.DataFrame({'tSNE-X': _X[:, 0], 'tSNE-Y': _X[:, 1], target: y})\nalt.Chart(_X).mark_point().encode(x='tSNE-X', y='tSNE-Y', color=f'{target}:N').properties(width=600, height=400).interactive()",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nfrom boruta import BorutaPy\n\nrf = RandomForestClassifier(n_jobs=-1, max_depth=5)\nrf.fit(X, y)\nprint('SCORE with ALL Features: %1.2f\\n' % rf.score(X, y))\n\nrf = RandomForestClassifier(n_jobs=-1, max_depth=5)\nfs = BorutaPy(rf, n_estimators='auto', random_state=0)\nfs.fit(X.values, y.values)\n\nselected = fs.support_\nprint('Selected Features: %s' % ','.join(X.columns[selected]))\n\nX_selected = X[X.columns[selected]]\nrf = RandomForestClassifier(n_jobs=-1, max_depth=5)\nrf.fit(X_selected, y)\nprint('SCORE with selected Features: %1.2f' % rf.score(X_selected, y))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d053cc56e99419356b527530267331bad2a4a03d | 664,798 | ipynb | Jupyter Notebook | Capsule_ network.ipynb | noureldinalaa/Capsule-Networks | a2a24789d6c9f69f13b5ea0eedda9a6d02942fb2 | [
"MIT"
] | 5 | 2020-12-08T11:57:15.000Z | 2020-12-22T01:02:25.000Z | Capsule_ network.ipynb | noureldinalaa/Capsule-Networks | a2a24789d6c9f69f13b5ea0eedda9a6d02942fb2 | [
"MIT"
] | null | null | null | Capsule_ network.ipynb | noureldinalaa/Capsule-Networks | a2a24789d6c9f69f13b5ea0eedda9a6d02942fb2 | [
"MIT"
] | 1 | 2020-12-08T09:10:25.000Z | 2020-12-08T09:10:25.000Z | 509.814417 | 112,932 | 0.935963 | [
[
[
"# Capsule Network",
"_____no_output_____"
],
[
"In this notebook i will try to explain and implement Capsule Network. MNIST images will be used as an input.",
"_____no_output_____"
],
[
"To implement capsule Network, we need to understand what are capsules first and what advantages do they have compared to convolutional neural network.\n\n### so what are capsules?\n\n* Briefly explaining it, capsules are small group of neurons where each neuron in a capsule represents various properties of a particular image part.\n* Capsules represent relationships between parts of a whole object by using **dynamic routing** to weight the connections between one layer of capsules and the next and creating strong connections between spatially-related object parts, will be discussed later.\n\n* The output of each capsule is a vector, this vector has a magnitude and orientation.\n * Magnitude : It is an indicates if that particular part of image is present or not. Basically we can summerize it as the probability of the part existance (It has to be between 0 and 1). \n \n * Oriantation : It changes if one of the properties of that particular image has changed.\n",
"_____no_output_____"
],
[
"Let us have an example to understand it more and make it clear. \nAs shown in the following image, capsules will detect a cat's face. As shown in the image the capsule consists of neurals with properties like the position,color,width and etc.. .Then we get a vector output with magnitude 0.9 which means we have 90% confidence that this is a cat face and we will get an orientation as well.\n\n(image from : https://cezannec.github.io/Capsule_Networks/)",
"_____no_output_____"
],
[
"But what if we have changed in these properties like we have flipped the cat's face,what will happen ? will it detect the cat face? \nYes it still will detect the cat's face with 90% confidance(with magnitude 0.9) but there will be a change in the oriantation(theta)to indicate a change in the properties.\n\n(image from: https://cezannec.github.io/Capsule_Networks/ )",
"_____no_output_____"
],
[
"### What advantages does it have compared to Convolutional Neural Network(CNN)?\n\n* CNN is looking for key features regadless their position. As shown in the following image, CNN will detect the left image as a face while capsule network will not detect them as it will check if they are in the correct postition or not.\n\n\n(image from:https://kndrck.co/posts/capsule_networks_explained/)\n\n* Capsules network is more rubust to affine transformations in data. if translation or rotation is done on test data, atrained Capsule network will preform better and will give higher accuracy than normal CNN.",
"_____no_output_____"
],
[
"# Model Architecture",
"_____no_output_____"
],
[
"The capsule network is consisting of two main parts:\n\n* A convolutional encoder.\n* A fully connected, linear decoder.\n\n\n\n(image from :[Hinton's paper(capsule networks orignal paper)](https://arxiv.org/pdf/1710.09829.pdf) )\n\nIn this Explantaion and implementation i will follow the architecture from [Hinton paper(capsule networks orignal paper)](https://arxiv.org/pdf/1710.09829.pdf)\n",
"_____no_output_____"
],
[
"# 1)Encoder",
"_____no_output_____"
],
[
"The ecnoder consists of three main layers as shown in the following image and the input layer which is from MNIST which has a dimension of 28 x28 \n\nplease notice the difference between this image and the previous image where the last layer is the decoder in the pravious image.\n\n\n",
"_____no_output_____"
],
[
"## A)The convolutional layer",
"_____no_output_____"
],
[
"So in Hinton's paper they have applied a kernel of size 9x9 to the input layer. This kernel has a depth of 256,stride =1 and padding = 0.This will give us an output of a dimenstion 20x20.\n\n**Note** :\nyou can calculate the output dimenstion by this eqaution, output = [(w-k+2p)/s]+1 , where:\n- w is the input size\n- k is the kernel size\n- p is padding \n- s is stride\n\nSo to clarify this more:\n- The input's dimension is (28,28,1) where the 28x28 is the input size and 1 is the number of channels.\n- Kernel's dimention is (9,9,1,256) where 9x9 is the kernel size ,1 is the number of channels and 256 is the depth of the kernel .\n- The output's dimension is (20,20,256) where 20x20 is the ouptut size and 256 is the stack of filtered images.",
"_____no_output_____"
],
[
"I think we are ready to start implementing the code now, so let us start by obtaining the MNIST data and create our DataLoaders for training and testing purposes.",
"_____no_output_____"
]
],
[
[
"# import resources\nimport numpy as np\nimport torch\n\n# random seed (for reproducibility)\nseed = 1\n# set random seed for numpy\nnp.random.seed(seed)\n# set random seed for pytorch\ntorch.manual_seed(seed)",
"_____no_output_____"
],
[
"from torchvision import datasets\nimport torchvision.transforms as transforms\n\n# number of subprocesses to use for data loading\nnum_workers = 0\n# how many samples per batch to load\nbatch_size = 20\n\n# convert data to Tensors\ntransform = transforms.ToTensor()\n\n# choose the training and test datasets\ntrain_data = datasets.MNIST(root='data', train=True,\n download=True, transform=transform)\n\ntest_data = datasets.MNIST(root='data', train=False, \n download=True, transform=transform)\n\n# prepare data loaders\ntrain_loader = torch.utils.data.DataLoader(train_data, \n batch_size=batch_size, \n num_workers=num_workers)\n\ntest_loader = torch.utils.data.DataLoader(test_data, \n batch_size=batch_size, \n num_workers=num_workers)",
"_____no_output_____"
]
],
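[
[
"As a quick, optional check (the batch we pull here is just for illustration), we can grab one batch from the `train_loader` defined above and look at its shape; with `batch_size = 20` we expect image tensors of size 20x1x28x28.",
"_____no_output_____"
]
],
[
[
"# quick look at one batch from the train_loader defined above\nimages, labels = next(iter(train_loader))\nprint('images:', images.shape)   # expected: torch.Size([20, 1, 28, 28])\nprint('labels:', labels.shape)   # expected: torch.Size([20])",
"_____no_output_____"
]
],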
[
[
"The nexts step is to create the convolutional layer as we explained:",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn\nimport torch.nn.functional as F\n",
"_____no_output_____"
],
[
"class ConvLayer(nn.Module):\n \n def __init__(self, in_channels=1, out_channels=256):\n '''Constructs the ConvLayer with a specified input and output size.\n These sizes has initial values from the paper.\n param input_channel: input depth of an image, default value = 1\n param output_channel: output depth of the convolutional layer, default value = 256\n '''\n super(ConvLayer, self).__init__()\n\n # defining a convolutional layer of the specified size\n self.conv = nn.Conv2d(in_channels, out_channels, \n kernel_size=9, stride=1, padding=0)\n\n def forward(self, x):\n \n # applying a ReLu activation to the outputs of the conv layer\n output = F.relu(self.conv(x)) # we will have dimensions (batch_size, 20, 20, 256)\n return output",
"_____no_output_____"
]
],
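[
[
"As a small, optional sanity check (the all-zeros batch below is just an illustrative stand-in for real data), we can pass a fake 28x28 image through the `ConvLayer` defined above and confirm the 20x20 spatial size and 256 channels given by the formula output = [(w-k+2p)/s]+1.",
"_____no_output_____"
]
],
[
[
"# sanity check of the conv layer output size using a dummy batch\ndummy_images = torch.zeros(1, 1, 28, 28)   # one fake single-channel 28x28 image\nconv_out = ConvLayer()(dummy_images)\nprint(conv_out.shape)             # expected: torch.Size([1, 256, 20, 20])\nprint((28 - 9 + 2*0)//1 + 1)      # the formula above also gives 20",
"_____no_output_____"
]
],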
[
[
"## B)Primary capsules",
"_____no_output_____"
],
[
"This layer is tricky but i will try to simplify it as much as i can.\nWe would like to convolute the first layer to a new layer with 8 primary capsules.\nTo do so we will follow Hinton's paper steps: \n- First step is to convolute our first Convolutional layer which has a dimension of (20 ,20 ,256) with a kernel of dimension(9,9,256,256) in which 9 is the kernel size,first 256 is the number of chanels from the first layer and the second 256 is the number of filters or the depth of the kernel.We will get an output with a dimension of (6,6,256) .\n- second step is to reshape this output to (6,6,8,32) where 8 is the number of capsules and 32 is the depth of each capsule .\n- Now the output of each capsule will have a dimension of (6,6,32) and we will reshape it to (32x32x6,1) = (1152,1) for each capsule.\n- Final step we will squash the output to have a magnitute between 0 and 1 as we have discussed earlier using the following equation :\n\n\nwhere Vj is the normalized output vector of capsule j, Sj is the total inputs of each capsule (which is the sum of weights over all the output vectors from the capsules in the layer below capsule).\n\n\n\nWe will use ModuleList container to loop on each capsule we have.\n ",
"_____no_output_____"
]
],
[
[
"class PrimaryCaps(nn.Module):\n \n def __init__(self, num_capsules=8, in_channels=256, out_channels=32):\n '''Constructs a list of convolutional layers to be used in \n creating capsule output vectors.\n param num_capsules: number of capsules to create\n param in_channels: input depth of features, default value = 256\n param out_channels: output depth of the convolutional layers, default value = 32\n '''\n super(PrimaryCaps, self).__init__()\n\n # creating a list of convolutional layers for each capsule I want to create\n # all capsules have a conv layer with the same parameters\n self.capsules = nn.ModuleList([\n nn.Conv2d(in_channels=in_channels, out_channels=out_channels, \n kernel_size=9, stride=2, padding=0)\n for _ in range(num_capsules)])\n \n def forward(self, x):\n '''Defines the feedforward behavior.\n param x: the input; features from a convolutional layer\n return: a set of normalized, capsule output vectors\n '''\n # get batch size of inputs\n batch_size = x.size(0)\n # reshape convolutional layer outputs to be (batch_size, vector_dim=1152, 1)\n u = [capsule(x).view(batch_size, 32 * 6 * 6, 1) for capsule in self.capsules]\n # stack up output vectors, u, one for each capsule\n u = torch.cat(u, dim=-1)\n # squashing the stack of vectors\n u_squash = self.squash(u)\n return u_squash\n \n def squash(self, input_tensor):\n '''Squashes an input Tensor so it has a magnitude between 0-1.\n param input_tensor: a stack of capsule inputs, s_j\n return: a stack of normalized, capsule output vectors, v_j\n '''\n squared_norm = (input_tensor ** 2).sum(dim=-1, keepdim=True)\n scale = squared_norm / (1 + squared_norm) # normalization coeff\n output_tensor = scale * input_tensor / torch.sqrt(squared_norm) \n return output_tensor",
"_____no_output_____"
]
],
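[
[
"To see the squash equation in isolation, the short check below (the random vectors and their scale are just illustrative assumptions) applies the `squash` method of `PrimaryCaps` to some large random inputs and confirms that the resulting magnitudes stay between 0 and 1.",
"_____no_output_____"
]
],
[
[
"# illustrative check of the squash non-linearity defined above\nprimary_caps = PrimaryCaps()\nu = torch.randn(2, 32 * 6 * 6, 8) * 5    # fake capsule inputs with large magnitudes\nv = primary_caps.squash(u)\nprint(u.norm(dim=-1).max().item())       # can be much larger than 1\nprint(v.norm(dim=-1).max().item())       # squashed to a value below 1",
"_____no_output_____"
]
],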
[
[
"## c)Digit capsules",
"_____no_output_____"
],
[
"As we have 10 digit classes from 0 to 9, this layer will have 10 capsules each capsule is for one digit.\nEach capsule takes an input of a batch of 1152 dimensional vector while the output is a ten 16 dimnsional vector.",
"_____no_output_____"
],
[
"### Dynamic Routing ",
"_____no_output_____"
],
[
"Dynamic routing is used to find the best matching between the best connections between the child layer and the possible parent.Main companents of the dynamic routing is the capsule routing.\nTo make it easier we can think of the capsule routing as it is backprobagation.we can use it to obtain the probability that a certain capsule’s output should go to a parent capsule in the next layer.\n\nAs shown in the following figure The first child capsule is connected to $s_{1}$ which is the fist possible parent capsule and to $s_{2}$ which is the second possible parent capsule.In the begining the coupling will have equal values like both of them are zeros then we start apply dynamic routing to adjust it.We will find for example that coupling coffecient connected with $s_{1}$ is 0.9 and coupling coffecient connected with $s_{2}$ is 0.1, that means the probability that first child capsule’s output should go to a parent capsule in the next layer.\n\n\n\n**Notes** \n\n - Across all connections between one child capsule and all possible parent capsules, the coupling coefficients should sum to 1.This means That $c_{11}$ + $c_{12}$ = 1\n \n - As shown in the following figure $s_{1}$ is the total inputs of each capsule (which is the sum of weights over all the output vectors from the capsules in the layer below capsule).\n \n - To check the similarity between the total inputs $s_{1}$ and each vector we will calculate the dot product between both of them, in this example we will find that $s_{1}$ is more similar to $u_{1}$ than $u_{2}$ or $u_{3}$ , This similarity called (agreement)\n \n \n",
"_____no_output_____"
],
[
"### Dynamic Routing Algorithm",
"_____no_output_____"
],
[
"The followin algorithm is from [Hinton's paper(capsule networks orignal paper)](https://arxiv.org/pdf/1710.09829.pdf)\n\n\n\n\n\n",
"_____no_output_____"
],
[
"we can simply explain the algorithm as folowing :\n- First we initialize the initial logits $b_{ij}$ of the softmax function with zero\n- calculate the capsule coefficiant using the softmax equation.\n$$c_{ij} = \\frac{e^{\\ b_{ij}}}{\\sum_{k}\\ {e^{\\ b_{ik}}}} $$\n\n- calculate the total capsule inputs $s_{1}$ .\n\n**Note**\n\n''\n- $ s_j = \\sum{c_{ij} \\ \\hat{u}}$\n\n- $ \\hat{u} = Wu $ where W is the weight matrix and u is the input vector\n\n''\n- squash to get a normalized vector output $v_{j}$\n- last step is composed of two steps, we will calculate agreement and the new $b_{ij}$ .The similarity (agremeent) is that we have discussed before,which is the cross product between prediction vector $\\hat{u}$ and parent capsule's output vector $s_{1}$ . The second step is to update $b_{ij}$ . \n\n $$\\hat{u} = W u $$$$a = v \\cdot u $$$$b_{ij} = b_{ij} + a $$",
"_____no_output_____"
]
],
[
[
"def softmax(input_tensor, dim=1): # to get transpose softmax function # for multiplication reason s_J\n # transpose input\n transposed_input = input_tensor.transpose(dim, len(input_tensor.size()) - 1)\n # calculate softmax\n softmaxed_output = F.softmax(transposed_input.contiguous().view(-1, transposed_input.size(-1)), dim=-1)\n # un-transpose result\n return softmaxed_output.view(*transposed_input.size()).transpose(dim, len(input_tensor.size()) - 1)",
"_____no_output_____"
],
[
"# dynamic routing\ndef dynamic_routing(b_ij, u_hat, squash, routing_iterations=3):\n '''Performs dynamic routing between two capsule layers.\n param b_ij: initial log probabilities that capsule i should be coupled to capsule j\n param u_hat: input, weighted capsule vectors, W u\n param squash: given, normalizing squash function\n param routing_iterations: number of times to update coupling coefficients\n return: v_j, output capsule vectors\n ''' \n # update b_ij, c_ij for number of routing iterations\n for iteration in range(routing_iterations):\n # softmax calculation of coupling coefficients, c_ij\n c_ij = softmax(b_ij, dim=2)\n\n # calculating total capsule inputs, s_j = sum(c_ij*u_hat)\n s_j = (c_ij * u_hat).sum(dim=2, keepdim=True)\n\n # squashing to get a normalized vector output, v_j\n v_j = squash(s_j)\n\n # if not on the last iteration, calculate agreement and new b_ij\n if iteration < routing_iterations - 1:\n # agreement\n a_ij = (u_hat * v_j).sum(dim=-1, keepdim=True)\n \n # new b_ij\n b_ij = b_ij + a_ij\n \n return v_j # return latest v_j",
"_____no_output_____"
]
],
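[
[
"As a tiny illustration of the first routing step (the toy shapes below are assumptions that mirror the (digit capsules, batch, 1152, 1, 1) layout used later), starting from `b_ij = 0` the softmax above yields uniform coupling coefficients that sum to 1 along the routed dimension.",
"_____no_output_____"
]
],
[
[
"# illustrative check: zero logits give uniform coupling coefficients\nb_ij = torch.zeros(10, 2, 32 * 6 * 6, 1, 1)   # (digit capsules, batch, primary capsule nodes, 1, 1)\nc_ij = softmax(b_ij, dim=2)\nprint(c_ij[0, 0, :3, 0, 0])                   # each coefficient starts at 1/1152\nprint(c_ij.sum(dim=2)[0, 0, 0, 0].item())     # coefficients sum to 1.0",
"_____no_output_____"
]
],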
[
[
"After implementing the dynamic routing we are ready to implement the Digitcaps class,which consisits of :\n- This layer is composed of 10 \"digit\" capsules, one for each of our digit classes 0-9.\n- Each capsule takes, as input, a batch of 1152-dimensional vectors produced by our 8 primary capsules, above.\n- Each of these 10 capsules is responsible for producing a 16-dimensional output vector.\n- we will inizialize the weights matrix randomly.\n",
"_____no_output_____"
]
],
[
[
"# it will also be relevant, in this model, to see if I can train on gpu\nTRAIN_ON_GPU = torch.cuda.is_available()\n\nif(TRAIN_ON_GPU):\n print('Training on GPU!')\nelse:\n print('Only CPU available')",
"Training on GPU!\n"
],
[
"class DigitCaps(nn.Module):\n \n def __init__(self, num_capsules=10, previous_layer_nodes=32*6*6, \n in_channels=8, out_channels=16):\n '''Constructs an initial weight matrix, W, and sets class variables.\n param num_capsules: number of capsules to create\n param previous_layer_nodes: dimension of input capsule vector, default value = 1152\n param in_channels: number of capsules in previous layer, default value = 8\n param out_channels: dimensions of output capsule vector, default value = 16\n '''\n super(DigitCaps, self).__init__()\n\n # setting class variables\n self.num_capsules = num_capsules\n self.previous_layer_nodes = previous_layer_nodes # vector input (dim=1152)\n self.in_channels = in_channels # previous layer's number of capsules\n\n # starting out with a randomly initialized weight matrix, W\n # these will be the weights connecting the PrimaryCaps and DigitCaps layers\n self.W = nn.Parameter(torch.randn(num_capsules, previous_layer_nodes, \n in_channels, out_channels))\n\n def forward(self, u):\n '''Defines the feedforward behavior.\n param u: the input; vectors from the previous PrimaryCaps layer\n return: a set of normalized, capsule output vectors\n '''\n \n # adding batch_size dims and stacking all u vectors\n u = u[None, :, :, None, :]\n # 4D weight matrix\n W = self.W[:, None, :, :, :]\n \n # calculating u_hat = W*u\n u_hat = torch.matmul(u, W)\n\n # getting the correct size of b_ij\n # setting them all to 0, initially\n b_ij = torch.zeros(*u_hat.size())\n \n # moving b_ij to GPU, if available\n if TRAIN_ON_GPU:\n b_ij = b_ij.cuda()\n\n # update coupling coefficients and calculate v_j\n v_j = dynamic_routing(b_ij, u_hat, self.squash, routing_iterations=3)\n\n return v_j # return final vector outputs\n \n \n def squash(self, input_tensor):\n '''Squashes an input Tensor so it has a magnitude between 0-1.\n param input_tensor: a stack of capsule inputs, s_j\n return: a stack of normalized, capsule output vectors, v_j\n '''\n # same squash function as before\n squared_norm = (input_tensor ** 2).sum(dim=-1, keepdim=True)\n scale = squared_norm / (1 + squared_norm) # normalization coeff\n output_tensor = scale * input_tensor / torch.sqrt(squared_norm) \n return output_tensor",
"_____no_output_____"
]
],
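[
[
"Here is a brief shape check (the batch of random primary-capsule vectors below is only an illustrative assumption): the `DigitCaps` layer should turn (batch, 1152, 8) inputs into ten 16-dimensional output vectors per example.",
"_____no_output_____"
]
],
[
[
"# illustrative shape check for the DigitCaps layer\ndigit_caps = DigitCaps()\nfake_primary_out = torch.randn(2, 32 * 6 * 6, 8)\nif TRAIN_ON_GPU:\n    digit_caps, fake_primary_out = digit_caps.cuda(), fake_primary_out.cuda()\nout = digit_caps(fake_primary_out)\nprint(out.shape)   # expected: torch.Size([10, 2, 1, 1, 16]) before squeeze/transpose",
"_____no_output_____"
]
],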
[
[
"# 2)Decoder",
"_____no_output_____"
],
[
"As shown in the following figure from [Hinton's paper(capsule networks orignal paper)](https://arxiv.org/pdf/1710.09829.pdf), The decoder is made of three fully-connected, linear layers. The first layer sees the 10, 16-dimensional output vectors from the digit capsule layer and produces hidden_dim=512 number of outputs. The next hidden layer = 1024 , and the third and final linear layer produces an output of 784 values which is a 28x28 image! \n\n",
"_____no_output_____"
]
],
[
[
"class Decoder(nn.Module):\n \n def __init__(self, input_vector_length=16, input_capsules=10, hidden_dim=512):\n '''Constructs an series of linear layers + activations.\n param input_vector_length: dimension of input capsule vector, default value = 16\n param input_capsules: number of capsules in previous layer, default value = 10\n param hidden_dim: dimensions of hidden layers, default value = 512\n '''\n super(Decoder, self).__init__()\n \n # calculate input_dim\n input_dim = input_vector_length * input_capsules\n \n # define linear layers + activations\n self.linear_layers = nn.Sequential(\n nn.Linear(input_dim, hidden_dim), # first hidden layer\n nn.ReLU(inplace=True),\n nn.Linear(hidden_dim, hidden_dim*2), # second, twice as deep\n nn.ReLU(inplace=True),\n nn.Linear(hidden_dim*2, 28*28), # can be reshaped into 28*28 image\n nn.Sigmoid() # sigmoid activation to get output pixel values in a range from 0-1\n )\n \n def forward(self, x):\n '''Defines the feedforward behavior.\n param x: the input; vectors from the previous DigitCaps layer\n return: two things, reconstructed images and the class scores, y\n '''\n classes = (x ** 2).sum(dim=-1) ** 0.5\n classes = F.softmax(classes, dim=-1)\n \n # find the capsule with the maximum vector length\n # here, vector length indicates the probability of a class' existence\n _, max_length_indices = classes.max(dim=1)\n \n # create a sparse class matrix\n sparse_matrix = torch.eye(10) # 10 is the number of classes\n if TRAIN_ON_GPU:\n sparse_matrix = sparse_matrix.cuda()\n # get the class scores from the \"correct\" capsule\n y = sparse_matrix.index_select(dim=0, index=max_length_indices.data)\n \n # create reconstructed pixels\n x = x * y[:, :, None]\n # flatten image into a vector shape (batch_size, vector_dim)\n flattened_x = x.contiguous().view(x.size(0), -1)\n # create reconstructed image vectors\n reconstructions = self.linear_layers(flattened_x)\n \n # return reconstructions and the class scores, y\n return reconstructions, y",
"_____no_output_____"
]
],
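[
[
"Similarly, a brief check of the decoder (the random capsule vectors and batch of 2 are illustrative assumptions): it should return flattened 28x28 reconstructions together with one-hot style class scores.",
"_____no_output_____"
]
],
[
[
"# illustrative shape check for the Decoder\ndecoder = Decoder()\nfake_caps_out = torch.randn(2, 10, 16)\nif TRAIN_ON_GPU:\n    decoder, fake_caps_out = decoder.cuda(), fake_caps_out.cuda()\nreconstructions, y = decoder(fake_caps_out)\nprint(reconstructions.shape, y.shape)   # expected: torch.Size([2, 784]) torch.Size([2, 10])",
"_____no_output_____"
]
],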
[
[
"Now let us collect all these layers (classes that we have created i.e ConvLayer,PrimaryCaps,DigitCaps,Decoder) in one class called CapsuleNetwork.",
"_____no_output_____"
]
],
[
[
"class CapsuleNetwork(nn.Module):\n \n def __init__(self):\n '''Constructs a complete Capsule Network.'''\n super(CapsuleNetwork, self).__init__()\n self.conv_layer = ConvLayer()\n self.primary_capsules = PrimaryCaps()\n self.digit_capsules = DigitCaps()\n self.decoder = Decoder()\n \n def forward(self, images):\n '''Defines the feedforward behavior.\n param images: the original MNIST image input data\n return: output of DigitCaps layer, reconstructed images, class scores\n '''\n primary_caps_output = self.primary_capsules(self.conv_layer(images))\n caps_output = self.digit_capsules(primary_caps_output).squeeze().transpose(0,1)\n reconstructions, y = self.decoder(caps_output)\n return caps_output, reconstructions, y",
"_____no_output_____"
]
],
[
[
"Let us now instantiate the model and print it.",
"_____no_output_____"
]
],
[
[
"# instantiate and print net\ncapsule_net = CapsuleNetwork()\n\nprint(capsule_net)\n\n# move model to GPU, if available \nif TRAIN_ON_GPU:\n capsule_net = capsule_net.cuda()",
"CapsuleNetwork(\n (conv_layer): ConvLayer(\n (conv): Conv2d(1, 256, kernel_size=(9, 9), stride=(1, 1))\n )\n (primary_capsules): PrimaryCaps(\n (capsules): ModuleList(\n (0): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))\n (1): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))\n (2): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))\n (3): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))\n (4): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))\n (5): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))\n (6): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))\n (7): Conv2d(256, 32, kernel_size=(9, 9), stride=(2, 2))\n )\n )\n (digit_capsules): DigitCaps()\n (decoder): Decoder(\n (linear_layers): Sequential(\n (0): Linear(in_features=160, out_features=512, bias=True)\n (1): ReLU(inplace=True)\n (2): Linear(in_features=512, out_features=1024, bias=True)\n (3): ReLU(inplace=True)\n (4): Linear(in_features=1024, out_features=784, bias=True)\n (5): Sigmoid()\n )\n )\n)\n"
]
],
[
[
"# Loss",
"_____no_output_____"
],
[
"The loss for a capsule network is a weighted combination of two losses:\n1. Reconstraction loss\n2. Margin loss",
"_____no_output_____"
],
[
"### Reconstraction Loss",
"_____no_output_____"
],
[
"- It checks how the reconstracted image which we get from the decoder diferent from the original input image.\n\n- It is calculated using mean squared error which is nn.MSELoss in pytorch.\n- In [Hinton's paper(capsule networks orignal paper)](https://arxiv.org/pdf/1710.09829.pdf) they have weighted reconstraction loss with a coefficient of 0.0005, so it wouldn't overpower margin loss.",
"_____no_output_____"
],
[
"### Margin Loss",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(filename='images/margin_loss.png')",
"_____no_output_____"
]
],
[
[
"Margin Loss is a classification loss (we can think of it as cross entropy) which is based on the length of the output vectors coming from the DigitCaps layer.\n\nso let us try to elaborate it more on our example.Let us say we have an output vector called (x) coming from the digitcap layer, this ouput vector represents a certain digit from 0 to 9 as we are using MNIST. Then we will square the length(take the square root of the squared value) of the corresponding output vector of that digit capsule $v_k = \\sqrt{x^2}$ . The right capsule should have an output vector of greater than or equal 0.9 ($v_k >=0.9$) value while other capsules should output of smaller than or eqaul 0.1( $v_k<=0.1$ ).\n\nSo, if we have an input image of a 0, then the \"correct,\" zero-detecting, digit capsule should output a vector of magnitude 0.9 or greater! For all the other digits (1-9, in this example) the corresponding digit capsule output vectors should have a magnitude that is 0.1 or less.\n\nThe following function is used to calculate the margin loss as it sums both sides of the 0.9 and 0.1 and k is the digit capsule.\n\n\n\n\nwhere($T_k = 1 $) if a digit of class k is present\nand $m^{+}$ = 0.9 and $m^{-}$ = 0.1. The λ down-weighting\nof the loss for absent digit classes stops the initial learning from shrinking the lengths of the activity vectors of all the digit capsules. In the paper they have choosen λ = 0.5. \n\n**Note** :\n\n\nThe total loss is simply the sum of the losses of all digit capsules.",
"_____no_output_____"
]
],
[
[
"class CapsuleLoss(nn.Module):\n \n def __init__(self):\n '''Constructs a CapsuleLoss module.'''\n super(CapsuleLoss, self).__init__()\n self.reconstruction_loss = nn.MSELoss(reduction='sum') # cumulative loss, equiv to size_average=False\n\n def forward(self, x, labels, images, reconstructions):\n '''Defines how the loss compares inputs.\n param x: digit capsule outputs\n param labels: \n param images: the original MNIST image input data\n param reconstructions: reconstructed MNIST image data\n return: weighted margin and reconstruction loss, averaged over a batch\n '''\n batch_size = x.size(0)\n\n ## calculate the margin loss ##\n \n # get magnitude of digit capsule vectors, v_c\n v_c = torch.sqrt((x**2).sum(dim=2, keepdim=True))\n\n # calculate \"correct\" and incorrect loss\n left = F.relu(0.9 - v_c).view(batch_size, -1)\n right = F.relu(v_c - 0.1).view(batch_size, -1)\n \n # sum the losses, with a lambda = 0.5\n margin_loss = labels * left + 0.5 * (1. - labels) * right\n margin_loss = margin_loss.sum()\n\n ## calculate the reconstruction loss ##\n images = images.view(reconstructions.size()[0], -1)\n reconstruction_loss = self.reconstruction_loss(reconstructions, images)\n\n # return a weighted, summed loss, averaged over a batch size\n return (margin_loss + 0.0005 * reconstruction_loss) / images.size(0)",
"_____no_output_____"
]
],
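[
[
"To make sure the loss runs end to end, the snippet below (all tensors are random, illustrative stand-ins for capsule outputs, one-hot labels, input images and reconstructions) evaluates `CapsuleLoss` once on fake data and prints a single scalar loss value.",
"_____no_output_____"
]
],
[
[
"# illustrative sanity check of CapsuleLoss on fake tensors\nfake_caps = torch.randn(2, 10, 16)         # pretend DigitCaps outputs\nfake_labels = torch.eye(10)[:2]            # two one-hot target rows\nfake_images = torch.rand(2, 1, 28, 28)     # pretend input images\nfake_recons = torch.rand(2, 28 * 28)       # pretend reconstructions\nprint(CapsuleLoss()(fake_caps, fake_labels, fake_images, fake_recons))",
"_____no_output_____"
]
],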
[
[
"Now we have to call the custom loss class we have implemented and we will use Adam optimizer as in the paper.",
"_____no_output_____"
]
],
[
[
"import torch.optim as optim\n\n# custom loss\ncriterion = CapsuleLoss()\n\n# Adam optimizer with default params\noptimizer = optim.Adam(capsule_net.parameters())",
"_____no_output_____"
]
],
[
[
"# Train the network",
"_____no_output_____"
],
[
"So the normal steps to do the training from a batch of data:\n\n1. Clear the gradients of all optimized variables, by making them zero.\n2. Forward pass: compute predicted outputs by passing inputs to the model\n3. Calculate the loss .\n4. Backward pass: compute gradient of the loss with respect to model parameters\n5. Perform a single optimization step (parameter update)\n6. Update average training loss \n ",
"_____no_output_____"
]
],
[
[
"def train(capsule_net, criterion, optimizer, \n n_epochs, print_every=300):\n '''Trains a capsule network and prints out training batch loss statistics.\n Saves model parameters if *validation* loss has decreased.\n param capsule_net: trained capsule network\n param criterion: capsule loss function\n param optimizer: optimizer for updating network weights\n param n_epochs: number of epochs to train for\n param print_every: batches to print and save training loss, default = 100\n return: list of recorded training losses\n '''\n\n # track training loss over time\n losses = []\n\n # one epoch = one pass over all training data \n for epoch in range(1, n_epochs+1):\n\n # initialize training loss\n train_loss = 0.0\n \n capsule_net.train() # set to train mode\n \n # get batches of training image data and targets\n for batch_i, (images, target) in enumerate(train_loader):\n\n # reshape and get target class\n target = torch.eye(10).index_select(dim=0, index=target)\n\n if TRAIN_ON_GPU:\n images, target = images.cuda(), target.cuda()\n\n # zero out gradients\n optimizer.zero_grad()\n # get model outputs\n caps_output, reconstructions, y = capsule_net(images)\n # calculate loss\n loss = criterion(caps_output, target, images, reconstructions)\n # perform backpropagation and optimization\n loss.backward()\n optimizer.step()\n\n train_loss += loss.item() # accumulated training loss\n \n # print and record training stats\n if batch_i != 0 and batch_i % print_every == 0:\n avg_train_loss = train_loss/print_every\n losses.append(avg_train_loss)\n print('Epoch: {} \\tTraining Loss: {:.8f}'.format(epoch, avg_train_loss))\n train_loss = 0 # reset accumulated training loss\n \n return losses",
"_____no_output_____"
],
[
"# training for 5 epochs\nn_epochs = 5\nlosses = train(capsule_net, criterion, optimizer, n_epochs=n_epochs)",
"Epoch: 1 \tTraining Loss: 0.25108408\nEpoch: 1 \tTraining Loss: 0.09796484\nEpoch: 1 \tTraining Loss: 0.07615296\nEpoch: 1 \tTraining Loss: 0.06122471\nEpoch: 1 \tTraining Loss: 0.05977095\nEpoch: 1 \tTraining Loss: 0.05478950\nEpoch: 1 \tTraining Loss: 0.05140611\nEpoch: 1 \tTraining Loss: 0.05044698\nEpoch: 1 \tTraining Loss: 0.04870245\nEpoch: 2 \tTraining Loss: 0.04324130\nEpoch: 2 \tTraining Loss: 0.04060882\nEpoch: 2 \tTraining Loss: 0.03622841\nEpoch: 2 \tTraining Loss: 0.03470477\nEpoch: 2 \tTraining Loss: 0.03626744\nEpoch: 2 \tTraining Loss: 0.03480921\nEpoch: 2 \tTraining Loss: 0.03538792\nEpoch: 2 \tTraining Loss: 0.03432405\nEpoch: 2 \tTraining Loss: 0.03438207\nEpoch: 3 \tTraining Loss: 0.03111325\nEpoch: 3 \tTraining Loss: 0.02989269\nEpoch: 3 \tTraining Loss: 0.02743311\nEpoch: 3 \tTraining Loss: 0.02656386\nEpoch: 3 \tTraining Loss: 0.02738586\nEpoch: 3 \tTraining Loss: 0.02737884\nEpoch: 3 \tTraining Loss: 0.02820305\nEpoch: 3 \tTraining Loss: 0.02727670\nEpoch: 3 \tTraining Loss: 0.02587884\nEpoch: 4 \tTraining Loss: 0.02593555\nEpoch: 4 \tTraining Loss: 0.02382935\nEpoch: 4 \tTraining Loss: 0.02312145\nEpoch: 4 \tTraining Loss: 0.02189966\nEpoch: 4 \tTraining Loss: 0.02289272\nEpoch: 4 \tTraining Loss: 0.02197252\nEpoch: 4 \tTraining Loss: 0.02546153\nEpoch: 4 \tTraining Loss: 0.02200746\nEpoch: 4 \tTraining Loss: 0.02378933\nEpoch: 5 \tTraining Loss: 0.02140641\nEpoch: 5 \tTraining Loss: 0.02041025\nEpoch: 5 \tTraining Loss: 0.02020690\nEpoch: 5 \tTraining Loss: 0.01983862\nEpoch: 5 \tTraining Loss: 0.02128812\nEpoch: 5 \tTraining Loss: 0.01994716\nEpoch: 5 \tTraining Loss: 0.02163137\nEpoch: 5 \tTraining Loss: 0.02023643\nEpoch: 5 \tTraining Loss: 0.02078038\n"
]
],
[
[
"Now let us plot the training loss to get more feeling how does the loss look like:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(losses)\nplt.title(\"Training Loss\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Test the trained network",
"_____no_output_____"
],
[
"Test the trained network on unseen data:",
"_____no_output_____"
]
],
[
[
"def test(capsule_net, test_loader):\n '''Prints out test statistics for a given capsule net.\n param capsule_net: trained capsule network\n param test_loader: test dataloader\n return: returns last batch of test image data and corresponding reconstructions\n '''\n class_correct = list(0. for i in range(10))\n class_total = list(0. for i in range(10))\n \n test_loss = 0 # loss tracking\n\n capsule_net.eval() # eval mode\n\n for batch_i, (images, target) in enumerate(test_loader):\n target = torch.eye(10).index_select(dim=0, index=target)\n\n batch_size = images.size(0)\n\n if TRAIN_ON_GPU:\n images, target = images.cuda(), target.cuda()\n\n # forward pass: compute predicted outputs by passing inputs to the model\n caps_output, reconstructions, y = capsule_net(images)\n # calculate the loss\n loss = criterion(caps_output, target, images, reconstructions)\n # update average test loss \n test_loss += loss.item()\n # convert output probabilities to predicted class\n _, pred = torch.max(y.data.cpu(), 1)\n _, target_shape = torch.max(target.data.cpu(), 1)\n\n # compare predictions to true label\n correct = np.squeeze(pred.eq(target_shape.data.view_as(pred)))\n # calculate test accuracy for each object class\n for i in range(batch_size):\n label = target_shape.data[i]\n class_correct[label] += correct[i].item()\n class_total[label] += 1\n\n # avg test loss\n avg_test_loss = test_loss/len(test_loader)\n print('Test Loss: {:.8f}\\n'.format(avg_test_loss))\n\n for i in range(10):\n if class_total[i] > 0:\n print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (\n str(i), 100 * class_correct[i] / class_total[i],\n np.sum(class_correct[i]), np.sum(class_total[i])))\n else:\n print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))\n\n print('\\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (\n 100. * np.sum(class_correct) / np.sum(class_total),\n np.sum(class_correct), np.sum(class_total)))\n \n # return last batch of capsule vectors, images, reconstructions\n return caps_output, images, reconstructions",
"_____no_output_____"
],
[
"# call test function and get reconstructed images\ncaps_output, images, reconstructions = test(capsule_net, test_loader)",
"Test Loss: 0.03073818\n\nTest Accuracy of 0: 99% (975/980)\nTest Accuracy of 1: 99% (1132/1135)\nTest Accuracy of 2: 99% (1027/1032)\nTest Accuracy of 3: 99% (1001/1010)\nTest Accuracy of 4: 98% (971/982)\nTest Accuracy of 5: 99% (886/892)\nTest Accuracy of 6: 98% (947/958)\nTest Accuracy of 7: 99% (1020/1028)\nTest Accuracy of 8: 99% (967/974)\nTest Accuracy of 9: 98% (993/1009)\n\nTest Accuracy (Overall): 99% (9919/10000)\n"
]
],
[
[
"Now it is time to dispaly the reconstructions:",
"_____no_output_____"
]
],
[
[
"def display_images(images, reconstructions):\n '''Plot one row of original MNIST images and another row (below) \n of their reconstructions.'''\n # convert to numpy images\n images = images.data.cpu().numpy()\n reconstructions = reconstructions.view(-1, 1, 28, 28)\n reconstructions = reconstructions.data.cpu().numpy()\n \n # plot the first ten input images and then reconstructed images\n fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(26,5))\n\n # input images on top row, reconstructions on bottom\n for images, row in zip([images, reconstructions], axes):\n for img, ax in zip(images, row):\n ax.imshow(np.squeeze(img), cmap='gray')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)",
"_____no_output_____"
],
[
"# display original and reconstructed images, in rows\ndisplay_images(images, reconstructions)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d053d089867eb30b7ae11f9da9fa10629b562d4d | 111,090 | ipynb | Jupyter Notebook | Lecture 5 Monte-Carlo Control.ipynb | oesst/rl_lecture_examples | 65fa4649ed5ab982e53cb3953818f03dd8735337 | [
"MIT"
] | null | null | null | Lecture 5 Monte-Carlo Control.ipynb | oesst/rl_lecture_examples | 65fa4649ed5ab982e53cb3953818f03dd8735337 | [
"MIT"
] | null | null | null | Lecture 5 Monte-Carlo Control.ipynb | oesst/rl_lecture_examples | 65fa4649ed5ab982e53cb3953818f03dd8735337 | [
"MIT"
] | null | null | null | 208.815789 | 57,844 | 0.903943 | [
[
[
"from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:60% !important; }</style>\"))",
"_____no_output_____"
]
],
[
[
"# Monte Carlo Control\n\nSo far, we assumed that we know the underlying model of the environment and that the agent has access to it. \nNow, we considere the case in which do not have access to the full MDP. That is, we do __model-free control__ now.\n\nTo illustrate this, we implement the black jack example from the RL Lecture 5 by David Silver for Monte Carlo Control [see example](https://youtu.be/0g4j2k_Ggc4?t=2193)\n\nWe use Monte-Carlo policy evaluation based on the action-value function $Q=q_\\pi$ and then a $\\epsilon$-greedy exploration (greedy exploration with probability to choose a random move).\n\nRemember: $ G_t = R_{t+1} + \\gamma R_{t+2} + ... + \\sum_{k=0} \\gamma^k \\cdot R_{t+k+1}$\n\n__Algorithm:__\n* Update $V(s)$ incrementally after each episode\n* For each state $S_t$ with return $G_t$ do:\n * $N(S_t) \\gets N(S_t) +1$\n * $Q(S_t,A_t) \\gets Q(S_t,A_t) + \\frac{1}{N(S_t)} \\cdot (G_t - V(S_t,A_t))$\n * Which corresponds to the _actual return_ ($G_t$) - the _estimated return_ ($Q(S_t,A_t)$)\n * $\\frac{1}{N(S_t)}$ is a weighting factor that let us forget old episodes slowly\n* Improve policy based on new action-value function\n * $\\epsilon \\gets \\frac{1}{k}$\n * $\\lambda \\gets \\epsilon-greedy(Q)$\n\nMC converges to solution with minimum mean squared error.",
"_____no_output_____"
]
],
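[
[
"A toy illustration (not part of the lecture) of the incremental-mean update used above, for a single state-action pair with made-up returns:\n\n```python\nN = 0\nQ = 0.0\nfor G in [1.0, -1.0, 1.0, 1.0]:   # observed returns for this (s, a) pair\n    N += 1\n    Q = Q + (1.0 / N) * (G - Q)   # running mean of the observed returns\nprint(Q)                          # 0.5, the mean of the four returns\n```\n\nThe same update appears later in `policy_evaluation`, using the per-state-action counter $N(S_t,A_t)$.",
"_____no_output_____"
]
],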
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nimport plotting\nfrom operator import itemgetter\nplotting.set_layout(drawing_size=15)\n",
"_____no_output_____"
]
],
[
[
"## The Environment\n\nFor this example we use the python package [gym](https://gym.openai.com/docs/) which provides a ready-to-use implementation of a BlackJack environment.\n\nThe states are stored in this tuple format: \\n(Agent's score , Dealer's visible score, and whether or not the agent has a usable ace)\n\nHere, we can look at the number of different states:",
"_____no_output_____"
]
],
[
[
"import gym\nenv = gym.make('Blackjack-v0')\nenv.observation_space",
"_____no_output_____"
]
],
[
[
"And the number of actions we can take:",
"_____no_output_____"
]
],
[
[
"env.action_space",
"_____no_output_____"
]
],
[
[
"To start a game call `env.reset()` which will return the obersavtion space",
"_____no_output_____"
]
],
[
[
"env.reset()",
"_____no_output_____"
]
],
[
[
"We can take two different actions: `hit` = 1 or `stay` = 0. \n\nThe result of this function call shows the _obersavtion space_, the reward (winning=+1, loosing =-1) and if the game is over, ",
"_____no_output_____"
]
],
[
[
"env.step(1)",
"_____no_output_____"
]
],
[
[
"## Define the Agent\n\n",
"_____no_output_____"
]
],
[
[
"\nclass agents():\n \"\"\" This class defines the agent \n \"\"\"\n \n def __init__(self, state_space, action_space, ):\n \"\"\" TODO \"\"\" \n \n # Store the discount factor \n self.gamma = 0.7\n # Store the epsilon parameters\n self.epsilon = 1\n \n n_player_states = state_space[0].n\n n_dealer_states = state_space[1].n\n n_usable_ace = state_space[0].n\n \n # two available actions stay (0) and hit (1)\n self.actions = list(range(action_space.n))\n \n # Store the action value function for each state and action\n self.q = np.zeros((n_player_states,n_dealer_states,n_usable_ace, action_space.n))\n \n # incremental counter for a state\n self.N = np.zeros((n_player_states,n_dealer_states,n_usable_ace,action_space.n))\n \n\n \n def greedy_move(self,s, k_episode):\n # given a state return the next move according to epsilon greedy algorithm\n \n # find optimal action a^*\n v_a = []\n for i_a,a in enumerate(self.actions):\n # get value for action state pair\n s2 = 1 if s[2] else 0\n v = self.q[s[0],s[1],s2,a]\n v_a.append((v,a))\n \n # get action with maximal value\n a_max = max(v_a,key=itemgetter(0))[1]\n \n # with probabiliyt 1-eps execute the best action otherwise choose other action\n if np.random.rand() < (1-self.epsilon):\n a = a_max\n else:\n a = int(not a_max)\n \n # decrement epsilon\n self.epsilon = 1/(k_episode)\n \n return a\n \n \n def incre_counter(self, state, action):\n # Increments the counter for a given state and action \n \n # convert the true/false state to 0/1\n s2 = 1 if state[2] else 0\n # increment the counter for that state\n self.N[state[0],state[1],s2,action] += 1\n \n def get_counter(self, state, action):\n # convert the true/false state to 0/1\n s2 = 1 if state[2] else 0\n # increment the counter for that state\n return self.N[state[0],state[1],s2,action]\n \n def policy_evaluation(self,all_states,all_rewards, all_actions):\n # Update V(s) incrementally \n for i_s,s in enumerate(all_states):\n \n # get corresponding action for given state\n a = all_actions[i_s]\n \n # convert the true/false state to 0/1\n s2 = 1 if s[2] else 0\n # Get the value function for that state\n Q_s = self.q[s[0],s[1],s2,a]\n # calculate the total reward\n G = np.sum([agent.gamma**k * r for k,r in enumerate(all_rewards)])\n # Update the value funtion\n self.q[s[0],s[1],s2,a] = Q_s + 1/self.get_counter(s,a) * (G - Q_s)\n \n",
"_____no_output_____"
],
[
"# how many episodes should be played\nn_episodes = 500000\n# initialize the agent. let it know the number of states and actions\nagent = agents(env.observation_space, env.action_space)\n\n# Incremental MC updates\n# Play one episode then update V(s)\nfor i in range(n_episodes):\n all_states = []\n all_rewards = []\n all_actions = []\n \n # start the game\n s = env.reset()\n \n\n # play until environment tells you that the game is over\n game_ended = False\n while not game_ended:\n # increment counter\n \n # choose a movement according to eps-greedy algorithm and update policy\n move = agent.greedy_move(s,i+1)\n \n # use the old state for evaluation\n all_states.append(s)\n # increment the counter for a given state and action\n agent.incre_counter(s,move)\n # move\n s,r,game_ended,_ = env.step(move)\n \n # save everything\n# all_states.append(s)\n all_rewards.append(r)\n all_actions.append(move)\n \n\n # Evaluate policy\n agent.policy_evaluation(all_states,all_rewards,all_actions)\n \n ### END OF EPISODE ###\n\n\n",
"_____no_output_____"
]
],
[
[
"## Plotting",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(10,5))\n\naxes = fig.subplots(1,2,squeeze=False)\n\nax = axes[0,0]\n\n\nc = ax.pcolormesh(agent.q[13:22,1:,0,:].max(2),vmin=-1,vmax=1)\nax.set_yticklabels(range(13,22))\nax.set_xticklabels(range(1,11,2))\nax.set_xlabel('Dealer Showing')\nax.set_ylabel('Player Sum')\nax.set_title('No Usable Aces')\n# plt.colorbar(c)\n\nax = axes[0,1]\nc = ax.pcolormesh(agent.q[13:22,1:,1,:].max(2),vmin=-1,vmax=1)\nax.set_yticklabels(range(13,22))\nax.set_xticklabels(range(1,11,2))\nax.set_title('Usable Aces')\nax.set_xlabel('Dealer Showing')\nplt.colorbar(c)\n\nplt.show()\n\n",
"<ipython-input-40-7f66e47054f4>:9: UserWarning: FixedFormatter should only be used together with FixedLocator\n ax.set_yticklabels(range(13,22))\n<ipython-input-40-7f66e47054f4>:10: UserWarning: FixedFormatter should only be used together with FixedLocator\n ax.set_xticklabels(range(1,11,2))\n<ipython-input-40-7f66e47054f4>:18: UserWarning: FixedFormatter should only be used together with FixedLocator\n ax.set_yticklabels(range(13,22))\n<ipython-input-40-7f66e47054f4>:19: UserWarning: FixedFormatter should only be used together with FixedLocator\n ax.set_xticklabels(range(1,11,2))\n"
],
[
"fig = plt.figure(figsize=(10,5))\n\naxes = fig.subplots(1,2,squeeze=False)\n\nax = axes[0,0]\n\n\nc = ax.contour(agent.q[13:22,1:,0,:].max(2),levels=1,vmin=-1,vmax=1)\nax.set_yticklabels(range(13,22))\nax.set_xticklabels(range(1,11,2))\nax.set_xlabel('Dealer Showing')\nax.set_ylabel('Player Sum')\nax.set_title('No Usable Aces')\n# plt.colorbar(c)\n\nax = axes[0,1]\nc = ax.contour(agent.q[13:22,1:,1,:].max(2),levels=1,vmin=-1,vmax=1)\nax.set_yticklabels(range(13,22))\nax.set_xticklabels(range(1,11,2))\nax.set_title('Usable Aces')\nax.set_xlabel('Dealer Showing')\nplt.colorbar(c)\n\nplt.show()",
"<ipython-input-41-ea3605012f37>:9: UserWarning: FixedFormatter should only be used together with FixedLocator\n ax.set_yticklabels(range(13,22))\n<ipython-input-41-ea3605012f37>:10: UserWarning: FixedFormatter should only be used together with FixedLocator\n ax.set_xticklabels(range(1,11,2))\n<ipython-input-41-ea3605012f37>:18: UserWarning: FixedFormatter should only be used together with FixedLocator\n ax.set_yticklabels(range(13,22))\n<ipython-input-41-ea3605012f37>:19: UserWarning: FixedFormatter should only be used together with FixedLocator\n ax.set_xticklabels(range(1,11,2))\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d053ed8228b81276d5330087aa40eb8b1e00ecf5 | 51,490 | ipynb | Jupyter Notebook | plots/analyze-throughput-battery.ipynb | johanpel/j2a | 3e1f9a98ec16312652fa0d949a14b5cf6e3ed58d | [
"MIT"
] | null | null | null | plots/analyze-throughput-battery.ipynb | johanpel/j2a | 3e1f9a98ec16312652fa0d949a14b5cf6e3ed58d | [
"MIT"
] | null | null | null | plots/analyze-throughput-battery.ipynb | johanpel/j2a | 3e1f9a98ec16312652fa0d949a14b5cf6e3ed58d | [
"MIT"
] | null | null | null | 76.622024 | 30,252 | 0.760322 | [
[
[
"import pandas as pd\nimport numpy as np\nimport glob\nimport os\n\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\ndisplay(HTML(\"<style>div.output_scroll { height: 44em; }</style>\"))",
"_____no_output_____"
],
[
"def get_meta(path):\n \"\"\"Returns (threads, num_jsons, repeats)\"\"\"\n\n props = os.path.splitext(os.path.basename(path))[0].split('_')\n values = [int(x[1:]) for x in props[1:]]\n\n return {'max_value':values[0],\n 'max_num_values':values[1],\n 'threads':values[2],\n 'input_size_approx':values[3],\n 'repeats':values[4]}",
"_____no_output_____"
],
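[
"# Example of the file-name convention that get_meta() expects.\n# The file name below is hypothetical, not one of the actual experiment files:\n# <prefix>_v<max_value>_n<max_num_values>_t<threads>_s<input_size_approx>_r<repeats>.csv\nget_meta('metrics_v100_n16_t8_s1048576_r32.csv')",
"_____no_output_____"
],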
[
"def load(file):\n \"\"\"Load the experiment data from a CSV file with converter metrics.\"\"\"\n \n schema = {\n 'num_threads': np.int64(),\n 'num_jsons_converted': np.int64(),\n 'num_json_bytes_converted': np.int64(),\n 'num_recordbatch_bytes': np.int64(),\n 'num_ipc': np.int64(),\n 'ipc_bytes': np.int64(),\n 'num_buffers_converted': np.int64(),\n 't_parse': np.float64(),\n 't_resize': np.float64(),\n 't_serialize': np.float64(),\n 't_thread': np.float64(),\n 't_enqueue': np.float64(),\n 'status': np.int64()\n }\n \n df = pd.read_csv(file, dtype=schema)\n \n meta = get_meta(file)\n\n for key, value in meta.items(): \n df.insert(0, key, value)\n \n # Make sure there were no errors for converters.\n assert(df['status'].sum() == len(df.index))\n \n return df",
"_____no_output_____"
],
[
"def analyze(df):\n \"\"\"Analyze the experiment data, deriving various metrics such as throughput.\"\"\"\n # Calculate time spent within the thread as 'other'.\n df['t_other'] = df['t_thread'] - df[['t_parse', 't_resize', 't_serialize', 't_enqueue']].sum(axis=1)\n \n # Calculate the throughput per thread\n df['Parse throughput (in)'] = df['num_json_bytes_converted'] / df['t_parse']\n df['Parse throughput (out)'] = df['num_recordbatch_bytes'] / df['t_parse']\n \n return df",
"_____no_output_____"
],
[
"def aggr_counts(digit_counts):\n total = 0\n for n, d in digit_counts:\n total = total + n \n return total\n \ndef avg_number_of_decimals(max_value):\n ''' Return avg number of decimals of uniform random numbers from 0 up to max_value. '''\n digits = 1\n digit_counts = []\n while (pow(10, digits) < max_value):\n nums = pow(10,digits) - aggr_counts(digit_counts)\n digit_counts.append((nums, digits))\n digits = digits + 1\n\n digit_counts.append((max_value - aggr_counts(digit_counts), digits))\n \n avg_num_digits = 0\n for n, d in digit_counts:\n avg_num_digits += n/max_value * d\n \n return avg_num_digits",
"_____no_output_____"
],
[
"def summarize(df):\n \"\"\"Summarize the data from one run into one row with averages.\"\"\"\n \n assert(len(pd.unique(df['max_value'])==1))\n assert(len(pd.unique(df['max_num_values'])==1))\n assert(len(pd.unique(df['threads'])==1))\n assert(len(pd.unique(df['input_size_approx'])==1))\n assert(df['num_threads'].sum()==pd.unique(df['threads'])[0])\n repeats = pd.unique(df['repeats'])[0]\n \n # Avg. value bytes per JSON is the average array size (which is half the max, it is uniform random)\n # times the average number of bytes for uniform random numbers between 0 and max value\n max_value = pd.unique(df['max_value'])[0]\n max_num_values = pd.unique(df['max_num_values'])[0]\n value_bytes = avg_number_of_decimals(max_value) * max_num_values / 2\n \n row = {'Max. value': max_value,\n 'Max. number of values': max_num_values,\n 'Value bytes': value_bytes,\n 'Input size': pd.unique(df['input_size_approx'])[0],\n 'Repeats': pd.unique(df['repeats'])[0],\n 'Threads': df['num_threads'].sum(),\n 'JSONs': df['num_jsons_converted'].sum() / repeats,\n 'Bytes (in)': df['num_json_bytes_converted'].sum() / repeats,\n 'RecordBatch bytes': df['num_recordbatch_bytes'].sum() / repeats,\n 'IPC messages': df['num_ipc'].sum() / repeats,\n 'IPC bytes': df['ipc_bytes'].sum() / repeats,\n 'Buffers converted': df['num_buffers_converted'].sum() / repeats,\n # For time, we use the max time of all threads, \n # since the throughput is determined by the slowest thread in the pool,\n # and they all start operating simultaneously\n 'Parse time': df['t_parse'].max(),\n 'Resize time': df['t_resize'].max(),\n 'Serialize time': df['t_serialize'].max(),\n 'Enqueue time': df['t_enqueue'].max(),\n 'Other time': df['t_other'].max(),\n 'Thread time': df['t_thread'].max(),\n 'Parse throughput (in)': df['num_json_bytes_converted'].sum() / df['t_parse'].max(),\n 'Parse throughput (out)': df['num_recordbatch_bytes'].sum() / df['t_parse'].max()}\n \n return row;",
"_____no_output_____"
],
[
"def get_all_data(data_path, schema, impl):\n path = '{}/{}/latency/threads/metrics/{}/'.format(data_path, schema, impl.lower())\n csv_files = []\n for file in glob.glob(\"{}*.csv\".format(path)):\n csv_files.append(file)\n print(\"Found {} files in {}\".format(len(csv_files), path))\n\n records = []\n for file in csv_files:\n records.append(summarize(analyze(load(file))))\n\n\n df = pd.DataFrame.from_records(records)\n df.sort_values(by=['Threads', 'JSONs'], inplace=True)\n df.insert(0,'Implementation', impl)\n \n # Use only max value\n df = df[df['Max. value'] == 18446744073709551615]\n display(pd.unique(df['Max. number of values']))\n \n # Print max throughput\n display('{} max: {}'.format(impl, df['Parse throughput (in)'].max() * 1e-9))\n # Print mean throughput of highest throughput per input size\n display('{} mean: {}'.format(impl, df.groupby(['Implementation', 'Input size']).agg({'Parse throughput (in)': 'max'})['Parse throughput (in)'].mean() * 1e-9))\n \n return df",
"_____no_output_____"
],
[
"def get_max_throughput_for_max_size(df):\n df = df[df.JSONs == df.JSONs.max()]\n #df.set_index('Threads', inplace=True)\n\n result = df[df['Parse throughput (in)'] == df['Parse throughput (in)'].max()]\n\n return result",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nfrom utils import lighten_color\n\nplt.rcParams.update({\n \"text.usetex\": True,\n \"font.family\": \"serif\",\n \"font.serif\": [\"Palatino\"],\n \"font.size\": 14\n})\n\ncolors = ['#4878d0', '#6acc64', '#d65f5f', '#d5bb67', '#dc7ec0', '#8c613c']\nmarkers = ['o', 's', 'd']",
"_____no_output_____"
],
[
"d_impls = []\n\nd_impls.append(get_all_data('../experiments/data-p9-battery', 'battery', 'Arrow'))\nd_impls.append(get_all_data('../experiments/data-p9-battery', 'battery', 'Custom'))\nd_impls.append(get_all_data('../experiments/data-p9-battery', 'battery', 'FPGA'))\n#d_impls.append(get_all_data('../experiments/data-intel-battery', 'battery', 'Arrow'))\n#d_impls.append(get_all_data('../experiments/data-intel-battery', 'battery', 'Custom'))\n#d_impls.append(get_all_data('../experiments/data-intel-battery', 'battery', 'FPGA'))\n\ndf = pd.concat(d_impls)\n\n#with pd.option_context('display.max_rows', None, 'display.max_columns', None): \n\n# Average the throughput of various number of max. array sizes\ndf = df.groupby(['Implementation', 'Threads', 'Input size']).agg({'Parse throughput (in)': 'mean'})\ndf = df.reset_index()\ndisplay(df)\n\nmax_tp = df['Parse throughput (in)'].max()\n\n# Get all dimensions for plots\n#max_values = pd.unique(df['Max. value'])\n#max_num_values = pd.unique(df['Max. number of values'])\n#value_bytes = np.sort(pd.unique(df['Value bytes']))\ninput_sizes = np.sort(pd.unique(df['Input size']))\nthreads = np.sort(pd.unique(df['Threads']))\nimpls = pd.unique(df['Implementation'])\n\n#print(\"Value bytes :\", value_bytes)\nprint(\"Input sizes :\", input_sizes)\nprint(\"Threads :\", threads)\nprint(\"Impls :\", impls)",
"Found 176 files in ../experiments/data-p9-battery/battery/latency/threads/metrics/arrow/\n"
],
[
"fig, axs = plt.subplots(ncols=len(input_sizes), figsize=[10, 3], sharey=True, sharex=True)\n\nhandles = {}\n\nfor xa, inps in enumerate(input_sizes):\n ax = axs[xa]\n\n for i, impl in enumerate(impls):\n # Prepare plotting data\n dl = df[(df['Input size'] == inps) & (df['Implementation'] == impl)]\n y = dl['Parse throughput (in)'] * 1e-9\n x = dl['Threads']\n\n # Plot FPGA data\n handles[impl], = ax.plot(x, y, c=lighten_color(colors[i],0.3), marker=markers[i], mfc=colors[i], mec=colors[i], linewidth=3)\n\n if impl == 'FPGA':\n handles['FPGA max.'] = ax.axhline(y=max(y.to_numpy()), color=lighten_color(colors[i],0.7), ls='--')\n\n\n\n # Set inline \n ax.annotate(\"Input size:{:.0f} MiB\".format(inps / (1<<20)), \n xycoords='axes fraction', \n xy=(0.05, 0.875), \n fontsize=12,\n backgroundcolor='#FFFFFF80')\n\n ax.set_xticks(threads)\n ax.set_xticklabels(threads, rotation=0, fontsize=8)\n\n ax.set_yticks(range(0, 25,2))\n ax.set_ylim(0, 1.25*max_tp * 1e-9)\n\n ax.grid(which='both')\n\n if xa == 0:\n ax.set_xlabel('Threads / Parser instances')\n ax.set_ylabel('Throughput (GB/s)')\n \nleg_handles = [v for k,v in handles.items()]\nleg_labels = [k for k,v in handles.items()]\nfig.legend(leg_handles, leg_labels, ncol=4, bbox_to_anchor=(-0.17, 0.93, 1.0, 0.1), frameon=False)\nplt.subplots_adjust(hspace = .1, wspace = .075, bottom=0.15)\n\nfig.savefig(\"throughput-battery-p9.pdf\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d053f39da9d1c0ab402daedb9f8256ee39fe8f0b | 7,835 | ipynb | Jupyter Notebook | 4.AML-Functions-notebook/AML-AzureFunctionsPackager.ipynb | dahatake/Azure-Machine-Learning-sample | 4cb093dbffa403df638f6ae186479cc0ea932262 | [
"MIT"
] | 3 | 2020-09-10T08:29:33.000Z | 2021-06-28T06:35:13.000Z | 4.AML-Functions-notebook/AML-AzureFunctionsPackager.ipynb | dahatake/Azure-Machine-Learning-sample | 4cb093dbffa403df638f6ae186479cc0ea932262 | [
"MIT"
] | null | null | null | 4.AML-Functions-notebook/AML-AzureFunctionsPackager.ipynb | dahatake/Azure-Machine-Learning-sample | 4cb093dbffa403df638f6ae186479cc0ea932262 | [
"MIT"
] | 2 | 2020-07-14T02:59:41.000Z | 2021-09-18T06:27:45.000Z | 29.126394 | 278 | 0.464582 | [
[
[
"# Azure Functions での展開用に Auto MLで作成したファイル群を Container 化する\n\n参考:\nAzure Functions に機械学習モデルをデプロイする (プレビュー)\nhttps://docs.microsoft.com/ja-jp/azure/machine-learning/how-to-deploy-functions",
"_____no_output_____"
]
],
[
[
"#!pip install azureml-contrib-functions",
"_____no_output_____"
]
],
[
[
"# Azure Machine Learnig ワークスペースへの接続",
"_____no_output_____"
]
],
[
[
"from azureml.core import Workspace, Dataset\n\nsubscription_id = '<your azure subscription id>'\nresource_group = '<your resource group>'\nworkspace_name = '<your azure machine learning workspace name>'\n\nws = Workspace(subscription_id, resource_group, workspace_name)",
"_____no_output_____"
],
[
"modelfilespath = 'AutoML1bb3ebb0477'",
"_____no_output_____"
]
],
[
[
"# モデルの登録",
"_____no_output_____"
]
],
[
[
"import os\nfrom azureml.core.model import Model\n\n# Register model\nmodel = Model.register(workspace = ws,\n model_path = modelfilespath + '/model.pkl',\n model_name = 'bankmarketing',\n tags = {'automl': 'use generated file'},\n description = 'AutoML generated model for Bank Marketing')",
"Registering model bankmarketing\n"
]
],
[
[
"# 推論環境定義",
"_____no_output_____"
]
],
[
[
"from azureml.core.environment import Environment\nmyenv = Environment.from_conda_specification(name = 'myenv',\n file_path = modelfilespath + '/conda_env_v_1_0_0.yml')\nmyenv.register(workspace=ws)",
"_____no_output_____"
]
],
[
[
"# 推論環境設定",
"_____no_output_____"
]
],
[
[
"from azureml.core.model import InferenceConfig\n\nmyenv = Environment.get(workspace=ws, name='myenv', version='1')\ninference_config = InferenceConfig(entry_script= modelfilespath + '/scoring_file_v_1_0_0.py',\n environment=myenv)",
"_____no_output_____"
]
],
[
[
"# Azure Functions 用 イメージ作成\n\nHTTP Trigger 用:\n\nhttps://docs.microsoft.com/ja-jp/python/api/azureml-contrib-functions/azureml.contrib.functions?view=azure-ml-py#package-http-workspace--models--inference-config--generate-dockerfile-false--auth-level-none-",
"_____no_output_____"
]
],
[
[
"from azureml.contrib.functions import package_http\n\nhttptrigger = package_http(ws, [model], inference_config, generate_dockerfile=True, auth_level=None)\nhttptrigger.wait_for_creation(show_output=True)\n# Display the package location/ACR path\nprint(httptrigger.location)",
"Package creation Succeeded\nhttps://dahatakeml5466187599.blob.core.windows.net/azureml/LocalUpload/d81db5dd-82ae-41fd-a56c-89010d382c36/build_context_manifest.json?sv=2019-02-02&sr=b&sig=ktxPIr5t%2F00E4lxDUQ4OjfiTxn00Yo0VfABY3BbQ4gQ%3D&st=2020-09-10T07%3A39%3A46Z&se=2020-09-10T15%3A49%3A46Z&sp=r\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d054112ca89e8db71d928c16a513782339cc5c27 | 24,654 | ipynb | Jupyter Notebook | _build/jupyter_execute/notebooks/01/Pop-Up-Shop.ipynb | jckantor/MO-book | f6ead8dc06327ec5cbb7065ead8a6df0631c05fd | [
"MIT"
] | 1 | 2022-02-03T22:07:45.000Z | 2022-02-03T22:07:45.000Z | _build/jupyter_execute/notebooks/01/Pop-Up-Shop.ipynb | jckantor/MO-book | f6ead8dc06327ec5cbb7065ead8a6df0631c05fd | [
"MIT"
] | 20 | 2022-02-11T09:50:30.000Z | 2022-03-31T22:52:48.000Z | _build/jupyter_execute/notebooks/01/Pop-Up-Shop.ipynb | jckantor/MO-book | f6ead8dc06327ec5cbb7065ead8a6df0631c05fd | [
"MIT"
] | 4 | 2022-02-06T02:08:25.000Z | 2022-03-28T11:56:53.000Z | 32.828229 | 399 | 0.479882 | [
[
[
"# Scenario Analysis: Pop Up Shop\n\n\n\nKürschner (talk) 17:51, 1 December 2020 (UTC), CC0, via Wikimedia Commons",
"_____no_output_____"
]
],
[
[
"# install Pyomo and solvers for Google Colab\nimport sys\nif \"google.colab\" in sys.modules:\n !wget -N -q https://raw.githubusercontent.com/jckantor/MO-book/main/tools/install_on_colab.py \n %run install_on_colab.py",
"_____no_output_____"
]
],
[
[
"## The problem\n\nThere is an opportunity to operate a pop-up shop to sell a unique commemorative item for events held at a famous location. The items cost 12 € each and will selL for 40 €. Unsold items can be returned to the supplier at a value of only 2 € due to their commemorative nature.\n\n| Parameter | Symbol | Value |\n| :---: | :---: | :---: |\n| sales price | $r$ | 40 € |\n| unit cost | $c$ | 12 € |\n| salvage value | $w$ | 2 € |\n\nProfit will increase with sales. Demand for these items, however, will be high only if the weather is good. Historical data suggests the following scenarios.",
"_____no_output_____"
],
[
"| Scenario ($s$) | Demand ($d_s$) | Probability ($p_s$) |\n| :---: | :-----: | :----------: |\n| Sunny Skies | 650 | 0.10 |\n| Good Weather | 400 | 0.60 |\n| Poor Weather | 200 | 0.30 |\n\nThe problem is to determine how many items to order for the pop-up shop. \n\nThe dilemma is that the weather won't be known until after the order is placed. Ordering enough items to meet demand for a good weather day results in a financial penalty on returned goods if the weather is poor. But ordering just enough items to satisfy demand on a poor weather day leaves \"money on the table\" if the weather is good.\n\nHow many items should be ordered for sale?",
"_____no_output_____"
],
[
"## Expected value for the mean scenario (EVM)\n \nA naive solution to this problem is to place an order equal to the expected demand. The expected demand is given by\n\n$$\n\\begin{align*}\n\\mathbb E[D] & = \\sum_{s\\in S} p_s d_s \n\\end{align*}\n$$\n\nChoosing an order size $x = \\mathbb E[d]$ results in an expected profit we call the **expected value of the mean scenario (EVM)**. \n\nVariable $y_s$ is the actual number of items sold if scenario $s$ should occur. The number sold is the lesser of the demand $d_s$ and the order size $x$.\n\n$$\n\\begin{align*}\ny_s & = \\min(d_s, x) & \\forall s \\in S\n\\end{align*}\n$$\n\nAny unsold inventory $x - y_s$ remaining after the event will be sold at the salvage price $w$. Taking into account the revenue from sales $r y_s$, the salvage value of the unsold inventory $w(x - y_s)$, and the cost of the order $c x$, the profit $f_s$ for scenario $s$ is given by\n\n$$\n\\begin{align*}\nf_s & = r y_s + w (x - y_s) - c x & \\forall s \\in S\n\\end{align*}\n$$\n\nThe average or expected profit is given by\n\n$$\n\\begin{align*}\n\\text{EVM} = \\mathbb E[f] & = \\sum_{s\\in S} p_s f_s\n\\end{align*}\n$$\n\nThese calculations can be executed using operations on the pandas dataframe. Let's begin by calculating the expected demand.\n\nBelow we create a pandas DataFrame object to store the scenario data.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\n\n# price information\nr = 40\nc = 12\nw = 2\n\n# scenario information\nscenarios = {\n \"sunny skies\" : {\"probability\": 0.10, \"demand\": 650},\n \"good weather\": {\"probability\": 0.60, \"demand\": 400},\n \"poor weather\": {\"probability\": 0.30, \"demand\": 200},\n}\n\ndf = pd.DataFrame.from_dict(scenarios).T\ndisplay(df)",
"_____no_output_____"
],
[
"expected_demand = sum(df[\"probability\"] * df[\"demand\"])\nprint(f\"Expected demand = {expected_demand}\")",
"Expected demand = 365.0\n"
]
],
[
[
"Subsequent calculations can be done directly withthe pandas dataframe holding the scenario data.",
"_____no_output_____"
]
],
[
[
"df[\"order\"] = expected_demand\ndf[\"sold\"] = df[[\"demand\", \"order\"]].min(axis=1)\ndf[\"salvage\"] = df[\"order\"] - df[\"sold\"]\ndf[\"profit\"] = r * df[\"sold\"] + w * df[\"salvage\"] - c * df[\"order\"]\n\nEVM = sum(df[\"probability\"] * df[\"profit\"])\n\nprint(f\"Mean demand = {expected_demand}\")\nprint(f\"Expected value of the mean demand (EVM) = {EVM}\")\ndisplay(df)",
"Mean demand = 365.0\nExpected value of the mean demand (EVM) = 8339.0\n"
]
],
[
[
"## Expected value of the stochastic solution (EVSS)\n\nThe optimization problem is to find the order size $x$ that maximizes expected profit subject to operational constraints on the decision variables. The variables $x$ and $y_s$ are non-negative integers, while $f_s$ is a real number that can take either positive and negative values. The number of goods sold in scenario $s$ has to be less than the order size $x$ and customer demand $d_s$. \n\nThe problem to be solved is\n\n$$\n\\begin{align*}\n\\text{EV} = & \\max_{x, y_s} \\mathbb E[F] = \\sum_{s\\in S} p_s f_s \\\\\n\\text{subject to:} \\\\\nf_s & = r y_s + w(x - y_s) - c x & \\forall s \\in S\\\\\ny_s & \\leq x & \\forall s \\in S \\\\\ny_s & \\leq d_s & \\forall s \\in S\n\\end{align*}\n$$\n\nwhere $S$ is the set of all scenarios under consideration.",
"_____no_output_____"
]
],
[
[
"import pyomo.environ as pyo\nimport pandas as pd\n\n# price information\nr = 40\nc = 12\nw = 2 \n\n# scenario information\nscenarios = {\n \"sunny skies\" : {\"demand\": 650, \"probability\": 0.1},\n \"good weather\": {\"demand\": 400, \"probability\": 0.6},\n \"poor weather\": {\"demand\": 200, \"probability\": 0.3},\n}\n\n# create model instance\nm = pyo.ConcreteModel('Pop-up Shop')\n\n# set of scenarios\nm.S = pyo.Set(initialize=scenarios.keys())\n\n# decision variables\nm.x = pyo.Var(domain=pyo.NonNegativeIntegers)\nm.y = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)\nm.f = pyo.Var(m.S, domain=pyo.Reals)\n\n# objective\[email protected](sense=pyo.maximize)\ndef EV(m):\n return sum([scenarios[s][\"probability\"]*m.f[s] for s in m.S])\n\n# constraints\[email protected](m.S)\ndef profit(m, s):\n return m.f[s] == r*m.y[s] + w*(m.x - m.y[s]) - c*m.x\n\[email protected](m.S)\ndef sales_less_than_order(m, s):\n return m.y[s] <= m.x\n\[email protected](m.S)\ndef sales_less_than_demand(m, s):\n return m.y[s] <= scenarios[s][\"demand\"]\n\n# solve\nsolver = pyo.SolverFactory('glpk')\nresults = solver.solve(m)\n\n# display solution using Pandas\nprint(\"Solver Termination Condition:\", results.solver.termination_condition)\nprint(\"Expected Profit:\", m.EV())\nprint()\nfor s in m.S:\n scenarios[s][\"order\"] = m.x()\n scenarios[s][\"sold\"] = m.y[s]()\n scenarios[s][\"salvage\"] = m.x() - m.y[s]()\n scenarios[s][\"profit\"] = m.f[s]()\n \ndf = pd.DataFrame.from_dict(scenarios).T\ndisplay(df)",
"Solver Termination Condition: optimal\nExpected Profit: 8920.0\n\n"
]
],
[
[
"Optimizing over all scenarios provides an expected profit of 8,920 €, an increase of 581 € over the base case of simply ordering the expected number of items sold. The new solution places a larger order. In poor weather conditions there will be more returns and lower profit that is more than compensated by the increased profits in good weather conditions. \n\nThe addtional value that results from solve of this planning problem is called the **Value of the Stochastic Solution (VSS)**. The value of the stochastic solution is the additional profit compared to ordering to meet expected in demand. In this case,\n\n$$\\text{VSS} = \\text{EV} - \\text{EVM} = 8,920 - 8,339 = 581$$",
"_____no_output_____"
],
[
"## Expected value with perfect information (EVPI)\n\nMaximizing expected profit requires the size of the order be decided before knowing what scenario will unfold. The decision for $x$ has to be made \"here and now\" with probablistic information about the future, but without specific information on which future will actually transpire.\n\nNevertheless, we can perform the hypothetical calculation of what profit would be realized if we could know the future. We are still subject to the variability of weather, what is different is we know what the weather will be at the time the order is placed. \n\nThe resulting value for the expected profit is called the **Expected Value of Perfect Information (EVPI)**. The difference EVPI - EV is the extra profit due to having perfect knowledge of the future.\n\nTo compute the expected profit with perfect information, we let the order variable $x$ be indexed by the subsequent scenario that will unfold. Given decision varaible $x_s$, the model for EVPI becomes\n\n$$\n\\begin{align*}\n\\text{EVPI} = & \\max_{x_s, y_s} \\mathbb E[f] = \\sum_{s\\in S} p_s f_s \\\\\n\\text{subject to:} \\\\\nf_s & = r y_s + w(x_s - y_s) - c x_s & \\forall s \\in S\\\\\ny_s & \\leq x_s & \\forall s \\in S \\\\\ny_s & \\leq d_s & \\forall s \\in S\n\\end{align*}\n$$\n\nThe following implementation is a variation of the prior cell.",
"_____no_output_____"
]
],
[
[
"import pyomo.environ as pyo\nimport pandas as pd\n\n# price information\nr = 40\nc = 12\nw = 2 \n\n# scenario information\nscenarios = {\n \"sunny skies\" : {\"demand\": 650, \"probability\": 0.1},\n \"good weather\": {\"demand\": 400, \"probability\": 0.6},\n \"poor weather\": {\"demand\": 200, \"probability\": 0.3},\n}\n\n# create model instance\nm = pyo.ConcreteModel('Pop-up Shop')\n\n# set of scenarios\nm.S = pyo.Set(initialize=scenarios.keys())\n\n# decision variables\nm.x = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)\nm.y = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)\nm.f = pyo.Var(m.S, domain=pyo.Reals)\n\n# objective\[email protected](sense=pyo.maximize)\ndef EV(m):\n return sum([scenarios[s][\"probability\"]*m.f[s] for s in m.S])\n\n# constraints\[email protected](m.S)\ndef profit(m, s):\n return m.f[s] == r*m.y[s] + w*(m.x[s] - m.y[s]) - c*m.x[s]\n\[email protected](m.S)\ndef sales_less_than_order(m, s):\n return m.y[s] <= m.x[s]\n\[email protected](m.S)\ndef sales_less_than_demand(m, s):\n return m.y[s] <= scenarios[s][\"demand\"]\n\n# solve\nsolver = pyo.SolverFactory('glpk')\nresults = solver.solve(m)\n\n# display solution using Pandas\nprint(\"Solver Termination Condition:\", results.solver.termination_condition)\nprint(\"Expected Profit:\", m.EV())\nprint()\nfor s in m.S:\n scenarios[s][\"order\"] = m.x[s]()\n scenarios[s][\"sold\"] = m.y[s]()\n scenarios[s][\"salvage\"] = m.x[s]() - m.y[s]()\n scenarios[s][\"profit\"] = m.f[s]()\n \ndf = pd.DataFrame.from_dict(scenarios).T\ndisplay(df)",
"Solver Termination Condition: optimal\nExpected Profit: 10220.0\n\n"
]
],
[
[
"## Summary\n\nTo summarize, have computed three different solutions to the problem of order size:\n\n* The expected value of the mean solution (EVM) is the expected profit resulting from ordering the number of items expected to sold under all scenarios. \n\n* The expected value of the stochastic solution (EVSS) is the expected profit found by solving an two-state optimization problem where the order size was the \"here and now\" decision without specific knowledge of which future scenario would transpire.\n\n* The expected value of perfect information (EVPI) is the result of a hypotherical case where knowledge of the future scenario was somehow available when then order had to be placed. \n\nFor this example we found\n\n| Solution | Value (€) |\n| :------ | ----: |\n| Expected Value of the Mean Solution (EVM) | 8,399.0 | \n| Expected Value of the Stochastic Solution (EVSS) | 8,920.0 |\n| Expected Value of Perfect Information (EVPI) | 10,220.0 |\n\nThese results verify our expectation that\n\n$$\n\\begin{align*}\nEVM \\leq EVSS \\leq EVPI\n\\end{align*}\n$$\n\nThe value of the stochastic solution \n\n$$\n\\begin{align*}\nVSS = EVSS - EVM = 581\n\\end{align*}\n$$\n\nThe value of perfect information\n\n$$\n\\begin{align*}\nVPI = EVPI - EVSS = 1,300\n\\end{align*}\n$$\n\n\nAs one might expect, there is a cost that results from lack of knowledge about an uncertain future.",
"_____no_output_____"
]
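,
[
"A quick arithmetic check of the two differences (the values are copied from the solver output above, not recomputed from the models):\n\n```python\nEVM, EVSS, EVPI = 8339.0, 8920.0, 10220.0\nVSS = EVSS - EVM    # value of the stochastic solution\nVPI = EVPI - EVSS   # value of perfect information\nprint(VSS, VPI)     # 581.0 1300.0\n```",
"_____no_output_____"
]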
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d05424aa1ce6ce807c3cf33d176ce512dfa5e1c4 | 14,373 | ipynb | Jupyter Notebook | tutorial_notebooks/Tutorial_01.ipynb | sandialabs/tracktable-docs | 308aa6979da774293249f4445c6ce79d8ac01f5d | [
"Unlicense"
] | null | null | null | tutorial_notebooks/Tutorial_01.ipynb | sandialabs/tracktable-docs | 308aa6979da774293249f4445c6ce79d8ac01f5d | [
"Unlicense"
] | null | null | null | tutorial_notebooks/Tutorial_01.ipynb | sandialabs/tracktable-docs | 308aa6979da774293249f4445c6ce79d8ac01f5d | [
"Unlicense"
] | null | null | null | 33.348028 | 800 | 0.601614 | [
[
[
"<span style=\"color:#888888\">Copyright (c) 2014-2021 National Technology and Engineering Solutions of Sandia, LLC. Under the terms of Contract DE-NA0003525 with National Technology and Engineering Solutions of Sandia, LLC, the U.S. Government retains certain rights in this software. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:</span>\n\n<span style=\"color:#888888\">1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.</span>\n\n<span style=\"color:#888888\">2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.</span>\n\n<span style=\"color:#888888\">THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.</span>",
"_____no_output_____"
],
[
"# <span style=\"color:#0054a8\">**Tutorial 1:**</span> <span style=\"color:#555555\">How to Create Trajectory Points from a Deliminated File</span>",
"_____no_output_____"
],
[
"## Purpose\n\nThis notebook demonstrates how to create Tracktable Trajectory Point objects from a deliminated (e.g. csv, tsv, etc.) data file. A data file must contain the following columns in order to be compatible with Tracktable:\n\n* **<span style=\"color:#00add0\">an identifier</span>** that is unique to each object\n* **<span style=\"color:#00add0\">a timestamp</span>**\n* **<span style=\"color:#00add0\">longitude</span>**\n* **<span style=\"color:#00add0\">latitude</span>**\n\nBoth ordering and headers for these columns can vary, but they must exist in the file. Each row of the data file should represent the information for a single trajectory point. \n\n**<span style=\"color:#81062e\">IMPORTANT:</span>** Deliminated files must be **sorted by timestamp** to be compatible with Tracktable.\n\n*Note:* This notebook does not cover how to create a Trajectory object (as opposed to a list of Trajectory point objects). Please see [Tutorial 2](Tutorial_02.ipynb) for an example of how to create Trajectory objects from a csv file containing trajectory point information.",
"_____no_output_____"
],
[
"## Step 1: Identify your CSV/TSV File\n\nWe will use the provided example data $^1$ for this tutorial. If you are using another filename, `data_filename` should be set to the string containing the path to your csv file.",
"_____no_output_____"
]
],
[
[
"from tracktable.core import data_directory\nimport os.path\n\ndata_filename = os.path.join(data_directory(), 'NYHarbor_2020_06_30_first_hour.csv')",
"_____no_output_____"
]
],
[
[
"## Step 2: Create a TrajectoryPointReader object.",
"_____no_output_____"
],
[
"We will create a Terrestrial point reader, which will expect **(longitude, latitude)** coordinates. Alternatively, if our data points were in a Cartesian coordinate system, we would import the `TrajectoryPointReader` object from `tracktable.domain.cartesian2d` or `tracktable.domain.cartesian3d`.",
"_____no_output_____"
]
],
[
[
"from tracktable.domain.terrestrial import TrajectoryPointReader\n\nreader = TrajectoryPointReader()",
"_____no_output_____"
]
],
[
[
"## Step 3: Give the TrajectoryPointReader object info about the file.",
"_____no_output_____"
],
[
"Have the reader open an input stream to the data file.",
"_____no_output_____"
]
],
[
[
"reader.input = open(data_filename, 'r')",
"_____no_output_____"
]
],
[
[
"### <span style=\"color:#0f0f0f\">*Additional Settings*</span>",
"_____no_output_____"
],
[
"Identify the comment character for the data file. Any lines with this as the first non-whitespace character will be ignored. This is optional and defaulted to `#`.",
"_____no_output_____"
]
],
[
[
"reader.comment_character = '#'",
"_____no_output_____"
]
],
[
[
"Identify the file's delimiter. For comma-separated (CSV) files, the delimiter should be set to `,`. For tab-separated files, this should be `\\t`. This is optional, and the default value is `,`.",
"_____no_output_____"
]
],
[
[
"reader.field_delimiter = ','",
"_____no_output_____"
]
],
[
[
"Identify the string associated with a null value in a cell. This is optional and defaulted to an empty string.",
"_____no_output_____"
]
],
[
[
"reader.null_value = 'NaN'",
"_____no_output_____"
]
],
[
[
"### <span style=\"color:#0f0f0f\">*Required Columns*</span>",
"_____no_output_____"
],
[
"We must tell the reader where to find the **<span style=\"color:#00add0\">unique object ID</span>**, **<span style=\"color:#00add0\">timestamp</span>**, **<span style=\"color:#00add0\">longitude</span>** and **<span style=\"color:#00add0\">latitude</span>** columns. Column numbering starts at zero.\n\nIf no column numbers are given, the reader will assume they are in the order listed above. Note that terrestrial points are stored as (longitude, latitude) in Tracktable.",
"_____no_output_____"
]
],
[
[
"reader.object_id_column = 3\nreader.timestamp_column = 0\nreader.coordinates[0] = 1 # longitude\nreader.coordinates[1] = 2 # latitude",
"_____no_output_____"
]
],
[
[
"### <span style=\"color:#0f0f0f\">*Optional Columns*</span>",
"_____no_output_____"
],
[
"Your data file may contain additional information (e.g. speed, heading, altitude, etc.) that you wish to store with your trajectory points. These can be stored as either floats, strings or datetime objects. An example of each is shown below, respectively.",
"_____no_output_____"
]
],
[
[
"reader.set_real_field_column('heading', 6)\nreader.set_string_field_column('vessel-name', 7)\nreader.set_time_field_column('eta', 17)",
"_____no_output_____"
]
],
[
[
"## Step 4: Convert the Reader to a List of Trajectory Points",
"_____no_output_____"
]
],
[
[
"trajectory_points = list(reader)",
"_____no_output_____"
]
],
[
[
"How many trajectory points do we have?",
"_____no_output_____"
]
],
[
[
"len(trajectory_points)",
"_____no_output_____"
]
],
[
[
"## Step 5: Accessing Trajectory Point Info",
"_____no_output_____"
],
[
"The information from the required columns of the csv can be accessed for a single `trajectory_point` object as\n\n* **<span style=\"color:#00add0\">unique object identifier:</span>** `trajectory_point.object_id`\n* **<span style=\"color:#00add0\">timestamp:</span>** `trajectory_point.timestamp`\n* **<span style=\"color:#00add0\">longitude:</span>** `trajectory_point[0]`\n* **<span style=\"color:#00add0\">latitude:</span>** `trajectory_point[1]`\n\nThe optional column information is available through the member variable `properties` as follows: `trajectory_point.properties['what-you-named-it']`.\n\nThis is demonstrated below for our first ten trajectory points.",
"_____no_output_____"
]
],
[
[
"for traj_point in trajectory_points[:10]:\n object_id = traj_point.object_id\n timestamp = traj_point.timestamp\n longitude = traj_point[0]\n latitude = traj_point[1]\n heading = traj_point.properties[\"heading\"]\n vessel_name = traj_point.properties[\"vessel-name\"]\n eta = traj_point.properties[\"eta\"]\n \n print(f'Unique ID: {object_id}')\n print(f'Timestamp: {timestamp}')\n print(f'Longitude: {longitude}')\n print(f'Latitude: {latitude}')\n print(f'Heading: {heading}')\n print(f'Vessel Name: {vessel_name}')\n print(f'ETA: {eta}\\n')",
"Unique ID: 367000140\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -74.07157\nLatitude: 40.64409\nHeading: 246.0\nVessel Name: SAMUEL I NEWHOUSE\nETA: 2020-06-30 12:01:00+00:00\n\nUnique ID: 366999618\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -74.02433\nLatitude: 40.54291\nHeading: 349.0\nVessel Name: CG SHRIKE\nETA: 2020-06-30 19:40:00+00:00\n\nUnique ID: 367776270\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -73.97656\nLatitude: 40.70324\nHeading: 290.0\nVessel Name: H200\nETA: 2020-06-30 20:04:00+00:00\n\nUnique ID: 367022550\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -74.07281\nLatitude: 40.63668\nHeading: 511.0\nVessel Name: SAMANTHA MILLER\nETA: 2020-06-30 08:10:00+00:00\n\nUnique ID: 367515850\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -74.11926\nLatitude: 40.64217\nHeading: 163.0\nVessel Name: DISCOVERY COAST\nETA: 2020-06-30 09:53:00+00:00\n\nUnique ID: 367531640\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -74.07176\nLatitude: 40.62947\nHeading: 511.0\nVessel Name: FDNY M9B\nETA: 2020-06-30 13:45:00+00:00\n\nUnique ID: 338531000\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -74.05089\nLatitude: 40.64413\nHeading: 96.0\nVessel Name: GENESIS VIGILANT\nETA: 2020-06-30 09:15:00+00:00\n\nUnique ID: 366516370\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -74.14805\nLatitude: 40.64346\nHeading: 302.0\nVessel Name: STEPHEN REINAUER\nETA: 2020-06-30 04:51:00+00:00\n\nUnique ID: 367779550\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -74.00551\nLatitude: 40.70308\nHeading: 234.0\nVessel Name: SUNSET CROSSING\nETA: 2020-06-30 06:36:00+00:00\n\nUnique ID: 367797260\nTimestamp: 2020-06-30 00:00:00+00:00\nLongitude: -73.9741\nLatitude: 40.70235\nHeading: 51.0\nVessel Name: H208\nETA: 2020-06-30 05:39:00+00:00\n\n"
]
],
[
[
"<span style=\"color:gray\">$^1$ Bureau of Ocean Energy Management (BOEM) and National Oceanic and Atmospheric Administration (NOAA). MarineCadastre.gov. *AIS Data for 2020.* Retrieved February 2021 from [marinecadastre.gov/data](https://marinecadastre.gov/data/). Trimmed down to the first hour of June 30, 2020, restricted to in NY Harbor.</span>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0542ffac676c78f25479c7cae9fa055eaa1fbdc | 91,327 | ipynb | Jupyter Notebook | examples/tutorial/optimize_voigt.ipynb | ykawashima/exojax | 67d1b6c868d69892d4bbf9e620ed05e432cfe61f | [
"MIT"
] | null | null | null | examples/tutorial/optimize_voigt.ipynb | ykawashima/exojax | 67d1b6c868d69892d4bbf9e620ed05e432cfe61f | [
"MIT"
] | null | null | null | examples/tutorial/optimize_voigt.ipynb | ykawashima/exojax | 67d1b6c868d69892d4bbf9e620ed05e432cfe61f | [
"MIT"
] | null | null | null | 224.390663 | 19,756 | 0.922389 | [
[
[
"# Optimization of a Voigt profile",
"_____no_output_____"
]
],
[
[
"from exojax.spec.rlpf import rvoigt\nimport jax.numpy as jnp\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"Let's optimize the Voigt function $V(\\nu, \\beta, \\gamma_L)$ using exojax!\n$V(\\nu, \\beta, \\gamma_L)$ is a convolution of a Gaussian with a STD of $\\beta$ and a Lorentian with a gamma parameter of $\\gamma_L$. \n\nNote that we use spec.rlpf.rvoigt instead of spec.voigt. This function is a voigt profile with VJP while voigt is JVP defined one. For some reason, we do not use rvoigt as a default function of the voigt profile. But in future, we plan to replace the VJP version as a default one. \n",
"_____no_output_____"
]
],
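[
[
"A minimal differentiability check (an illustration, not part of the original notebook): because `rvoigt` is VJP-defined, we can take reverse-mode gradients of a scalar function of it, for example with respect to $\\beta$:\n\n```python\nfrom jax import grad\n\nnu_test = jnp.linspace(-10, 10, 100)\n# gradient of the summed profile with respect to beta, at beta=1.0, gamma_L=2.0\nprint(grad(lambda beta: rvoigt(nu_test, beta, 2.0).sum())(1.0))\n```",
"_____no_output_____"
]
],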
[
[
"nu=jnp.linspace(-10,10,100)\nplt.plot(nu, rvoigt(nu,1.0,2.0)) #beta=1.0, gamma_L=2.0",
"_____no_output_____"
]
],
[
[
"## optimization of a simple absorption model",
"_____no_output_____"
],
[
"Next, we try to fit a simple absorption model to mock data.\nThe absorption model is \n\n$ f= 1 - e^{-a V(\\nu,\\beta,\\gamma_L)}$\n",
"_____no_output_____"
]
],
[
[
"def absmodel(nu,a,beta,gamma_L):\n return 1.0 - jnp.exp(a*rvoigt(nu,beta,gamma_L))",
"_____no_output_____"
]
],
[
[
"Adding a noise...\n",
"_____no_output_____"
]
],
[
[
"from numpy.random import normal\ndata=absmodel(nu,2.0,1.0,2.0)+normal(0.0,0.01,len(nu))\nplt.plot(nu,data,\".\")",
"_____no_output_____"
]
],
[
[
"Let's optimize the multiple parameters",
"_____no_output_____"
]
],
[
[
"from jax import grad, vmap",
"_____no_output_____"
]
],
[
[
"We define the objective function as $obj = |d - f|^2$",
"_____no_output_____"
]
],
[
[
"# loss or objective function\ndef obj(a,beta,gamma_L):\n f=data-absmodel(nu,a,beta,gamma_L)\n g=jnp.dot(f,f)\n return g\n",
"_____no_output_____"
],
[
"#These are the derivative of the objective function\nh_a=grad(obj,argnums=0)\nh_beta=grad(obj,argnums=1)\nh_gamma_L=grad(obj,argnums=2)\nprint(h_a(2.0,1.0,2.0),h_beta(2.0,1.0,2.0),h_gamma_L(2.0,1.0,2.0))",
"0.0069304965 -0.0020095487 -0.0057327496\n"
],
[
"from jax import jit\n\n@jit\ndef step(t,opt_state):\n a,beta,gamma_L=get_params(opt_state)\n value=obj(a,beta,gamma_L)\n \n grads_a = h_a(a,beta,gamma_L)\n grads_beta = h_beta(a,beta,gamma_L)\n grads_gamma_L = h_gamma_L(a,beta,gamma_L)\n\n grads=jnp.array([grads_a,grads_beta,grads_gamma_L])\n \n opt_state = opt_update(t, grads, opt_state)\n return value, opt_state\n\ndef doopt(r0,opt_init,get_params,Nstep):\n opt_state = opt_init(r0)\n traj=[r0]\n for t in range(Nstep):\n value, opt_state = step(t, opt_state)\n p=get_params(opt_state)\n traj.append(p)\n return traj, p",
"_____no_output_____"
]
],
[
[
"Here, we use the ADAM optimizer",
"_____no_output_____"
]
],
[
[
"#adam\nfrom jax.experimental import optimizers\nopt_init, opt_update, get_params = optimizers.adam(1.e-1)\nr0 = jnp.array([1.5,1.5,1.5])\ntrajadam, padam=doopt(r0,opt_init,get_params,1000)",
"_____no_output_____"
]
],
[
[
"Optimized values are given in padam",
"_____no_output_____"
]
],
[
[
"padam",
"_____no_output_____"
],
[
"traj=jnp.array(trajadam)\nplt.plot(traj[:,0],label=\"$\\\\alpha$\")\nplt.plot(traj[:,1],ls=\"dashed\",label=\"$\\\\beta$\")\nplt.plot(traj[:,2],ls=\"dotted\",label=\"$\\\\gamma_L$\")\nplt.xscale(\"log\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(nu,data,\".\",label=\"data\")\nplt.plot(nu,absmodel(nu,padam[0],padam[1],padam[2]),label=\"optimized\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"Using SGD instead..., you need to increase the number of iteration for convergence",
"_____no_output_____"
]
],
[
[
"#sgd\nfrom jax.experimental import optimizers\nopt_init, opt_update, get_params = optimizers.sgd(1.e-1)\nr0 = jnp.array([1.5,1.5,1.5])\ntrajsgd, psgd=doopt(r0,opt_init,get_params,10000)",
"_____no_output_____"
],
[
"traj=jnp.array(trajsgd)\nplt.plot(traj[:,0],label=\"$\\\\alpha$\")\nplt.plot(traj[:,1],ls=\"dashed\",label=\"$\\\\beta$\")\nplt.plot(traj[:,2],ls=\"dotted\",label=\"$\\\\gamma_L$\")\nplt.xscale(\"log\")\nplt.legend()\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0543ed61c931cd5a805540a1e7a73797bb232a1 | 546,085 | ipynb | Jupyter Notebook | analysis.ipynb | loganrooks/rl-tic-tac-toe | 5c0a6c563eb01a41b0f4f24fbc47374e91d72488 | [
"MIT"
] | null | null | null | analysis.ipynb | loganrooks/rl-tic-tac-toe | 5c0a6c563eb01a41b0f4f24fbc47374e91d72488 | [
"MIT"
] | null | null | null | analysis.ipynb | loganrooks/rl-tic-tac-toe | 5c0a6c563eb01a41b0f4f24fbc47374e91d72488 | [
"MIT"
] | null | null | null | 892.295752 | 131,840 | 0.953454 | [
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport pickle",
"_____no_output_____"
],
[
"def load_obj(name ):\n with open('ttt/' + name + '.pkl', 'rb') as f:\n return pickle.load(f)",
"_____no_output_____"
],
[
"results = load_obj(\"results\")",
"_____no_output_____"
],
[
"hidden_units = []",
"_____no_output_____"
],
[
"def get_avg(list_,n_episodes):\n avg = np.abs(np.array(list_[:-n_episodes]).mean())\n return avg",
"_____no_output_____"
],
[
"def get_avgs(list_, n_samples=250):\n avgs = np.array(list_).reshape(-1, n_samples).mean(axis=1)\n return avgs",
"_____no_output_____"
],
[
"all_losses = []\nall_returns = []\nfor hidden_unit, result in results.items():\n hidden_units.append(hidden_unit)\n losses = result[\"loss\"]\n avg_reward = result[\"return\"][-1]\n all_returns.append(avg_reward)\n avg_loss = get_avg(losses, 100)\n all_losses.append(avg_loss)",
"_____no_output_____"
],
[
"print len(hidden_units)\nprint len(all_losses)",
"17\n17\n"
],
[
"plt.figure(figsize=(12,9))\nplt.xlabel(\"Hidden Size\")\nplt.ylabel(\"Policy Loss\")\nplt.title(\"Policy Loss versus Number of Hidden Units\")\nplt.scatter(hidden_units, all_losses, marker=\"x\")\nplt.savefig(\"figures/hidden_units_loss.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nplt.xlabel(\"Hidden Size\")\nplt.ylabel(\"Average Return\")\nplt.title(\"Average Return versus Number of Hidden Units\")\nplt.scatter(hidden_units, all_returns, marker=\"x\")\nplt.savefig(\"figures/hidden_units_returns.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"results_single_64 = load_obj(\"hidden_units_single_64\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nepisodes = range(1, 50001, 500)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Average Return\")\nplt.title(\"Average Return versus Episodes for 64 Hidden Units\")\nplt.plot(episodes, results_single_64['return'])\nplt.savefig(\"figures/return_learning_64.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nepisodes = range(1, 50001, 250)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Loss\")\nplt.title(\"Episode Loss versus Episodes for 64 Hidden Units\")\nplt.scatter(episodes, np.abs(get_avgs(results_single_64['loss'])), marker='x')\nplt.savefig(\"figures/loss_curve_64.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nepisodes = range(1, 50001, 250)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Number of Invalid Moves per Episode\")\nplt.title(\"Number of Invalid Moves versus Episodes for 64 Hidden Units\")\nplt.scatter(episodes, get_avgs(results_single_64['invalid']), marker='x')\nplt.savefig(\"figures/invalid_curve_64.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"results_single_256 = load_obj(\"hidden_units_single_256\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nepisodes = range(1, 50001, 500)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Average Return\")\nplt.title(\"Average Return versus Episodes for 256 Hidden Units\")\nplt.plot(episodes, results_single_256['return'])\nplt.savefig(\"figures/return_learning_256.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nepisodes = range(1, 50001, 250)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Loss\")\nplt.title(\"Episode Loss versus Episodes for 256 Hidden Units\")\nplt.scatter(episodes, get_avgs(results_single_256['loss']), marker='x')\nplt.savefig(\"figures/loss_curve_256.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nepisodes = range(1, 50001)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Number of Invalid Moves per Episode\")\nplt.title(\"Number of Invalid Moves versus Episodes for 256 Hidden Units\")\nplt.scatter(episodes, np.abs(results_single_256['invalid']), marker='x')\nplt.savefig(\"figures/invalid_curve_256.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"results_single_128 = load_obj(\"hidden_units_single_128\")\nresults_single_32 = load_obj(\"hidden_units_single_32\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nepisodes = range(1, 50001, 250)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Average Number of Invalid Moves per Episode\")\nplt.title(\"Average Number of Invalid Moves versus Episodes for Different Number of Hidden Units\")\nplt.plot(episodes, get_avgs(results_single_32['invalid']), label='32')\nplt.plot(episodes, get_avgs(results_single_64['invalid']), label='64')\nplt.plot(episodes, get_avgs(results_single_128['invalid']), label='128')\nplt.plot(episodes, get_avgs(results_single_256['invalid']), label='256')\nplt.legend()\nplt.savefig(\"figures/invalid_curve.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nn_samples = 500\nepisodes = range(1, 50001, n_samples)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Average Number of Invalid Moves per Episode\")\nplt.ylim(0, 0.1)\nplt.title(\"Average Number of Invalid Moves versus Episodes for Different Number of Hidden Units\")\nplt.plot(episodes, get_avgs(results_single_32['invalid'], n_samples), label='32')\nplt.plot(episodes, get_avgs(results_single_64['invalid'], n_samples), label='64')\nplt.plot(episodes, get_avgs(results_single_128['invalid'], n_samples), label='128')\nplt.plot(episodes, get_avgs(results_single_256['invalid'], n_samples), label='256')\nplt.legend()\nplt.savefig(\"figures/invalid_curve_close.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nepisodes = range(1, 50001, 500)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Average Return\")\nplt.title(\"Average Return versus Episodes for Different Number of Hidden Units\")\nplt.plot(episodes, results_single_32['return'], label='32')\nplt.plot(episodes, results_single_64['return'], label='64')\nplt.plot(episodes, results_single_128['return'], label='128')\nplt.plot(episodes, results_single_256['return'], label='256')\nplt.legend()\nplt.savefig(\"figures/return_learning.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nepisodes = range(1, 50001, 500)\nplt.xlabel(\"Episode\")\nplt.ylabel(\"Average Return\")\nplt.ylim(0, 10)\nplt.title(\"Average Return versus Episodes for Different Number of Hidden Units\")\nplt.plot(episodes, results_single_32['return'], label='32')\nplt.plot(episodes, results_single_64['return'], label='64')\nplt.plot(episodes, results_single_128['return'], label='128')\nplt.plot(episodes, results_single_256['return'], label='256')\nplt.legend()\nplt.savefig(\"figures/return_learning_close.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"ratio_results = results_single_32['ratio']",
"_____no_output_____"
],
[
"win = [result['win'] for result in ratio_results]\nlose = [result['lose'] for result in ratio_results]\ntie = [result['tie'] for result in ratio_results]\nepisodes = range(1, 101)",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nplt.xlabel(\"Episodes [% of 50000]\")\nplt.ylabel(\"Number of Episodes\")\nplt.title(\"Win / Lose / Tie Ratio for 500 Episodes Played Versus Number of Episodes Trained For\")\n\nplt.bar(episodes, win, label='win')\nplt.bar(episodes, lose, bottom=win, label='lose')\nplt.bar(episodes, tie, bottom=np.add(win,lose), label='tie')\nplt.legend()\nplt.savefig(\"figures/ratio_graph.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
],
[
"first_distributions = results_single_128[\"first\"]",
"_____no_output_____"
],
[
"move = {i: [distr[0, i] for distr in first_distributions] for i in range(9)}",
"_____no_output_____"
],
[
"move1 = move[0]\nmove2 = np.add(move[1], move1)\nmove3 = np.add(move[2], move2)\nmove4 = np.add(move[3], move3)\nmove5 = np.add(move[4], move4)\nmove6 = np.add(move[5], move5)\nmove7 = np.add(move[6], move6)\nmove8 = np.add(move[7], move7)\nepisodes = range(1, 101)",
"_____no_output_____"
],
[
"plt.figure(figsize=(12,9))\nplt.xlabel(\"Episodes [% of 50000]\")\nplt.ylabel(\"Number of Episodes\")\nplt.title(\"Win / Lose / Tie Ratio for 500 Episodes Played Versus Number of Episodes Trained For\")\n\nplt.bar(episodes, move[0], label='1')\nplt.bar(episodes, move[1], bottom=move1, label='2')\nplt.bar(episodes, move[2], bottom=move2, label='3')\nplt.bar(episodes, move[3], bottom=move3, label='4')\nplt.bar(episodes, move[4], bottom=move4, label='5')\nplt.bar(episodes, move[5], bottom=move5, label='6')\nplt.bar(episodes, move[6], bottom=move6, label='7')\nplt.bar(episodes, move[7], bottom=move7, label='8')\nplt.bar(episodes, move[8], bottom=move8, label='9')\n\nplt.legend()\nplt.savefig(\"figures/moves_distr_graph_128.png\", dpi=300, bbox_inches=\"tight\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d054502e537b726c38d1ce5bf13469a4ea6f22dd | 150,538 | ipynb | Jupyter Notebook | machine_learning_trading_bot.ipynb | djonathan/Algorithmic-Trading-ML | 8b10e2bd09e7352198459e480974837e2ba2992d | [
"MIT"
] | null | null | null | machine_learning_trading_bot.ipynb | djonathan/Algorithmic-Trading-ML | 8b10e2bd09e7352198459e480974837e2ba2992d | [
"MIT"
] | null | null | null | machine_learning_trading_bot.ipynb | djonathan/Algorithmic-Trading-ML | 8b10e2bd09e7352198459e480974837e2ba2992d | [
"MIT"
] | null | null | null | 67.657528 | 29,528 | 0.709223 | [
[
[
"# Machine Learning Trading Bot\n\nIn this Challenge, you’ll assume the role of a financial advisor at one of the top five financial advisory firms in the world. Your firm constantly competes with the other major firms to manage and automatically trade assets in a highly dynamic environment. In recent years, your firm has heavily profited by using computer algorithms that can buy and sell faster than human traders.\n\nThe speed of these transactions gave your firm a competitive advantage early on. But, people still need to specifically program these systems, which limits their ability to adapt to new data. You’re thus planning to improve the existing algorithmic trading systems and maintain the firm’s competitive advantage in the market. To do so, you’ll enhance the existing trading signals with machine learning algorithms that can adapt to new data.\n\n## Instructions:\n\nUse the starter code file to complete the steps that the instructions outline. The steps for this Challenge are divided into the following sections:\n\n* Establish a Baseline Performance\n\n* Tune the Baseline Trading Algorithm\n\n* Evaluate a New Machine Learning Classifier\n\n* Create an Evaluation Report\n\n#### Establish a Baseline Performance\n\nIn this section, you’ll run the provided starter code to establish a baseline performance for the trading algorithm. To do so, complete the following steps.\n\nOpen the Jupyter notebook. Restart the kernel, run the provided cells that correspond with the first three steps, and then proceed to step four. \n\n1. Import the OHLCV dataset into a Pandas DataFrame.\n\n2. Generate trading signals using short- and long-window SMA values. \n\n3. Split the data into training and testing datasets.\n\n4. Use the `SVC` classifier model from SKLearn's support vector machine (SVM) learning method to fit the training data and make predictions based on the testing data. Review the predictions.\n\n5. Review the classification report associated with the `SVC` model predictions. \n\n6. Create a predictions DataFrame that contains columns for “Predicted” values, “Actual Returns”, and “Strategy Returns”.\n\n7. Create a cumulative return plot that shows the actual returns vs. the strategy returns. Save a PNG image of this plot. This will serve as a baseline against which to compare the effects of tuning the trading algorithm.\n\n8. Write your conclusions about the performance of the baseline trading algorithm in the `README.md` file that’s associated with your GitHub repository. Support your findings by using the PNG image that you saved in the previous step.\n\n#### Tune the Baseline Trading Algorithm\n\nIn this section, you’ll tune, or adjust, the model’s input features to find the parameters that result in the best trading outcomes. (You’ll choose the best by comparing the cumulative products of the strategy returns.) To do so, complete the following steps:\n\n1. Tune the training algorithm by adjusting the size of the training dataset. To do so, slice your data into different periods. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. Answer the following question: What impact resulted from increasing or decreasing the training window?\n\n> **Hint** To adjust the size of the training dataset, you can use a different `DateOffset` value—for example, six months. Be aware that changing the size of the training dataset also affects the size of the testing dataset.\n\n2. Tune the trading algorithm by adjusting the SMA input features. 
Adjust one or both of the windows for the algorithm. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. Answer the following question: What impact resulted from increasing or decreasing either or both of the SMA windows?\n\n3. Choose the set of parameters that best improved the trading algorithm returns. Save a PNG image of the cumulative product of the actual returns vs. the strategy returns, and document your conclusion in your `README.md` file.\n\n#### Evaluate a New Machine Learning Classifier\n\nIn this section, you’ll use the original parameters that the starter code provided. But, you’ll apply them to the performance of a second machine learning model. To do so, complete the following steps:\n\n1. Import a new classifier, such as `AdaBoost`, `DecisionTreeClassifier`, or `LogisticRegression`. (For the full list of classifiers, refer to the [Supervised learning page](https://scikit-learn.org/stable/supervised_learning.html) in the scikit-learn documentation.)\n\n2. Using the original training data as the baseline model, fit another model with the new classifier.\n\n3. Backtest the new model to evaluate its performance. Save a PNG image of the cumulative product of the actual returns vs. the strategy returns for this updated trading algorithm, and write your conclusions in your `README.md` file. Answer the following questions: Did this new model perform better or worse than the provided baseline model? Did this new model perform better or worse than your tuned trading algorithm?\n\n#### Create an Evaluation Report\n\nIn the previous sections, you updated your `README.md` file with your conclusions. To accomplish this section, you need to add a summary evaluation report at the end of the `README.md` file. For this report, express your final conclusions and analysis. Support your findings by using the PNG images that you created.\n",
"_____no_output_____"
]
],
[
[
"# Imports\nimport pandas as pd\nimport numpy as np\nfrom pathlib import Path\nimport hvplot.pandas\nimport matplotlib.pyplot as plt\nfrom sklearn import svm\nfrom sklearn import metrics\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.preprocessing import StandardScaler\nfrom pandas.tseries.offsets import DateOffset\nfrom sklearn.metrics import classification_report",
"_____no_output_____"
]
],
[
[
"---\n\n## Establish a Baseline Performance\n\nIn this section, you’ll run the provided starter code to establish a baseline performance for the trading algorithm. To do so, complete the following steps.\n\nOpen the Jupyter notebook. Restart the kernel, run the provided cells that correspond with the first three steps, and then proceed to step four. \n",
"_____no_output_____"
],
[
"### Step 1: mport the OHLCV dataset into a Pandas DataFrame.",
"_____no_output_____"
]
],
[
[
"# Import the OHLCV dataset into a Pandas Dataframe\nohlcv_df = pd.read_csv(\n Path(\"./Resources/emerging_markets_ohlcv.csv\"), \n index_col='date', \n infer_datetime_format=True, \n parse_dates=True\n)\n\n# Review the DataFrame\nohlcv_df.head()",
"_____no_output_____"
],
[
"# Filter the date index and close columns\nsignals_df = ohlcv_df.loc[:, [\"close\"]]\n\n# Use the pct_change function to generate returns from close prices\nsignals_df[\"Actual Returns\"] = signals_df[\"close\"].pct_change()\n\n# Drop all NaN values from the DataFrame\nsignals_df = signals_df.dropna()\n\n# Review the DataFrame\ndisplay(signals_df.head())\ndisplay(signals_df.tail())",
"_____no_output_____"
]
],
[
[
"## Step 2: Generate trading signals using short- and long-window SMA values. ",
"_____no_output_____"
]
],
[
[
"# Set the short window and long window\nshort_window = 4\nlong_window = 100\n\n# Generate the fast and slow simple moving averages (4 and 100 days, respectively)\nsignals_df['SMA_Fast'] = signals_df['close'].rolling(window=short_window).mean()\nsignals_df['SMA_Slow'] = signals_df['close'].rolling(window=long_window).mean()\n\nsignals_df = signals_df.dropna()\n\n# Review the DataFrame\ndisplay(signals_df.head())\ndisplay(signals_df.tail())",
"_____no_output_____"
],
[
"# Initialize the new Signal column\nsignals_df['Signal'] = 0.0\n\n# When Actual Returns are greater than or equal to 0, generate signal to buy stock long\nsignals_df.loc[(signals_df['Actual Returns'] >= 0), 'Signal'] = 1\n\n# When Actual Returns are less than 0, generate signal to sell stock short\nsignals_df.loc[(signals_df['Actual Returns'] < 0), 'Signal'] = -1\n\n# Review the DataFrame\ndisplay(signals_df.head())\ndisplay(signals_df.tail())",
"_____no_output_____"
],
[
"signals_df['Signal'].value_counts()",
"_____no_output_____"
],
[
"# Calculate the strategy returns and add them to the signals_df DataFrame\nsignals_df['Strategy Returns'] = signals_df['Actual Returns'] * signals_df['Signal'].shift()\n\n# Review the DataFrame\ndisplay(signals_df.head())\ndisplay(signals_df.tail())",
"_____no_output_____"
],
[
"# Plot Strategy Returns to examine performance\n(1 + signals_df['Strategy Returns']).cumprod().plot()",
"_____no_output_____"
]
],
[
[
"### Step 3: Split the data into training and testing datasets.",
"_____no_output_____"
]
],
[
[
"# Assign a copy of the sma_fast and sma_slow columns to a features DataFrame called X\nX = signals_df[['SMA_Fast', 'SMA_Slow']].shift().dropna()\n\n# Review the DataFrame\nX.head()",
"_____no_output_____"
],
[
"# Create the target set selecting the Signal column and assiging it to y\ny = signals_df['Signal']\n\n# Review the value counts\ny.value_counts()",
"_____no_output_____"
],
[
"# Select the start of the training period\ntraining_begin = X.index.min()\n\n# Display the training begin date\nprint(training_begin)",
"2015-04-02 15:00:00\n"
],
[
"# Select the ending period for the training data with an offset of 3 months\ntraining_end = X.index.min() + DateOffset(months=3)\n\n# Display the training end date\nprint(training_end)",
"2015-07-02 15:00:00\n"
],
[
"# Generate the X_train and y_train DataFrames\nX_train = X.loc[training_begin:training_end]\ny_train = y.loc[training_begin:training_end]\n\n# Review the X_train DataFrame\nX_train.head()",
"_____no_output_____"
],
[
"# Generate the X_test and y_test DataFrames\nX_test = X.loc[training_end+DateOffset(hours=1):]\ny_test = y.loc[training_end+DateOffset(hours=1):]\n\n# Review the X_test DataFrame\nX_train.head()",
"_____no_output_____"
],
[
"# Scale the features DataFrames\n\n# Create a StandardScaler instance\nscaler = StandardScaler()\n\n# Apply the scaler model to fit the X-train data\nX_scaler = scaler.fit(X_train)\n\n# Transform the X_train and X_test DataFrames using the X_scaler\nX_train_scaled = X_scaler.transform(X_train)\nX_test_scaled = X_scaler.transform(X_test)",
"_____no_output_____"
]
],
[
[
"### Step 4: Use the `SVC` classifier model from SKLearn's support vector machine (SVM) learning method to fit the training data and make predictions based on the testing data. Review the predictions.",
"_____no_output_____"
]
],
[
[
"# From SVM, instantiate SVC classifier model instance\nsvm_model = svm.SVC()\n \n# Fit the model to the data using the training data\nsvm_model = svm_model.fit(X_train_scaled, y_train)\n \n# Use the testing data to make the model predictions\nsvm_pred = svm_model.predict(X_test_scaled)\n\n# Review the model's predicted values\nsvm_pred[:10]\n",
"_____no_output_____"
]
],
[
[
"### Step 5: Review the classification report associated with the `SVC` model predictions. ",
"_____no_output_____"
]
],
[
[
"# Use a classification report to evaluate the model using the predictions and testing data\nsvm_testing_report = classification_report(y_test, svm_pred)\n\n# Print the classification report\nprint(svm_testing_report)",
" precision recall f1-score support\n\n -1.0 0.43 0.04 0.07 1804\n 1.0 0.56 0.96 0.71 2288\n\n accuracy 0.55 4092\n macro avg 0.49 0.50 0.39 4092\nweighted avg 0.50 0.55 0.43 4092\n\n"
]
],
[
[
"### Step 6: Create a predictions DataFrame that contains columns for “Predicted” values, “Actual Returns”, and “Strategy Returns”.",
"_____no_output_____"
]
],
[
[
"# Create a new empty predictions DataFrame.\n\n# Create a predictions DataFrame\npredictions_df = pd.DataFrame(index=X_test.index)\n\n# Add the SVM model predictions to the DataFrame\npredictions_df['Predicted'] = svm_pred\n\n# Add the actual returns to the DataFrame\npredictions_df['Actual Returns'] = signals_df['Actual Returns']\n\n# Add the strategy returns to the DataFrame\npredictions_df['Strategy Returns'] = predictions_df['Predicted'] * predictions_df['Actual Returns']\n\n# Review the DataFrame\ndisplay(predictions_df.head())\ndisplay(predictions_df.tail())",
"_____no_output_____"
]
],
[
[
"### Step 7: Create a cumulative return plot that shows the actual returns vs. the strategy returns. Save a PNG image of this plot. This will serve as a baseline against which to compare the effects of tuning the trading algorithm.",
"_____no_output_____"
]
],
[
[
"# Plot the actual returns versus the strategy returns\nbaseline_actual_vs_stragegy_plot = (1 + predictions_df[['Actual Returns', 'Strategy Returns']]).cumprod().plot(title=\"Baseline\")\nbaseline_actual_vs_stragegy_plot.get_figure().savefig('Baseline_actual_vs_strategy.png',bbox_inches='tight')\n(1 + predictions_df[['Actual Returns', 'Strategy Returns']]).cumprod().tail(1)",
"_____no_output_____"
]
],
[
[
"---\n\n## Tune the Baseline Trading Algorithm",
"_____no_output_____"
],
[
"## Step 6: Use an Alternative ML Model and Evaluate Strategy Returns",
"_____no_output_____"
],
[
"In this section, you’ll tune, or adjust, the model’s input features to find the parameters that result in the best trading outcomes. You’ll choose the best by comparing the cumulative products of the strategy returns.",
"_____no_output_____"
],
[
"### Step 1: Tune the training algorithm by adjusting the size of the training dataset. \n\nTo do so, slice your data into different periods. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. \n\nAnswer the following question: What impact resulted from increasing or decreasing the training window?",
"_____no_output_____"
],
[
"### Step 2: Tune the trading algorithm by adjusting the SMA input features. \n\nAdjust one or both of the windows for the algorithm. Rerun the notebook with the updated parameters, and record the results in your `README.md` file. \n\nAnswer the following question: What impact resulted from increasing or decreasing either or both of the SMA windows?",
"_____no_output_____"
],
[
"### Step 3: Choose the set of parameters that best improved the trading algorithm returns. \n\nSave a PNG image of the cumulative product of the actual returns vs. the strategy returns, and document your conclusion in your `README.md` file.",
"_____no_output_____"
],
[
"---\n\n## Evaluate a New Machine Learning Classifier\n\nIn this section, you’ll use the original parameters that the starter code provided. But, you’ll apply them to the performance of a second machine learning model. ",
"_____no_output_____"
],
[
"### Step 1: Import a new classifier, such as `AdaBoost`, `DecisionTreeClassifier`, or `LogisticRegression`. (For the full list of classifiers, refer to the [Supervised learning page](https://scikit-learn.org/stable/supervised_learning.html) in the scikit-learn documentation.)",
"_____no_output_____"
]
],
[
[
"# Initiate the model instance\nabc = AdaBoostClassifier(n_estimators=50)\n",
"_____no_output_____"
]
],
[
[
"### Step 2: Using the original training data as the baseline model, fit another model with the new classifier.",
"_____no_output_____"
]
],
[
[
"# Fit the model using the training data\nmodel = abc.fit(X_train_scaled, y_train)\n\n# Use the testing dataset to generate the predictions for the new model\nabc_pred = model.predict(X_test_scaled)\n\n# Review the model's predicted values\nabc_pred[:10]\n",
"_____no_output_____"
]
],
[
[
"### Step 3: Backtest the new model to evaluate its performance. \n\nSave a PNG image of the cumulative product of the actual returns vs. the strategy returns for this updated trading algorithm, and write your conclusions in your `README.md` file. \n\nAnswer the following questions: \nDid this new model perform better or worse than the provided baseline model? \nDid this new model perform better or worse than your tuned trading algorithm?",
"_____no_output_____"
]
],
[
[
"print(\"Accuracy:\",metrics.accuracy_score(y_test, abc_pred))\n\n# Use a classification report to evaluate the model using the predictions and testing data\nabc_testing_report = classification_report(y_test, abc_pred)\n\n# Print the classification report\nprint(abc_testing_report)\n",
"Accuracy: 0.5505865102639296\n precision recall f1-score support\n\n -1.0 0.44 0.08 0.13 1804\n 1.0 0.56 0.92 0.70 2288\n\n accuracy 0.55 4092\n macro avg 0.50 0.50 0.41 4092\nweighted avg 0.51 0.55 0.45 4092\n\n"
],
[
"# Create a new empty predictions DataFrame.\nabc_pred_df = pd.DataFrame(index=X_test.index)\n\n# Add the ABC model predictions to the DataFrame\nabc_pred_df['Predicted'] = abc_pred\n\n# Add the actual returns to the DataFrame\nabc_pred_df['Actual Returns'] = signals_df['Actual Returns']\n\n# Add the strategy returns to the DataFrame\nabc_pred_df['Strategy Returns'] = abc_pred_df['Predicted'] * abc_pred_df['Actual Returns']\n\n# Review the DataFrame\ndisplay(abc_pred_df.head(3))\ndisplay(abc_pred_df.tail(3))\n",
"_____no_output_____"
],
[
"# Plot the actual returns versus the strategy returns\nabc_strategy_plot = (1 + abc_pred_df[['Actual Returns', 'Strategy Returns']]).cumprod().plot(title=\"AdaBoost: 3-month Train, SMA 4/100\")\nabc_strategy_plot.get_figure().savefig('AdaBoost_actual_vs_strategy.png',bbox_inches='tight')\n(1 + abc_pred_df[['Actual Returns', 'Strategy Returns']]).cumprod().tail(1)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d05457b108c8a56a15ed91cb8972dbf7bb145580 | 3,119 | ipynb | Jupyter Notebook | docs/operations.ipynb | Vykstorm/rlambda | cc8b9f1595e53e4449317546be3fb23198cc3a1d | [
"MIT"
] | 1 | 2019-05-25T11:06:34.000Z | 2019-05-25T11:06:34.000Z | docs/operations.ipynb | Vykstorm/rlambda | cc8b9f1595e53e4449317546be3fb23198cc3a1d | [
"MIT"
] | null | null | null | docs/operations.ipynb | Vykstorm/rlambda | cc8b9f1595e53e4449317546be3fb23198cc3a1d | [
"MIT"
] | null | null | null | 18.90303 | 125 | 0.417441 | [
[
[
"You can build rlambda objects using any python arithmetic, comparision and bitwise operators. Here are some examples...",
"_____no_output_____"
]
],
[
[
"from rlambda.abc import x, y, z",
"_____no_output_____"
],
[
"print((x + 1) + (y - 1) / z)\nprint((x % 2) // y + z ** 2)",
"x, y, z : (x + 1) + (y - 1) / z\nx, y, z : (x % 2) // y + z ** 2\n"
],
[
"print((x + 1) ** 2 > (y * 2))\nprint(x != y)\nprint(x ** 2 == y)",
"x, y : (x + 1) ** 2 > y * 2\nx, y : x != y\nx, y : y == x ** 2\n"
],
[
"print((x > y) & (y > z))\nprint((x < 0) | (y < 0))\nprint(~(x > 0) ^ ~(y > 0))\nprint((x << 1) + (y >> 1))",
"x, y, z : (x > y) & (y > z)\nx, y : (x < 0) | (y < 0)\nx, y : ~(x > 0) ^ ~(y > 0)\nx, y : (x << 1) + (y >> 1)\n"
]
],
[
[
"You can use subscripting and indexing operations...",
"_____no_output_____"
]
],
[
[
"print(x[2:] + y[:2])\nprint(x[::2] + y[1::2])\nprint(x[1, 0:2])",
"x, y : x[2:] + y[:2]\nx, y : x[::2] + y[1::2]\nx : x[1, 0:2]\n"
],
[
"f = x.imag ** 2 + x.real * 2\nprint(f)\nf(complex(1, 2))",
"x : x.imag ** 2 + x.real * 2\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0546512a57943ddf582f3b5916d7642583c4d67 | 377,404 | ipynb | Jupyter Notebook | VGG16 in Keras.ipynb | bilalkhann16/VGG16-In-Keras | d5f09650b523f5382011680822bcdce1b6b1abea | [
"Apache-2.0"
] | 40 | 2019-08-21T08:11:15.000Z | 2022-03-14T03:22:52.000Z | VGG16 in Keras.ipynb | bilalkhann16/VGG16-In-Keras | d5f09650b523f5382011680822bcdce1b6b1abea | [
"Apache-2.0"
] | 6 | 2019-12-02T17:19:17.000Z | 2021-11-20T07:42:39.000Z | VGG16 in Keras.ipynb | bilalkhann16/VGG16-In-Keras | d5f09650b523f5382011680822bcdce1b6b1abea | [
"Apache-2.0"
] | 47 | 2019-08-21T08:44:11.000Z | 2022-03-09T16:17:29.000Z | 106.611299 | 135,600 | 0.788712 | [
[
[
"!wget --no-check-certificate \\\n https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \\\n -O cats_and_dogs_filtered.zip",
"_____no_output_____"
],
[
"! unzip cats_and_dogs_filtered.zip",
"Archive: cats_and_dogs_filtered.zip\r\n creating: cats_and_dogs_filtered/\r\n inflating: cats_and_dogs_filtered/vectorize.py \r\n creating: cats_and_dogs_filtered/validation/\r\n creating: cats_and_dogs_filtered/train/\r\n creating: cats_and_dogs_filtered/validation/dogs/\r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2127.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2126.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2125.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2124.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2123.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2122.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2121.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2120.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2119.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2118.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2117.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2116.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2115.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2114.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2113.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2112.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2111.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2110.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2109.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2108.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2107.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2106.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2105.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2104.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2103.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2102.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2101.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2100.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2099.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2098.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2097.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2096.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2095.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2094.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2093.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2092.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2091.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2090.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2089.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2088.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2087.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2086.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2085.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2084.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2083.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2082.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2081.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2080.jpg \r\n inflating: 
cats_and_dogs_filtered/validation/dogs/dog.2079.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2078.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2077.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2076.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2075.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2074.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2073.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2072.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2071.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2070.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2069.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2068.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2067.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2066.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2065.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2064.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2063.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2062.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2061.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2060.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2059.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2058.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2057.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2056.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2055.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2054.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2053.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2052.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2051.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2050.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2049.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2048.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2047.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2046.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2045.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2044.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2043.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2042.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2041.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2040.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2039.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2038.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2037.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2036.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2035.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2034.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2033.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2032.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2031.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2030.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2029.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2028.jpg \r\n inflating: 
cats_and_dogs_filtered/validation/dogs/dog.2027.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2026.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2025.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2024.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2023.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2022.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2021.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2020.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2019.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2018.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2017.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2016.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2015.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2014.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2013.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2012.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2011.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2010.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2009.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2008.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2007.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2006.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2005.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2004.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2003.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2002.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2001.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2000.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2255.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2254.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2253.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2252.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2251.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2250.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2249.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2248.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2247.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2246.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2245.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2244.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2243.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2242.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2241.jpg \r\n inflating: cats_and_dogs_filtered/validation/dogs/dog.2240.jpg \r\n"
],
[
"import keras,os\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Conv2D, MaxPool2D , Flatten\nfrom keras.preprocessing.image import ImageDataGenerator\nimport numpy as np",
"Using TensorFlow backend.\n"
],
[
"trdata = ImageDataGenerator()\ntraindata = trdata.flow_from_directory(directory=\"cats_and_dogs_filtered/train\",target_size=(224,224))\ntsdata = ImageDataGenerator()\ntestdata = tsdata.flow_from_directory(directory=\"cats_and_dogs_filtered/validation\", target_size=(224,224))",
"Found 2000 images belonging to 2 classes.\nFound 1000 images belonging to 2 classes.\n"
],
[
"model = Sequential()",
"_____no_output_____"
],
[
"model.add(Conv2D(input_shape=(224,224,3),filters=64,kernel_size=(3,3),padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=64,kernel_size=(3,3),padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=128, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=128, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=256, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=256, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=256, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Conv2D(filters=512, kernel_size=(3,3), padding=\"same\", activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))",
"_____no_output_____"
],
[
"model.add(Flatten())",
"_____no_output_____"
],
[
"model.add(Dense(units=4096,activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Dense(units=4096,activation=\"relu\"))",
"_____no_output_____"
],
[
"model.add(Dense(units=2, activation=\"softmax\"))",
"_____no_output_____"
],
[
"from keras.optimizers import Adam\nopt = Adam(lr=0.001)\nmodel.compile(optimizer=opt, loss=keras.losses.categorical_crossentropy, metrics=['accuracy'])",
"_____no_output_____"
],
[
"model.summary()",
"_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_1 (Conv2D) (None, 224, 224, 64) 1792 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 224, 224, 64) 36928 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 112, 112, 64) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 112, 112, 128) 73856 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 112, 112, 128) 147584 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 56, 56, 128) 0 \n_________________________________________________________________\nconv2d_5 (Conv2D) (None, 56, 56, 256) 295168 \n_________________________________________________________________\nconv2d_6 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nconv2d_7 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 28, 28, 256) 0 \n_________________________________________________________________\nconv2d_8 (Conv2D) (None, 28, 28, 512) 1180160 \n_________________________________________________________________\nconv2d_9 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nconv2d_10 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nmax_pooling2d_4 (MaxPooling2 (None, 14, 14, 512) 0 \n_________________________________________________________________\nconv2d_11 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nconv2d_12 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nconv2d_13 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nmax_pooling2d_5 (MaxPooling2 (None, 7, 7, 512) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 25088) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 4096) 102764544 \n_________________________________________________________________\ndense_2 (Dense) (None, 4096) 16781312 \n_________________________________________________________________\ndense_3 (Dense) (None, 2) 8194 \n=================================================================\nTotal params: 134,268,738\nTrainable params: 134,268,738\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"from keras.callbacks import ModelCheckpoint, EarlyStopping\ncheckpoint = ModelCheckpoint(\"vgg16_1.h5\", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1)\nearly = EarlyStopping(monitor='val_acc', min_delta=0, patience=20, verbose=1, mode='auto')\n",
"_____no_output_____"
],
[
"hist = model.fit_generator(steps_per_epoch=100,generator=traindata, validation_data= testdata, validation_steps=10,epochs=100,callbacks=[checkpoint,early])",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nplt.plot(hist.history[\"acc\"])\nplt.plot(hist.history['val_acc'])\nplt.plot(hist.history['loss'])\nplt.plot(hist.history['val_loss'])\nplt.title(\"model accuracy\")\nplt.ylabel(\"Accuracy\")\nplt.xlabel(\"Epoch\")\nplt.legend([\"Accuracy\",\"Validation Accuracy\",\"loss\",\"Validation Loss\"])\nplt.show()",
"_____no_output_____"
],
[
"from keras.preprocessing import image\nimg = image.load_img(\"Pomeranian_01.jpeg\",target_size=(224,224))\nimg = np.asarray(img)\nplt.imshow(img)\nimg = np.expand_dims(img, axis=0)\nfrom keras.models import load_model\nsaved_model = load_model(\"vgg16_1.h5\")\noutput = saved_model.predict(img)\nif output[0][0] > output[0][1]:\n print(\"cat\")\nelse:\n print('dog')",
"dog\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0547038a8e683cd62c38e714028b3a15826e567 | 26,560 | ipynb | Jupyter Notebook | samples/Untitled.ipynb | akio-tomiya/Gaugefields.jl | dd2180dfe54eba7826ddd45a13ab2f5a007857d1 | [
"MIT"
] | 1 | 2022-01-24T14:21:45.000Z | 2022-01-24T14:21:45.000Z | samples/Untitled.ipynb | akio-tomiya/Gaugefields.jl | dd2180dfe54eba7826ddd45a13ab2f5a007857d1 | [
"MIT"
] | 12 | 2022-01-18T01:51:48.000Z | 2022-03-25T01:14:03.000Z | samples/Untitled.ipynb | akio-tomiya/Gaugefields.jl | dd2180dfe54eba7826ddd45a13ab2f5a007857d1 | [
"MIT"
] | null | null | null | 42.360447 | 148 | 0.587877 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0548838f2d882bd78bf4ab7a5520834e37ddf35 | 13,303 | ipynb | Jupyter Notebook | datalab/genomics/Getting started with the Genomics API.ipynb | googlegenomics/datalab-examples | 609542b7d437a5111ea847f49589ed1a025c9453 | [
"Apache-2.0"
] | 24 | 2015-12-06T01:22:34.000Z | 2022-01-15T19:44:56.000Z | datalab/genomics/Getting started with the Genomics API.ipynb | googlegenomics/datalab-examples | 609542b7d437a5111ea847f49589ed1a025c9453 | [
"Apache-2.0"
] | 3 | 2015-10-23T22:24:15.000Z | 2016-02-03T23:27:37.000Z | datalab/genomics/Getting started with the Genomics API.ipynb | googlegenomics/datalab-examples | 609542b7d437a5111ea847f49589ed1a025c9453 | [
"Apache-2.0"
] | 7 | 2015-10-23T22:22:31.000Z | 2020-08-13T06:53:59.000Z | 27.714583 | 457 | 0.577163 | [
[
[
"<!-- Copyright 2015 Google Inc. All rights reserved. -->\n\n<!-- Licensed under the Apache License, Version 2.0 (the \"License\"); -->\n<!-- you may not use this file except in compliance with the License. -->\n<!-- You may obtain a copy of the License at -->\n\n<!-- http://www.apache.org/licenses/LICENSE-2.0 -->\n\n<!-- Unless required by applicable law or agreed to in writing, software -->\n<!-- distributed under the License is distributed on an \"AS IS\" BASIS, -->\n<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -->\n<!-- See the License for the specific language governing permissions and -->\n<!-- limitations under the License. -->\n\n# Getting started with the Google Genomics API",
"_____no_output_____"
],
[
"In this notebook we'll cover how to make authenticated requests to the [Google Genomics API](https://cloud.google.com/genomics/reference/rest/).\n\n----\n\nNOTE:\n\n* If you're new to notebooks, or want to check out additional samples, check out the full [list](../) of general notebooks.\n* For additional Genomics samples, check out the full [list](./) of Genomics notebooks.",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
],
[
"### Install Python libraries",
"_____no_output_____"
],
[
"We'll be using the [Google Python API client](https://github.com/google/google-api-python-client) for interacting with Genomics API. We can install this library, or any other 3rd-party Python libraries from the [Python Package Index (PyPI)](https://pypi.python.org/pypi) using the `pip` package manager.\n\nThere are [50+ Google APIs](http://api-python-client-doc.appspot.com/) that you can work against with the Google Python API Client, but we'll focus on the Genomics API in this notebook.",
"_____no_output_____"
]
],
[
[
"!pip install --upgrade google-api-python-client",
"Requirement already up-to-date: google-api-python-client in /usr/local/lib/python2.7/dist-packages\nCleaning up...\n"
]
],
[
[
"### Create an Authenticated Client",
"_____no_output_____"
],
[
"Next we construct a Python object that we can use it to make requests. \n\nThe following snippet shows how we can authenticate using the service account on the Datalab host. For more detail about authentication from Python, see [Using OAuth 2.0 for Server to Server Applications](https://developers.google.com/api-client-library/python/auth/service-accounts).",
"_____no_output_____"
]
],
[
[
"from httplib2 import Http\nfrom oauth2client.client import GoogleCredentials\ncredentials = GoogleCredentials.get_application_default()\nhttp = Http()\ncredentials.authorize(http)\n",
"_____no_output_____"
]
],
[
[
"And then we create a client for the Genomics API.",
"_____no_output_____"
]
],
[
[
"from apiclient.discovery import build\ngenomics = build('genomics', 'v1', http=http)",
"_____no_output_____"
]
],
[
[
"### Send a request to the Genomics API",
"_____no_output_____"
],
[
"Now that we have a Python client for the Genomics API, we can access a variety of different resources. For details about each available resource, see the python client [API docs here](https://google-api-client-libraries.appspot.com/documentation/genomics/v1/python/latest/index.html).\n\nUsing our `genomics` client, we'll demonstrate fetching a Dataset resource by ID (the [1000 Genomes dataset](http://googlegenomics.readthedocs.org/en/latest/use_cases/discover_public_data/1000_genomes.html) in this case).\n\nFirst, we need to construct a request object.",
"_____no_output_____"
]
],
[
[
"request = genomics.datasets().get(datasetId='10473108253681171589')",
"_____no_output_____"
]
],
[
[
"Next, we'll send this request to the Genomics API by calling the `request.execute()` method.",
"_____no_output_____"
]
],
[
[
"response = request.execute()",
"_____no_output_____"
]
],
[
[
"You will need enable the Genomics API for your project if you have not done so previously. Click on [this link](https://console.developers.google.com/flows/enableapi?apiid=genomics) to enable the API in your project.",
"_____no_output_____"
],
[
"The response object returned is simply a Python dictionary. Let's take a look at the properties returned in the response.",
"_____no_output_____"
]
],
[
[
"for entry in response.items():\n print \"%s => %s\" % entry",
"projectId => genomics-public-data\nid => 10473108253681171589\ncreateTime => 1970-01-01T00:00:00.000Z\nname => 1000 Genomes\n"
]
],
[
[
"Success! We can see the name of the specified Dataset and a few other pieces of metadata.\n\nAccessing other Genomics API resources will follow this same set of steps. The full [list of available resources within the API is here](https://google-api-client-libraries.appspot.com/documentation/genomics/v1/python/latest/index.html). Each resource has details about the different verbs that can be applied (e.g., [Dataset methods](https://google-api-client-libraries.appspot.com/documentation/genomics/v1/python/latest/genomics_v1.datasets.html)).",
"_____no_output_____"
],
[
"## Access Data",
"_____no_output_____"
],
[
"In this portion of the notebook, we implement [this same example](https://github.com/googlegenomics/getting-started-with-the-api/tree/master/python) implemented as a python script. First let's define a few constants to use within the examples that follow.",
"_____no_output_____"
]
],
[
[
"dataset_id = '10473108253681171589' # This is the 1000 Genomes dataset ID\nsample = 'NA12872'\nreference_name = '22'\nreference_position = 51003835",
"_____no_output_____"
]
],
[
[
"### Get read bases for a sample at specific a position",
"_____no_output_____"
],
[
"First find the read group set ID for the sample.",
"_____no_output_____"
]
],
[
[
"request = genomics.readgroupsets().search(\n body={'datasetIds': [dataset_id], 'name': sample},\n fields='readGroupSets(id)')\nread_group_sets = request.execute().get('readGroupSets', [])\nif len(read_group_sets) != 1:\n raise Exception('Searching for %s didn\\'t return '\n 'the right number of read group sets' % sample)\n\nread_group_set_id = read_group_sets[0]['id']",
"_____no_output_____"
]
],
[
[
"Once we have the read group set ID, lookup the reads at the position in which we are interested.",
"_____no_output_____"
]
],
[
[
"request = genomics.reads().search(\n body={'readGroupSetIds': [read_group_set_id],\n 'referenceName': reference_name,\n 'start': reference_position,\n 'end': reference_position + 1,\n 'pageSize': 1024},\n fields='alignments(alignment,alignedSequence)')\nreads = request.execute().get('alignments', [])",
"_____no_output_____"
]
],
[
[
"And we print out the results.",
"_____no_output_____"
]
],
[
[
"# Note: This is simplistic - the cigar should be considered for real code\nbases = [read['alignedSequence'][\n reference_position - int(read['alignment']['position']['position'])]\n for read in reads]\n\nprint '%s bases on %s at %d are' % (sample, reference_name, reference_position)\n\nfrom collections import Counter\nfor base, count in Counter(bases).items():\n print '%s: %s' % (base, count)",
"NA12872 bases on 22 at 51003835 are\nC: 1\nG: 13\n"
]
],
[
[
"### Get variants for a sample at specific a position",
"_____no_output_____"
],
[
"First find the call set ID for the sample.",
"_____no_output_____"
]
],
[
[
"request = genomics.callsets().search(\n body={'variantSetIds': [dataset_id], 'name': sample},\n fields='callSets(id)')\nresp = request.execute()\ncall_sets = resp.get('callSets', [])\nif len(call_sets) != 1:\n raise Exception('Searching for %s didn\\'t return '\n 'the right number of call sets' % sample)\n\ncall_set_id = call_sets[0]['id']",
"_____no_output_____"
]
],
[
[
"Once we have the call set ID, lookup the variants that overlap the position in which we are interested.",
"_____no_output_____"
]
],
[
[
"request = genomics.variants().search(\n body={'callSetIds': [call_set_id],\n 'referenceName': reference_name,\n 'start': reference_position,\n 'end': reference_position + 1},\n fields='variants(names,referenceBases,alternateBases,calls(genotype))')\nvariant = request.execute().get('variants', [])[0]",
"_____no_output_____"
]
],
[
[
"And we print out the results.",
"_____no_output_____"
]
],
[
[
"variant_name = variant['names'][0]\ngenotype = [variant['referenceBases'] if g == 0\n else variant['alternateBases'][g - 1]\n for g in variant['calls'][0]['genotype']]\n\nprint 'the called genotype is %s for %s' % (','.join(genotype), variant_name)",
"the called genotype is G,G for rs131767\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d054883ad1220e40f6d49e28166a80054571593c | 137,238 | ipynb | Jupyter Notebook | HW2/HW2.ipynb | kaahanmotwani/CS361 | 95673b5f9fd139a3ae614b1a0d17f9a69584a266 | [
"MIT"
] | null | null | null | HW2/HW2.ipynb | kaahanmotwani/CS361 | 95673b5f9fd139a3ae614b1a0d17f9a69584a266 | [
"MIT"
] | null | null | null | HW2/HW2.ipynb | kaahanmotwani/CS361 | 95673b5f9fd139a3ae614b1a0d17f9a69584a266 | [
"MIT"
] | null | null | null | 98.590517 | 33,684 | 0.768118 | [
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"# Question 4",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('data-hw2.csv')\ndf",
"_____no_output_____"
],
[
"plt.figure(figsize=(8,8))\nplt.scatter(df['LUNG'], df['CIG'])\nplt.xlabel(\"LUNG DEATHS\")\nplt.ylabel(\"CIG SALES\")\nplt.title(\"Scatter plot of Lung Cancer Deaths vs. Cigarette Sales\")\nfor i in range(len(df)):\n plt.annotate(df.iloc[i]['STATE'], xy=(df.iloc[i]['LUNG'], df.iloc[i]['CIG']))",
"_____no_output_____"
],
[
"df.corr()",
"_____no_output_____"
],
[
"df_clean = df\ndf_clean = df_clean.drop([6, 24], axis=0)\ndf_clean",
"_____no_output_____"
],
[
"plt.figure(figsize=(8,8))\nplt.scatter(df_clean['LUNG'], df_clean['CIG'])\nplt.xlabel(\"LUNG DEATHS\")\nplt.ylabel(\"CIG SALES\")\nplt.title(\"Scatter plot of Lung Cancer Deaths vs. Cigarette Sales\")\nfor i in range(len(df_clean)):\n plt.annotate(df_clean.iloc[i]['STATE'], xy=(df_clean.iloc[i]['LUNG'], df_clean.iloc[i]['CIG']))",
"_____no_output_____"
],
[
"df_clean.corr()",
"_____no_output_____"
]
],
[
[
"# Question 5",
"_____no_output_____"
]
],
[
[
"df_ko = pd.read_csv('KO.csv')\ndf_pep = pd.read_csv('PEP.csv')\ndel df_ko['Open'], df_ko['High'], df_ko['Low'], df_ko['Close'], df_ko['Volume']\ndel df_pep['Open'], df_pep['High'], df_pep['Low'], df_pep['Close'], df_pep['Volume']\ndf_comb = pd.DataFrame(columns=[\"Date\", \"KO Adj Close\", \"PEP Adj Close\"])\ndf_comb[\"Date\"] = df_ko[\"Date\"]\ndf_comb[\"KO Adj Close\"] = df_ko[\"Adj Close\"]\ndf_comb[\"PEP Adj Close\"] = df_pep[\"Adj Close\"]",
"_____no_output_____"
],
[
"df_comb.corr()",
"_____no_output_____"
],
[
"x_vals = np.array([np.min(df_comb[\"KO Adj Close\"]), np.max(df_comb[\"PEP Adj Close\"])])\nx_vals_standardized = (x_vals-df_comb[\"KO Adj Close\"].mean())/df_comb[\"KO Adj Close\"].std(ddof=0)\ny_predictions_standardized = df_comb.corr()[\"KO Adj Close\"][\"PEP Adj Close\"]*x_vals_standardized\ny_predictions = y_predictions_standardized*df_comb[\"PEP Adj Close\"].std(ddof=0)+df_comb[\"PEP Adj Close\"].mean()\nplt.figure(figsize=(8,8))\nplt.scatter(df_comb['KO Adj Close'], df_comb['PEP Adj Close'])\nplt.xlabel(\"KO Daily Adj Close Price\")\nplt.ylabel(\"PEP Daily Adj Close Price\")\nplt.title(\"Scatter plot of KO Daily Adj Close Price vs. PEP Daily Adj Close Price with prediction line\")\nplt.plot(x_vals, y_predictions, 'r', linewidth=2)\nplt.xlim(35, 60)\nplt.ylim(100, 145)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d054896ec3566783a3eac2d7c6f146124c120885 | 69,473 | ipynb | Jupyter Notebook | Transformer/LA_Transformer_Oneshot_clean.ipynb | McStevenss/reid-keras-padel | c43716fdccf9348cff38bc4d3b1b34d1083a23b0 | [
"MIT"
] | null | null | null | Transformer/LA_Transformer_Oneshot_clean.ipynb | McStevenss/reid-keras-padel | c43716fdccf9348cff38bc4d3b1b34d1083a23b0 | [
"MIT"
] | null | null | null | Transformer/LA_Transformer_Oneshot_clean.ipynb | McStevenss/reid-keras-padel | c43716fdccf9348cff38bc4d3b1b34d1083a23b0 | [
"MIT"
] | null | null | null | 34,736.5 | 69,472 | 0.742461 | [
[
[
"## Setup",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"!ls /content/drive/MyDrive/ColabNotebooks/Transformer\n",
" LA-Transformer\t\t\t'LA-Transformer Training.html'\n LA_Transformer.ipynb\t\t'LA-Transformer Training.ipynb'\n LA_Transformer_Oneshot.ipynb\t LICENSE\n'LA-Transformer Testing.html'\t Readme.md\n'LA-Transformer Testing.ipynb'\n"
],
[
"!nvcc --version",
"nvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2020 NVIDIA Corporation\nBuilt on Mon_Oct_12_20:09:46_PDT_2020\nCuda compilation tools, release 11.1, V11.1.105\nBuild cuda_11.1.TC455_06.29190527_0\n"
],
[
"!pip3 install timm faiss tqdm numpy\n!pip3 install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio==0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html\n!sudo apt-get install libomp-dev",
"Collecting timm\n Downloading timm-0.5.4-py3-none-any.whl (431 kB)\n\u001b[K |████████████████████████████████| 431 kB 13.1 MB/s \n\u001b[?25hCollecting faiss\n Downloading faiss-1.5.3-cp37-cp37m-manylinux1_x86_64.whl (4.7 MB)\n\u001b[K |████████████████████████████████| 4.7 MB 31.8 MB/s \n\u001b[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (4.63.0)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (1.21.5)\nRequirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (from timm) (0.11.1+cu111)\nRequirement already satisfied: torch>=1.4 in /usr/local/lib/python3.7/dist-packages (from timm) (1.10.0+cu111)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch>=1.4->timm) (3.10.0.2)\nRequirement already satisfied: pillow!=8.3.0,>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision->timm) (7.1.2)\nInstalling collected packages: timm, faiss\nSuccessfully installed faiss-1.5.3 timm-0.5.4\nLooking in links: https://download.pytorch.org/whl/cu113/torch_stable.html\nCollecting torch==1.10.2+cu113\n Downloading https://download.pytorch.org/whl/cu113/torch-1.10.2%2Bcu113-cp37-cp37m-linux_x86_64.whl (1821.4 MB)\n\u001b[K |██████████████▋ | 834.1 MB 1.3 MB/s eta 0:12:43tcmalloc: large alloc 1147494400 bytes == 0x55c1ea4ba000 @ 0x7fc7c6910615 0x55c1b0eae3bc 0x55c1b0f8f18a 0x55c1b0eb11cd 0x55c1b0fa3b3d 0x55c1b0f25458 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f252c0 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0eb2f19 0x55c1b0ef6a79 0x55c1b0eb1b32 0x55c1b0f251dd 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f20eae 0x55c1b0eb29da 0x55c1b0f21108 0x55c1b0f2002f\n\u001b[K |██████████████████▌ | 1055.7 MB 1.3 MB/s eta 0:09:31tcmalloc: large alloc 1434370048 bytes == 0x55c22eb10000 @ 0x7fc7c6910615 0x55c1b0eae3bc 0x55c1b0f8f18a 0x55c1b0eb11cd 0x55c1b0fa3b3d 0x55c1b0f25458 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f252c0 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0eb2f19 0x55c1b0ef6a79 0x55c1b0eb1b32 0x55c1b0f251dd 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f20eae 0x55c1b0eb29da 0x55c1b0f21108 0x55c1b0f2002f\n\u001b[K |███████████████████████▌ | 1336.2 MB 1.3 MB/s eta 0:06:11tcmalloc: large alloc 1792966656 bytes == 0x55c1b3942000 @ 0x7fc7c6910615 0x55c1b0eae3bc 0x55c1b0f8f18a 0x55c1b0eb11cd 0x55c1b0fa3b3d 0x55c1b0f25458 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f252c0 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0eb2f19 0x55c1b0ef6a79 0x55c1b0eb1b32 0x55c1b0f251dd 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f20eae 0x55c1b0eb29da 0x55c1b0f21108 0x55c1b0f2002f\n\u001b[K |█████████████████████████████▊ | 1691.1 MB 1.2 MB/s eta 0:01:54tcmalloc: large alloc 2241208320 bytes == 0x55c21e72a000 @ 0x7fc7c6910615 0x55c1b0eae3bc 0x55c1b0f8f18a 0x55c1b0eb11cd 0x55c1b0fa3b3d 0x55c1b0f25458 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f252c0 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0fa4986 0x55c1b0f21350 0x55c1b0eb2f19 0x55c1b0ef6a79 0x55c1b0eb1b32 0x55c1b0f251dd 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 
0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f20eae 0x55c1b0eb29da 0x55c1b0f21108 0x55c1b0f2002f\n\u001b[K |████████████████████████████████| 1821.4 MB 1.2 MB/s eta 0:00:01tcmalloc: large alloc 1821433856 bytes == 0x55c2a408c000 @ 0x7fc7c690f1e7 0x55c1b0ee45d7 0x55c1b0eae3bc 0x55c1b0f8f18a 0x55c1b0eb11cd 0x55c1b0fa3b3d 0x55c1b0f25458 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0eb29da 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0f2002f\ntcmalloc: large alloc 2276794368 bytes == 0x55c31099a000 @ 0x7fc7c6910615 0x55c1b0eae3bc 0x55c1b0f8f18a 0x55c1b0eb11cd 0x55c1b0fa3b3d 0x55c1b0f25458 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21108 0x55c1b0eb29da 0x55c1b0f21108 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0f2002f 0x55c1b0eb2aba 0x55c1b0f21cd4 0x55c1b0f2002f 0x55c1b0eb3151\n\u001b[K |████████████████████████████████| 1821.4 MB 2.6 kB/s \n\u001b[?25hCollecting torchvision==0.11.3+cu113\n Downloading https://download.pytorch.org/whl/cu113/torchvision-0.11.3%2Bcu113-cp37-cp37m-linux_x86_64.whl (24.5 MB)\n\u001b[K |████████████████████████████████| 24.5 MB 8.1 MB/s \n\u001b[?25hCollecting torchaudio==0.10.2+cu113\n Downloading https://download.pytorch.org/whl/cu113/torchaudio-0.10.2%2Bcu113-cp37-cp37m-linux_x86_64.whl (2.9 MB)\n\u001b[K |████████████████████████████████| 2.9 MB 34.6 MB/s \n\u001b[?25hRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.10.2+cu113) (3.10.0.2)\nRequirement already satisfied: pillow!=8.3.0,>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision==0.11.3+cu113) (7.1.2)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchvision==0.11.3+cu113) (1.21.5)\nInstalling collected packages: torch, torchvision, torchaudio\n Attempting uninstall: torch\n Found existing installation: torch 1.10.0+cu111\n Uninstalling torch-1.10.0+cu111:\n Successfully uninstalled torch-1.10.0+cu111\n Attempting uninstall: torchvision\n Found existing installation: torchvision 0.11.1+cu111\n Uninstalling torchvision-0.11.1+cu111:\n Successfully uninstalled torchvision-0.11.1+cu111\n Attempting uninstall: torchaudio\n Found existing installation: torchaudio 0.10.0+cu111\n Uninstalling torchaudio-0.10.0+cu111:\n Successfully uninstalled torchaudio-0.10.0+cu111\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\ntorchtext 0.11.0 requires torch==1.10.0, but you have torch 1.10.2+cu113 which is incompatible.\u001b[0m\nSuccessfully installed torch-1.10.2+cu113 torchaudio-0.10.2+cu113 torchvision-0.11.3+cu113\nReading package lists... Done\nBuilding dependency tree \nReading state information... 
Done\nThe following additional packages will be installed:\n libomp5\nSuggested packages:\n libomp-doc\nThe following NEW packages will be installed:\n libomp-dev libomp5\n0 upgraded, 2 newly installed, 0 to remove and 39 not upgraded.\nNeed to get 239 kB of archives.\nAfter this operation, 804 kB of additional disk space will be used.\nGet:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libomp5 amd64 5.0.1-1 [234 kB]\nGet:2 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libomp-dev amd64 5.0.1-1 [5,088 B]\nFetched 239 kB in 0s (1,028 kB/s)\ndebconf: unable to initialize frontend: Dialog\ndebconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 2.)\ndebconf: falling back to frontend: Readline\ndebconf: unable to initialize frontend: Readline\ndebconf: (This frontend requires a controlling tty.)\ndebconf: falling back to frontend: Teletype\ndpkg-preconfigure: unable to re-open stdin: \nSelecting previously unselected package libomp5:amd64.\n(Reading database ... 155335 files and directories currently installed.)\nPreparing to unpack .../libomp5_5.0.1-1_amd64.deb ...\nUnpacking libomp5:amd64 (5.0.1-1) ...\nSelecting previously unselected package libomp-dev.\nPreparing to unpack .../libomp-dev_5.0.1-1_amd64.deb ...\nUnpacking libomp-dev (5.0.1-1) ...\nSetting up libomp5:amd64 (5.0.1-1) ...\nSetting up libomp-dev (5.0.1-1) ...\nProcessing triggers for libc-bin (2.27-3ubuntu1.3) ...\n/sbin/ldconfig.real: /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link\n\n"
],
[
"import torch\n\nprint(f'torch.__version__ = {torch.__version__}')\nprint(f'torch.cuda.is_available() = {torch.cuda.is_available()}')\nprint(f'torch.cuda.current_device() = {torch.cuda.current_device()}')\nprint(f'torch.cuda.device(0) = {torch.cuda.device(0)}')\nprint(f'torch.cuda.device_count() = {torch.cuda.device_count()}')\nprint(f'torch.cuda.get_device_name(0) = {torch.cuda.get_device_name(0)}')",
"torch.__version__ = 1.10.2+cu113\ntorch.cuda.is_available() = True\ntorch.cuda.current_device() = 0\ntorch.cuda.device(0) = <torch.cuda.device object at 0x7f17ea3237d0>\ntorch.cuda.device_count() = 1\ntorch.cuda.get_device_name(0) = Tesla P100-PCIE-16GB\n"
],
[
"%cd /content/drive/MyDrive/ColabNotebooks/Transformer/LA-Transformer",
"/content/drive/.shortcut-targets-by-id/19RweVltTTlScqIDv6lHIQzlQezjmyFBN/ColabNotebooks/Transformer/LA-Transformer\n"
]
],
[
[
"# Testing",
"_____no_output_____"
]
],
[
[
"from __future__ import print_function\n\nimport os\nimport time\nimport glob\nimport random\nimport zipfile\nfrom itertools import chain\n\nimport timm\nimport numpy as np\nimport pandas as pd\nfrom PIL import Image\nfrom tqdm.notebook import tqdm\nimport matplotlib.pyplot as plt\nfrom collections import OrderedDict\nfrom sklearn.model_selection import train_test_split\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn import init\nimport torch.optim as optim\nfrom torchvision import models\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch.optim.lr_scheduler import StepLR\nfrom torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader, Dataset\n\nimport faiss\n\n# from LATransformer.model import ClassBlock, LATransformer, LATransformerTest\n# from LATransformer.utils import save_network, update_summary, get_id\n# from LATransformer.metrics import rank1, rank5, rank10, calc_map\n\nfrom osprey import LATransformerTest\n\ndef initilize_device(hardware):\n # os.environ['CUDA_VISIBLE_DEVICES']='1'\n if hardware == \"gpu\":\n device = \"cuda\"\n # if not device.type == \"cpu\":\n print(f'torch.__version__ = {torch.__version__}')\n print(f'torch.cuda.is_available() = {torch.cuda.is_available()}')\n print(f'torch.cuda.current_device() = {torch.cuda.current_device()}')\n print(f'torch.cuda.device(0) = {torch.cuda.device(0)}')\n print(f'torch.cuda.device_count() = {torch.cuda.device_count()}')\n print(f'torch.cuda.get_device_name(0) = {torch.cuda.get_device_name(0)}')\n \n elif hardware == \"cpu\":\n device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\") ## Use if CPU\n print(\"Using cpu\")\n\n else:\n print(\"Choose either gpu or cpu\")\n return None\n\n return device\n\ndevice = initilize_device(\"gpu\")",
"torch.__version__ = 1.10.2+cu113\ntorch.cuda.is_available() = True\ntorch.cuda.current_device() = 0\ntorch.cuda.device(0) = <torch.cuda.device object at 0x00000256EC04E2C8>\ntorch.cuda.device_count() = 1\ntorch.cuda.get_device_name(0) = NVIDIA GeForce GTX 1080\n"
]
],
[
[
"## Load Model",
"_____no_output_____"
]
],
[
[
"batch_size = 8\ngamma = 0.7\nseed = 42\n\n# Load ViT\nvit_base = timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=50)\nvit_base= vit_base.to(device)\n\n# Create La-Transformer\nosprey_model = LATransformerTest(vit_base, lmbd=8).to(device)\n\n# Load LA-Transformer\n# name = \"la_with_lmbd_8\"\n# name = \"la_with_lmbd_8_12-03\"\n# save_path = os.path.join('./model',name,'net_best.pth')\n\n\nname = \"oprey_{}\".format(8)\n\noutput_dir = \"model/\" + name\n\nsave_path = os.path.join(output_dir, \"saves\", \"model_32.pt\")\n\ncheckpoint = torch.load(save_path)\nosprey_model.load_state_dict(checkpoint['model_state_dict'], strict=False)\n\n# # Load LA-Transformer\n# name = \"old_weights\"\n# save_path = os.path.join('./model',name,'small_ds_68_map_net_best.pth')\n\n#Transformer\\model\\old_weights\\small_ds_68_map_net_best.pth\n\n# osprey_model.load_state_dict(torch.load(save_path), strict=False)\n# model.eval()\n\ntransform_query_list = [\n transforms.Resize((224,224), interpolation=3),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n ]\ntransform_gallery_list = [\n transforms.Resize(size=(224,224),interpolation=3), #Image.BICUBIC\n transforms.ToTensor(),\n transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n ]\ndata_transforms = {\n'query': transforms.Compose( transform_query_list ),\n'gallery': transforms.Compose(transform_gallery_list),\n}",
"E:\\Anaconda\\envs\\py37\\lib\\site-packages\\torchvision\\transforms\\transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n"
]
],
[
[
"# Required functions",
"_____no_output_____"
]
],
[
[
"\n# device = initilize_device(\"cpu\")\n# We had to recreate the get_id() func since they assume the pictures are named in a specific manner. \ndef get_id_padel(img_path):\n\n labels = []\n for path, v in img_path:\n filename = os.path.basename(path)\n\n label = filename.split('_')[0]\n labels.append(int(label))\n return labels\n \ndef extract_feature(model,dataloaders):\n \n features = torch.FloatTensor()\n count = 0\n idx = 0\n for data in tqdm(dataloaders):\n img, label = data\n img, label = img.to(device), label.to(device)\n\n output = model(img)\n\n n, c, h, w = img.size()\n \n count += n\n features = torch.cat((features, output.detach().cpu()), 0)\n idx += 1\n return features\n\ndef image_loader(data_dir_path): \n image_datasets = {}\n # data_dir = \"data/The_OspreyChallengerSet\"\n data_dir = data_dir_path\n\n image_datasets['query'] = datasets.ImageFolder(os.path.join(data_dir, 'query'),\n data_transforms['query'])\n image_datasets['gallery'] = datasets.ImageFolder(os.path.join(data_dir, 'gallery'),\n data_transforms['gallery'])\n query_loader = DataLoader(dataset = image_datasets['query'], batch_size=batch_size, shuffle=False)\n gallery_loader = DataLoader(dataset = image_datasets['gallery'], batch_size=batch_size, shuffle=False)\n\n return query_loader, gallery_loader, image_datasets\n\ndef feature_extraction(model, query_loader, gallery_loader):\n # Extract Query Features\n query_feature = extract_feature(model, query_loader)\n\n # Extract Gallery Features\n gallery_feature = extract_feature(model, gallery_loader)\n\n return query_feature, gallery_feature\n\ndef get_labels(image_datasets):\n #Retrieve labels\n gallery_path = image_datasets['gallery'].imgs\n query_path = image_datasets['query'].imgs\n gallery_label = get_id_padel(gallery_path)\n query_label = get_id_padel(query_path)\n\n return gallery_label, query_label\n\ndef calc_gelt_feature(query_feature):\n concatenated_query_vectors = []\n for query in query_feature: \n fnorm = torch.norm(query, p=2, dim=1, keepdim=True)*np.sqrt(14) \n query_norm = query.div(fnorm.expand_as(query)) \n concatenated_query_vectors.append(query_norm.view((-1))) # 14*768 -> 10752\n return concatenated_query_vectors\n\ndef calc_gelt_gallery(gallery_feature):\n concatenated_gallery_vectors = []\n for gallery in gallery_feature: \n fnorm = torch.norm(gallery, p=2, dim=1, keepdim=True) *np.sqrt(14) \n gallery_norm = gallery.div(fnorm.expand_as(gallery)) \n concatenated_gallery_vectors.append(gallery_norm.view((-1))) # 14*768 -> 10752 \n return concatenated_gallery_vectors\n\ndef calc_faiss(concatenated_gallery_vectors, gallery_label):\n index = faiss.IndexIDMap(faiss.IndexFlatIP(10752))\n index.add_with_ids(np.array([t.numpy() for t in concatenated_gallery_vectors]), np.array(gallery_label).astype('int64')) # original \n return index\n\ndef search(query: str, k=1):\n encoded_query = query.unsqueeze(dim=0).numpy()\n top_k = index.search(encoded_query, k)\n return top_k",
"_____no_output_____"
],
[
"def osprey_detect(data_dir_path, osprey_model):\n\n query_loader, gallery_loader, image_datasets = image_loader(data_dir_path=data_dir_path)\n\n query_feature, gallery_feature = feature_extraction(model=osprey_model, query_loader=query_loader, gallery_loader=gallery_loader)\n\n gallery_label, query_label = get_labels(image_datasets)\n\n concatenated_query_vectors = calc_gelt_feature(query_feature)\n concatenated_gallery_vectors = calc_gelt_gallery(gallery_feature)\n\n index = calc_faiss(concatenated_gallery_vectors, gallery_label)\n\n return concatenated_query_vectors, index",
"_____no_output_____"
],
[
"concatenated_query_vectors, index = osprey_detect(\"data/Osprey_eval\", osprey_model)",
"_____no_output_____"
],
[
"#For each vector in the query vector list\n\nfor query in concatenated_query_vectors:\n output = search(query)\n print(f\"Predicted class: {output[1][0][0]} with {output[0][0][0] * 100} % confidence\")",
"Predicted class: 3 with 68.69755983352661 % confidence\nPredicted class: 4 with 62.87815570831299 % confidence\n"
],
[
"##Making new class boy\ndef predictClass(queryVector):\n output = search(queryVector)\n print(f\"Predicted class: {output[1][0][0]} with {output[0][0][0] * 100} % confidence\")\n\n return output[1][0][0]",
"_____no_output_____"
]
],
[
[
"odoijadsoijas",
"_____no_output_____"
]
],
[
[
"#query_loader, gallery_loader, image_datasets = image_loader(data_dir_path=\"data/The_OspreyChallengerSet\")\n\n#load images from folder\nquery_loader, gallery_loader, image_datasets = image_loader(data_dir_path=\"data/bim_bam\")\n\n#extract features\nquery_feature, gallery_feature = feature_extraction(model=osprey_model, query_loader=query_loader)\n\n#get labels from pictures\ngallery_label, query_label = get_labels(image_datasets)\n\n\nconcatenated_query_vectors = calc_gelt_feature(query_feature)\nconcatenated_gallery_vectors = calc_gelt_gallery(gallery_feature)\n\n\nindex = calc_faiss(concatenated_gallery_vectors, gallery_label)",
"_____no_output_____"
],
[
"rank1_score = 0\nrank5_score = 0\nrank10_score = 0\nap = 0\ncount = 0\nfor query, label in zip(concatenated_query_vectors, query_label):\n count += 1\n label = label\n output = search(query, k=10)\n# print(output)\n rank1_score += rank1(label, output) \n rank5_score += rank5(label, output) \n rank10_score += rank10(label, output) \n print(\"Correct: {}, Total: {}, Incorrect: {}\".format(rank1_score, count, count-rank1_score), end=\"\\r\")\n ap += calc_map(label, output)\n\nprint(\"Rank1: {}, Rank5: {}, Rank10: {}, mAP: {}\".format(rank1_score/len(query_feature), \n rank5_score/len(query_feature), \n rank10_score/len(query_feature), ap/len(query_feature))) ",
"Correct: 1, Total: 1, Incorrect: 0\rCorrect: 2, Total: 2, Incorrect: 0\rRank1: 1.0, Rank5: 1.0, Rank10: 1.0, mAP: 0.6766666666666666\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d054a0adfdc91aa72e064677c633ed39a0b06efd | 38,867 | ipynb | Jupyter Notebook | miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb | dcroce/jupyter-book | 9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624 | [
"MIT"
] | null | null | null | miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb | dcroce/jupyter-book | 9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624 | [
"MIT"
] | null | null | null | miscellaneous_notebooks/Markov_Chains/Chain_at_a_Fixed_Time.ipynb | dcroce/jupyter-book | 9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624 | [
"MIT"
] | null | null | null | 27.604403 | 450 | 0.387964 | [
[
[
"# HIDDEN\nfrom datascience import *\nfrom prob140 import *\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\n%matplotlib inline\nimport math\nfrom scipy import stats\nfrom scipy import misc",
"_____no_output_____"
]
],
[
[
"### The Chain at a Fixed Time ###\nLet $X_0, X_1, X_2, \\ldots $ be a Markov Chain with state space $S$. We will start by setting up notation that will help us express our calculations compactly.\n\nFor $n \\ge 0$, let $P_n$ be the distribution of $X_n$. That is,\n\n$$\nP_n(i) = P(X_n = i), ~~~~ i \\in S\n$$\n\nThen the distribution of $X_0$ is $P_0$. This is called the *initial distribution* of the chain.\n\nFor $n \\ge 0$ and $j \\in S$,\n\n\\begin{align*}\nP_{n+1}(j) &= P(X_{n+1} = j) \\\\\n&= \\sum_{i \\in S} P(X_n = i, X_{n+1} = j) \\\\\n&= \\sum_{i \\in S} P(X_n = i)P(X_{n+1} = j \\mid X_n = i) \\\\\n&= \\sum_{i \\in S} P_n(i)P(X_{n+1} = j \\mid X_n = i)\n\\end{align*}\n\nThe conditional probability $P(X_{n+1} = j \\mid X_n = i)$ is called a *one-step transition probability at time $n$*. \n\nFor many chains such as the random walk, these one-step transition probabilities depend only on the states $i$ and $j$, not on the time $n$. For example, for the random walk,\n\n\\begin{equation}\nP(X_{n+1} = j \\mid X_n = i) = \n \\begin{cases} \n \\frac{1}{2} & \\text{if } j = i-1 \\text{ or } j = i+1 \\\\\n 0 & \\text{ otherwise}\n \\end{cases}\n\\end{equation}\n\nfor every $n$. When one-step transition probabilites don't depend on $n$, they are called *stationary* or *time-homogenous*. All the Markov Chains that we will study in this course have time-homogenous transition probabilities.\n\nFor such a chain, define the *one-step transition probability*\n\n$$\nP(i, j) = P(X_{n+1} = j \\mid X_n = i)\n$$",
"_____no_output_____"
],
[
"### The Probability of a Path ###\nGiven that the chain starts at $i$, what is the chance that the next three values are of the chain are $j, k$, and $l$, in that order? \n\nWe are looking for \n$$\nP(X_1 = j, X_2 = k, X_3 = l \\mid X_0 = i)\n$$\n\nBy repeated use of the multiplication rule and the Markov property, this is\n\n$$\nP(X_1 = j, X_2 = k, X_3 = l \\mid X_0 = i) = P(i, j)P(j, k)P(k, l)\n$$\n\nIn the same way, given that you know the starting point, you can find the probability of any path of finite length by multiplying one-step transition probabilities.",
"_____no_output_____"
],
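[
"For instance, for the random walk described at the start of this section, the chance that the chain takes three steps up in a row starting at $i$ is\n\n$$\nP(i, i+1)P(i+1, i+2)P(i+2, i+3) ~ = ~ \\frac{1}{2} \\cdot \\frac{1}{2} \\cdot \\frac{1}{2} ~ = ~ \\frac{1}{8}\n$$",
"_____no_output_____"
],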
[
"### The Distribution of $X_{n+1}$ ###\nBy our calculation at the start of this section,\n\n\\begin{align*}\nP_{n+1}(j) &= P(X_{n+1} = j) \\\\\n&= \\sum_{i \\in S} P_n(i)P(X_{n+1} = j \\mid X_n = i) \\\\\n&= \\sum_{i \\in S} P_n(i)P(i, j)\n\\end{align*}\n\nThe calculation is based on the straightforward observation that for the chain to be at state $j$ at time $n+1$, it had to be at some state $i$ at time $n$ and then get from $i$ to $j$ in one step.",
"_____no_output_____"
],
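[
"It is often convenient to think of this calculation in matrix form: if the one-step transition probabilities $P(i, j)$ are collected into a matrix (the *transition matrix* that appears in the examples below) and $P_n$ is written as a row vector, then the identity above says that $P_{n+1}$ is the row vector $P_n$ multiplied by the transition matrix. Applying the identity repeatedly, $P_n$ is $P_0$ multiplied by the $n$th power of the transition matrix.",
"_____no_output_____"
],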
[
"Let's use all this in examples. You will quickly see that the distribution $P_n$ has interesting properties.",
"_____no_output_____"
],
[
"### Lazy Random Walk on a Circle ###\nLet the state space be five points arranged on a circle. Suppose the process starts at Point 1, and at each step either stays in place with probability 0.5 (and thus is lazy), or moves to one of the two neighboring points with chance 0.25 each, regardless of the other moves. \n\nThis transition behavior can be summed up in a *transition diagram*:\n\n\n\nAt every step, the next move is determined by a random choice from among three options and by the chain's current location, not on how it got to that location. So the process is a Markov chain. Let's call it $X_0, X_1, X_2, \\ldots $.\n\nBy our assumption, the initial distribution $P_0$ puts all the probability on Point 1. It is defined in the cell below. We will be using `prob140` Markov Chain methods based on [Pykov](https://github.com/riccardoscalco/Pykov) written by [Riccardo Scalco](http://riccardoscalco.github.io). Note the use of `states` instead of `values`. Please enter the states in ascending order, for technical reasons that we hope to overcome later in the term.",
"_____no_output_____"
]
],
[
[
"s = np.arange(1, 6)\np = [1, 0, 0, 0, 0]\ninitial = Table().states(s).probability(p)\ninitial",
"_____no_output_____"
]
],
[
[
"The transition probabilities are:\n- For $2 \\le i \\le 4$, $P(i, i) = 0.5$ and $P(i, i-1) = 0.25 = P(i, i+1)$. \n- $P(1, 1) = 0.5$ and $P(1, 5) = 0.25 = P(1, 2)$.\n- $P(5, 5) = 0.5$ and $P(5, 4) = 0.25 = P(5, 1)$.\n\nThese probabilities are returned by the function `circle_walk_probs` that takes states $i$ and $j$ as its arguments.",
"_____no_output_____"
]
],
[
[
"def circle_walk_probs(i, j):\n if i-j == 0:\n return 0.5\n elif abs(i-j) == 1:\n return 0.25\n elif abs(i-j) == 4:\n return 0.25\n else:\n return 0 ",
"_____no_output_____"
]
],
[
[
"All the transition probabilities can be captured in a table, in a process analogous to creating a joint distribution table.",
"_____no_output_____"
]
],
[
[
"trans_tbl = Table().states(s).transition_function(circle_walk_probs)",
"_____no_output_____"
],
[
"trans_tbl",
"_____no_output_____"
]
],
[
[
"Just as when we were constructing joint distribution tables, we can better visualize this as a $5 \\times 5$ table:",
"_____no_output_____"
]
],
[
[
"circle_walk = trans_tbl.toMarkovChain()\ncircle_walk",
"_____no_output_____"
]
],
[
[
"This is called the *transition matrix* of the chain. \n- For each $i$ and $j$, the $(i, j)$ element of the transition matrix is the one-step transition probability $P(i, j)$.\n- For each $i$, the $i$th row of the transition matrix consists of the conditional distribution of $X_{n+1}$ given $X_n = i$.",
"_____no_output_____"
],
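[
"Since each row of the transition matrix is a conditional distribution, every row should sum to 1. Here is a quick sanity check along those lines (a sketch that uses the chain's `column` method, which also appears below, and assumes the columns are labeled by the states `'1'` through `'5'`):\n\n```python\n# sketch: every row of the transition matrix should sum to 1\nrow_sums = sum(circle_walk.column(str(j)) for j in s)\nrow_sums   # expect an array of five 1.0's\n```",
"_____no_output_____"
],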
[
"#### Probability of a Path ####\nWhat's the probability of the path 1, 1, 2, 1, 2? That's the path $X_0 = 1, X_1 = 1, X_2 = 2, X_3 = 1, X_4 = 2$. We know that the chain is starting at 1, so the chance of the path is\n\n$$\n1 \\cdot P(1, 1)P(1, 2)P(2, 1)P(1, 2) = 0.5 \\times 0.25 \\times 0.25 \\times 0.25 = 0.0078125\n$$\n\nThe method `prob_of_path` takes the initial distribution and path as its arguments, and returns the probability of the path:",
"_____no_output_____"
]
],
[
[
"circle_walk.prob_of_path(initial, [1, 1, 2, 1, 2])",
"_____no_output_____"
]
],
[
[
"#### Distribution of $X_n$ ####\nRemember that the chain starts at 1. So $P_0$, the distribution of $X_0$ is:",
"_____no_output_____"
]
],
[
[
"initial",
"_____no_output_____"
]
],
[
[
"We know that $P_1$ must place probability 0.5 at Point 1 and 0.25 each the points 2 and 5. This is confirmed by the `distribution` method that applies to a MarkovChain object. Its first argument is the initial distribution, and its second is the number of steps $n$. It returns a distribution object that is the distribution of $X_n$. ",
"_____no_output_____"
]
],
[
[
"P_1 = circle_walk.distribution(initial, 1)\nP_1",
"_____no_output_____"
]
],
[
[
"What's the probability that the chain is has value 3 at time 2? That's $P_2(3)$ which we can calculate by conditioning on $X_1$:\n\n$$\nP_2(3) = \\sum_{i=1}^5 P_1(i)P(i, 3)\n$$\n\nThe distribution of $X_1$ is $P_1$, given above. Here are those probabilities in an array:",
"_____no_output_____"
]
],
[
[
"P_1.column('Probability')",
"_____no_output_____"
]
],
[
[
"The `3` column of the transition matrix gives us, for each $i$, the chance of getting from $i$ to 3 in one step.",
"_____no_output_____"
]
],
[
[
"circle_walk.column('3')",
"_____no_output_____"
]
],
[
[
"So the probability that the chain has the value 3 at time 2 is $P_2(3)$ which is equal to:",
"_____no_output_____"
]
],
[
[
"sum(P_1.column('Probability')*circle_walk.column('3'))",
"_____no_output_____"
]
],
[
[
"Similarly, $P_2(2)$ is equal to:",
"_____no_output_____"
]
],
[
[
"sum(P_1.column('Probability')*circle_walk.column('2'))",
"_____no_output_____"
]
],
[
[
"And so on. The `distribution` method finds all these probabilities for us.",
"_____no_output_____"
]
],
[
[
"P_2 = circle_walk.distribution(initial, 2)\nP_2",
"_____no_output_____"
]
],
[
[
"At time 3, the chain continues to be much more likely to be at 1, 2, or 5 compared to the other two states. That's because it started at Point 1 and is lazy.",
"_____no_output_____"
]
],
[
[
"P_3 = circle_walk.distribution(initial, 3)\nP_3",
"_____no_output_____"
]
],
[
[
"But by time 10, something interesting starts to emerge.",
"_____no_output_____"
]
],
[
[
"P_10 = circle_walk.distribution(initial, 10)\nP_10",
"_____no_output_____"
]
],
[
[
"The chain is almost equally likely to be at any of the five states. By time 50, it seems to have completely forgotten where it started, and is distributed uniformly on the state space.",
"_____no_output_____"
]
],
[
[
"P_50 = circle_walk.distribution(initial, 50)\nP_50",
"_____no_output_____"
]
],
[
[
"As time passes, this chain gets \"all mixed up\", regardless of where it started. That is perhaps not surprising as the transition probabilities are symmetric over the five states. Let's see what happens when we cut the circle between Points 1 and 5 and lay it out in a line.",
"_____no_output_____"
],
[
"### Reflecting Random Walk ###\nThe state space and transition probabilities remain the same, except when the chain is at the two \"edge\" states.\n- If the chain is at Point 1, then at the next step it either stays there or moves to Point 2 with equal probability: $P(1, 1) = 0.5 = P(1, 2)$.\n- If the chain is at Point 5, then at the next step it either stays there or moves to Point 4 with equal probability: $P(5, 5) = 0.5 = P(5, 4)$.\n\nWe say that there is *reflection* at the boundaries 1 and 5.\n\n",
"_____no_output_____"
]
],
[
[
"def ref_walk_probs(i, j):\n if i-j == 0:\n return 0.5\n elif 2 <= i <= 4:\n if abs(i-j) == 1:\n return 0.25\n else:\n return 0\n elif i == 1:\n if j == 2:\n return 0.5\n else:\n return 0\n elif i == 5:\n if j == 4:\n return 0.5\n else:\n return 0",
"_____no_output_____"
],
[
"trans_tbl = Table().states(s).transition_function(ref_walk_probs)\nrefl_walk = trans_tbl.toMarkovChain()\nprint('Transition Matrix')\nrefl_walk",
"Transition Matrix\n"
]
],
[
[
"Let the chain start at Point 1 as it did in the last example. That initial distribution was defined as `initial`. At time 1, therefore, the chain is either at 1 or 2, and at times 2 and 3 it is likely to still be around 1.",
"_____no_output_____"
]
],
[
[
"refl_walk.distribution(initial, 1)",
"_____no_output_____"
],
[
"refl_walk.distribution(initial, 3)",
"_____no_output_____"
]
],
[
[
"But by time 20, the distribution is settling down:",
"_____no_output_____"
]
],
[
[
"refl_walk.distribution(initial, 20)",
"_____no_output_____"
]
],
[
[
"And by time 100 it has settled into what is called its *steady state*. ",
"_____no_output_____"
]
],
[
[
"refl_walk.distribution(initial, 100)",
"_____no_output_____"
]
],
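[
[
"If the distribution really has settled down, then running the chain for longer should not change it. A quick check (a sketch comparing the time 100 distribution with a later one, using the same `distribution` method as above):\n\n```python\nP_100 = refl_walk.distribution(initial, 100)\nP_200 = refl_walk.distribution(initial, 200)\nnp.allclose(P_100.column('Probability'), P_200.column('Probability'))   # expect True\n```",
"_____no_output_____"
]
],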
[
[
"This steady state distribution isn't uniform. But it is steady. If you increase the amount of time for which the chain has run, you get the same distribution for the value of the chain at that time.\n\nThat's quite remarkable. In the rest of this chapter, we will look more closely at what's going on.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d054b0b0d6e2939f2916ba1a963086d04ce98404 | 25,931 | ipynb | Jupyter Notebook | deep-learning/Keras Tutorials/Arabic-Rootfinder/roots-with-noroots.py.ipynb | AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 352dd6d9a785e22fde0ce53a6b0c2e56f4964950 | [
"Apache-2.0"
] | 3,266 | 2017-08-06T16:51:46.000Z | 2022-03-30T07:34:24.000Z | deep-learning/Keras Tutorials/Arabic-Rootfinder/roots-with-noroots.py.ipynb | AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 352dd6d9a785e22fde0ce53a6b0c2e56f4964950 | [
"Apache-2.0"
] | 150 | 2017-08-28T14:59:36.000Z | 2022-03-11T23:21:35.000Z | deep-learning/Keras Tutorials/Arabic-Rootfinder/roots-with-noroots.py.ipynb | AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 352dd6d9a785e22fde0ce53a6b0c2e56f4964950 | [
"Apache-2.0"
] | 1,449 | 2017-08-06T17:40:59.000Z | 2022-03-31T12:03:24.000Z | 38.302806 | 149 | 0.443793 | [
[
[
"import csv\nfrom pprint import pprint\nimport random\nimport numpy as np\n\nalphabet = ['',\n 'ا', 'ب', 'ت', 'ث','ج','ح', 'خ',\n 'د','ذ','ر','ز', 'س','ش','ص',\n 'ض','ط','ظ','ع','غ','ف','ق',\n 'ك','ل','م','ن','ه','و','ي',\n 'ء','ى','أ','ؤ']\n\ndef xalphabetin(char):\n nums = list(char.encode('utf8'))\n nums[0] = nums[0] - 216\n num = (nums[0] * 256) + nums[1]\n return num\n\ndef alphabetin(char):\n if(char == 'ؤ'):\n return 29\n if(char == 'أ'):\n return 29\n if(char == 'ى'):\n return 1\n if(char == 'ئ'):\n return 1\n \n return alphabet.index(char)\n \ndef alphabetout(num):\n return alphabet[num]\n\ndef binin(dcty):\n x = np.zeros(20*512) # 20 letters max x (from unicode)\n y = np.zeros((4*30) + 1) # 4 letters max y (mapped to alphabet) + 1 \"no root\" flag\n \n lx = 0 # letter index\n for letter in list(dcty['word']):\n ix = (lx*512) + xalphabetin(letter)\n x[ix] = 1\n lx += 1\n \n if dcty['rootsize'] > 0:\n y[(0*30) + dcty['root1']] = 1\n if dcty['rootsize'] > 1:\n y[(1*30) + dcty['root2']] = 1\n if dcty['rootsize'] > 2:\n y[(2*30) + dcty['root3']] = 1\n if dcty['rootsize'] > 3:\n y[(3*30) + dcty['root4']] = 1\n if dcty['rootsize'] == 0:\n y[4*30] = 1 # no root\n\n return np.array([x, y])\n \n\ndef binout(by):\n root = ''\n if by[120] == 1:\n return ''\n \n for yix in range(0, 30):\n if by[yix] == 1:\n lix = yix % 30\n root += alphabetout(lix)\n break\n \n for yix in range(30, 60):\n if by[yix] == 1:\n lix = yix % 30\n root += alphabetout(lix)\n break\n \n for yix in range(60, 90):\n if by[yix] == 1:\n lix = yix % 30\n root += alphabetout(lix)\n break\n \n for yix in range(90, 120):\n if by[yix] == 1:\n lix = yix % 30\n root += alphabetout(lix)\n break\n \n if len(list(root)) == 2:\n root += root[1]\n \n return root\n\ndef transformin(row):\n if(len(row[1]) == 0):\n # null object\n dcty = {\n 'word': row[0],\n 'rootsize': 0,\n 'root1': 0,\n 'root2': 0,\n 'root3': 0,\n 'root4': 0\n }\n binxy = binin(dcty)\n dcty['x'] = binxy[0]\n dcty['y'] = binxy[1]\n return dcty\n \n rootlist = list(row[1])\n rootsize = len(rootlist)\n \n if(len(rootlist) == 2):\n rootlist += [rootlist[1]]\n rootsize = 3\n \n if(rootlist[2] not in alphabet):\n # null object\n dcty = {\n 'word': row[0],\n 'rootsize': 0,\n 'root1': 0,\n 'root2': 0,\n 'root3': 0,\n 'root4': 0\n }\n binxy = binin(dcty)\n dcty['x'] = binxy[0]\n dcty['y'] = binxy[1]\n return dcty\n \n if(len(rootlist) == 3):\n rootlist += [\"\"]\n \n dcty = {\n 'word': row[0],\n 'rootsize': rootsize,\n 'root1': alphabetin(rootlist[0]),\n 'root2': alphabetin(rootlist[1]),\n 'root3': alphabetin(rootlist[2]),\n 'root4': alphabetin(rootlist[3])\n }\n binxy = binin(dcty)\n dcty['x'] = binxy[0]\n dcty['y'] = binxy[1]\n return dcty\n \ndef transformout(data):\n return [data['word'], alphabetout(data['root1']) + alphabetout(data['root2']) + alphabetout(data['root3']) + alphabetout(data['root4'])]\n\ndatain = []\n\nwith open('roots-all.csv') as csvfile:\n readcsv = csv.reader(csvfile, delimiter=',')\n next(readcsv)\n for row in readcsv:\n data = transformin(row)\n \n if(data == False):\n continue\n \n datain += [data]\n\nfor i in range(3):\n r = random.randint(0,len(datain))\n pprint(transformout(datain[r]))\n pprint(datain[r])\n pprint(binout(datain[r]['y']))\n print(\"\\n\")\n",
"['تحصيل', 'حصل']\n{'root1': 6,\n 'root2': 14,\n 'root3': 23,\n 'root4': 0,\n 'rootsize': 3,\n 'word': 'تحصيل',\n 'x': array([0., 0., 0., ..., 0., 0., 0.]),\n 'y': array([0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0.])}\n'حصل'\n\n\n['الأقواس', 'قوس']\n{'root1': 21,\n 'root2': 27,\n 'root3': 12,\n 'root4': 0,\n 'rootsize': 3,\n 'word': 'الأقواس',\n 'x': array([0., 0., 0., ..., 0., 0., 0.]),\n 'y': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0.])}\n'قوس'\n\n\n['بالمتماثلات', 'مثل']\n{'root1': 24,\n 'root2': 4,\n 'root3': 23,\n 'root4': 0,\n 'rootsize': 3,\n 'word': 'بالمتماثلات',\n 'x': array([0., 0., 0., ..., 0., 0., 0.]),\n 'y': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0.])}\n'مثل'\n\n\n"
],
[
"from sklearn.model_selection import train_test_split\n\nX = datain\ny = np.array([d['y'] for d in datain])\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.025)\nDX_train = np.array([d['x'] for d in X_train])\nDX_test = np.array([d['x'] for d in X_test])\n\npprint(np.shape(DX_train))\npprint(np.shape(DX_test))\npprint(np.shape(y_train))\npprint(np.shape(y_test))",
"(9045, 10240)\n(232, 10240)\n(9045, 121)\n(232, 121)\n"
],
[
"from keras.models import Sequential\nfrom keras import regularizers\nfrom keras.layers import Dense\n\nmodel = Sequential()\n\nmodel.add(Dense(8000,\n input_dim=10240,\n kernel_initializer='normal',\n activation='sigmoid'))\n\nmodel.add(Dense(121,\n kernel_initializer='normal',\n activation='softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\nmodel.fit(DX_train, y_train, validation_data=(DX_test, y_test), epochs=7, batch_size=200)\n\nloss_and_metrics = model.evaluate(DX_test, y_test, batch_size=128)\n\npprint(loss_and_metrics)\n",
"Train on 9045 samples, validate on 232 samples\nEpoch 1/7\n9045/9045 [==============================] - 79s 9ms/step - loss: 13.9868 - acc: 0.1014 - val_loss: 10.7173 - val_acc: 0.1638\nEpoch 2/7\n9045/9045 [==============================] - 80s 9ms/step - loss: 10.1244 - acc: 0.2670 - val_loss: 9.2710 - val_acc: 0.2241\nEpoch 3/7\n9045/9045 [==============================] - 73s 8ms/step - loss: 8.9914 - acc: 0.3058 - val_loss: 8.5513 - val_acc: 0.2500\nEpoch 4/7\n9045/9045 [==============================] - 77s 8ms/step - loss: 8.3915 - acc: 0.3350 - val_loss: 8.1864 - val_acc: 0.2500\nEpoch 5/7\n9045/9045 [==============================] - 80s 9ms/step - loss: 8.0765 - acc: 0.3353 - val_loss: 8.1016 - val_acc: 0.2198\nEpoch 6/7\n9045/9045 [==============================] - 83s 9ms/step - loss: 7.8822 - acc: 0.3403 - val_loss: 7.9837 - val_acc: 0.1724\nEpoch 7/7\n9045/9045 [==============================] - 75s 8ms/step - loss: 7.7284 - acc: 0.3422 - val_loss: 7.9079 - val_acc: 0.2198\n232/232 [==============================] - 0s 2ms/step\n[7.90788747524393, 0.21982758723456283]\n"
],
[
"def ytobin(y):\n by = np.zeros(121)\n \n if y[120] == 1:\n by[120] == 1\n return by\n\n by[np.argmax(y[0:30])] = 1\n by[np.argmax(y[30:60]) + 30] = 1\n \n if np.max(y[60:90]) > 0.02:\n by[np.argmax(y[60:90]) + 60] = 1\n if np.max(y[90:120]) > 0.01:\n by[np.argmax(y[90:120]) + 90] = 1\n \n return by\n\ndef chunks(l, n):\n \"\"\"Yield successive n-sized chunks from l.\"\"\"\n for i in range(0, len(l), n):\n yield l[i:i + n]\n\nscore = []\nfor r in range(len(X_test)):\n r_pred = model.predict(DX_test[r:r+1,:])[0]\n if binout(ytobin(r_pred)) == transformout(X_test[r])[1]: \n print(\"Correct: \" + str(transformout(X_test[r])))\n score += [1]\n else:\n print(\"Missed: \" + str(transformout(X_test[r])) + \" Predicted: \" + binout(ytobin(r_pred)))\n score += [0]\n\nprint(\"Score: \" + str(round(100 * (np.sum(score) / len(score)), 1)) + \"%\")",
"Missed: ['سلعك', 'سلع'] Predicted: ككع\nCorrect: ['فليبللني', 'بلل']\nCorrect: ['فخاطفتهن', 'خطف']\nMissed: ['ودلعتم', 'دلع'] Predicted: دعم\nMissed: ['لانجرحت', 'جرح'] Predicted: نجح\nMissed: ['تجادل', 'جدل'] Predicted: ججل\nMissed: ['وليفطنوك', 'فطن'] Predicted: لفط\nMissed: ['مغنطيسيتا', ''] Predicted: غطط\nCorrect: ['وفاصلنكما', 'فصل']\nMissed: ['كشبهه', 'شبه'] Predicted: شهه\nMissed: ['فلتقتسمك', 'قسم'] Predicted: ققت\nCorrect: ['بريد', 'برد']\nMissed: ['ولصورتكم', 'صور'] Predicted: صصر\nMissed: ['كاستعماريات', 'عمر'] Predicted: ععر\nMissed: ['ملاحظة', 'لحظ'] Predicted: ححح\nMissed: ['ييمنونهن', 'يمن'] Predicted: منن\nMissed: ['فتذكارية', 'ذكر'] Predicted: ككر\nCorrect: ['فكرحمتها', 'رحم']\nMissed: ['تسنّن', 'سنن'] Predicted: ننن\nCorrect: ['للهمنهم', 'لهم']\nMissed: ['تشغيل', 'شغل'] Predicted: شيل\nMissed: ['فبتفهي', 'تفه'] Predicted: ففه\nMissed: ['وتعتمدهم', 'عمد'] Predicted: ععد\nMissed: ['وغيلانهما', 'غول'] Predicted: غلل\nMissed: ['وارتيابهن', 'ريب'] Predicted: ررب\nMissed: ['كمادحكم', 'مدح'] Predicted: محح\nMissed: ['تقنية', 'تقن'] Predicted: ققت\nMissed: ['فيضمركن', 'ضمر'] Predicted: مضر\nMissed: ['فخرافتهن', 'خرف'] Predicted: خفف\nMissed: ['حرارة', 'حرر'] Predicted: ررر\nCorrect: ['وسيعيبونهن', 'عيب']\nMissed: ['طابور', ''] Predicted: طبر\nMissed: ['وسأرمح', 'رمح'] Predicted: ررح\nMissed: ['ليأرباه', 'ارب'] Predicted: ريب\nMissed: ['ودشنتن', 'دشن'] Predicted: شنن\nMissed: ['لعقلنتكن', 'عقلن'] Predicted: عقل\nMissed: ['بآلفاتهما', 'الف'] Predicted: افف\nMissed: ['ويصححانه', 'صحح'] Predicted: ححح\nCorrect: ['ضَمّ', 'ضمم']\nMissed: ['فلسعادينكن', 'سعدن'] Predicted: ععد\nMissed: ['وتفرملني', 'فرمل'] Predicted: ففل\nMissed: ['بمستلمها', 'سلم'] Predicted: بلل\nCorrect: ['وفضيلكن', 'فضل']\nMissed: ['لليمين', 'يمن'] Predicted: ميم\nMissed: ['فكتحويلي', 'حول'] Predicted: ححل\nCorrect: ['فلمعقولتي', 'عقل']\nMissed: ['مضلّع', 'ضلع'] Predicted: ضعع\nMissed: ['وتستجمعان', 'جمع'] Predicted: تجع\nMissed: ['فكالمطرانين', 'مطرن'] Predicted: قطط\nMissed: ['كسنفرتهن', ''] Predicted: كفر\nMissed: ['كالإرهابيتين', 'رهب'] Predicted: كرر\nMissed: ['عميم', 'عمم'] Predicted: ممم\nMissed: ['وقريباتك', 'قرب'] Predicted: ققب\nMissed: ['احتياطية', 'حوط'] Predicted: حيط\nMissed: ['كزنجيكم', 'زنج'] Predicted: زجج\nMissed: ['بممسكاتكما', 'مسك'] Predicted: ممم\nCorrect: ['ناقل', 'نقل']\nMissed: ['فبفردوسيها', ''] Predicted: ففر\nCorrect: ['جُسَيم', 'جسم']\nMissed: ['لأظفاركم', 'ظفر'] Predicted: ظظر\nMissed: ['الحرف', 'حرف'] Predicted: حفف\nMissed: ['توعيكما', 'وعا'] Predicted: عكع\nMissed: ['فكمتضافركم', 'ضفر'] Predicted: عضف\nMissed: ['كسكائرنا', 'سكر'] Predicted: ككر\nMissed: ['فبدراقكن', 'درق'] Predicted: برق\nMissed: ['فتيمماك', 'يمم'] Predicted: ممم\nCorrect: ['سيرضعك', 'رضع']\nCorrect: ['قرار', 'قرر']\nMissed: ['فزربتماهما', 'زرب'] Predicted: رزب\nCorrect: ['تعددية', 'عدد']\nMissed: ['لغيرتماهما', 'غير'] Predicted: غيم\nMissed: ['لتوريتي', 'ورا'] Predicted: ووت\nMissed: ['المستقبلي', 'قبل'] Predicted: مقل\nMissed: ['واستضفتمانا', 'ضيف'] Predicted: وضف\nMissed: ['فلمستقتلته', 'قتل'] Predicted: مقل\nCorrect: ['وبغنهن', 'غنن']\nMissed: ['فلتيجانكم', 'توج'] Predicted: ميج\nCorrect: ['معياري', 'عير']\nMissed: ['التعاونيً', 'عون'] Predicted: عوت\nMissed: ['كبديعيي', 'بدع'] Predicted: بيع\nMissed: ['فبأجمتكن', ''] Predicted: ججم\nMissed: ['إجابة', 'جوب'] Predicted: جبب\nMissed: ['بالاستغلال', 'غلل'] Predicted: بسل\nMissed: ['وحانتيكن', ''] Predicted: حوت\nCorrect: ['فحفاريهم', 'حفر']\nMissed: ['لأسخنوا', 'سخن'] Predicted: سسن\nMissed: ['فكمتكدر', 'كدر'] Predicted: ككر\nMissed: ['ولمشاتلهن', 'شتل'] Predicted: 
شمل\nCorrect: ['تناقشهم', 'نقش']\nMissed: ['لتحاماه', 'حما'] Predicted: حمم\nMissed: ['وكتفضلنا', 'فضل'] Predicted: كضل\nMissed: ['منسلخهن', 'سلخ'] Predicted: نسخ\nCorrect: ['معاكستاكما', 'عكس']\nMissed: ['ينجيانه', 'نجو'] Predicted: ننج\nCorrect: ['قانون', 'قنن']\nMissed: ['خزن', 'خزن'] Predicted: خزل\nCorrect: ['منظّمة', 'نظم']\nMissed: ['نلتبس', 'لبس'] Predicted: نبس\nMissed: ['يكلبهم', 'كلب'] Predicted: كبب\nCorrect: ['يصانعونني', 'صنع']\nMissed: ['أوياك', 'اوي'] Predicted: ويل\nCorrect: ['رائعتيهما', 'روع']\nCorrect: ['بمجدالهن', 'جدل']\nMissed: ['أبديتن', 'بدو'] Predicted: بيت\nCorrect: ['فكوصيفتينا', 'وصف']\nCorrect: ['التتابع', 'تبع']\nMissed: ['فمونتما', 'مون'] Predicted: موم\nCorrect: ['وظيفية', 'وظف']\nCorrect: ['الرسالة', 'رسل']\nMissed: ['فللدهقان', 'دهق'] Predicted: لقق\nMissed: ['لأدبتنا', 'ادب'] Predicted: بدت\nMissed: ['وكطموحك', 'طمح'] Predicted: طوح\nMissed: ['نافذة', 'نفذ'] Predicted: نفف\nMissed: ['فلألوثك', 'لوث'] Predicted: للل\nMissed: ['جنوحك', 'جنح'] Predicted: جحح\nCorrect: ['ولشرائعه', 'شرع']\nMissed: ['فجأتك', 'فجء'] Predicted: جكت\nMissed: ['مخفي', 'خفا'] Predicted: خفف\nMissed: ['لتعليتهن', 'علو'] Predicted: ععل\nMissed: ['لتندهشن', 'دهش'] Predicted: ندش\nMissed: ['المواقع', 'وقع'] Predicted: ووع\nCorrect: ['مراسم', 'رسم']\nMissed: ['توارث', 'ورث'] Predicted: وور\nCorrect: ['وبعصارها', 'عصر']\nCorrect: ['تحميل', 'حمل']\nMissed: ['فبقاعتهم', 'قوع'] Predicted: ققع\nMissed: ['كقمحيتين', 'قمح'] Predicted: قحح\nCorrect: ['موضع', 'وضع']\nMissed: ['موجز', 'وجز'] Predicted: وزز\nMissed: ['فتسجراهما', 'سجر'] Predicted: ججر\nMissed: ['فلمستنبطتكم', 'نبط'] Predicted: مسب\nMissed: ['فبكنعانينا', ''] Predicted: بكع\nCorrect: ['فسيقويهما', 'قوا']\nCorrect: ['منزلة', 'نزل']\nMissed: ['ويشاكون', 'شوك'] Predicted: شيي\nCorrect: ['موزع', 'وزع']\nMissed: ['لملاءتها', 'ملء'] Predicted: للل\nMissed: ['ولظئرك', ''] Predicted: ظظر\nMissed: ['فستوشح', 'وشح'] Predicted: ووح\nMissed: ['باعتباطيتهن', 'عبط'] Predicted: عبب\nCorrect: ['سيحصدنهم', 'حصد']\nMissed: ['فتحنطوها', 'حنط'] Predicted: ححط\nMissed: ['هذا', ''] Predicted: هور\nCorrect: ['أعداد', 'عدد']\nMissed: ['ومنبطحاك', 'بطح'] Predicted: نبح\nMissed: ['ثنائيا', 'ثني'] Predicted: ثيي\nMissed: ['فلتتخللنا', 'خلل'] Predicted: تلل\nMissed: ['فلتسحنوا', 'سحن'] Predicted: محح\nMissed: ['فيقتادانا', 'قود'] Predicted: ققد\nMissed: ['فكمماطلتي', 'مطل'] Predicted: ممط\nCorrect: ['مختلفة', 'خلف']\nMissed: ['فصلابتي', 'صلب'] Predicted: صصب\nMissed: ['بخاناتكم', ''] Predicted: خنت\nMissed: ['للصليبي', 'صلب'] Predicted: صصب\nMissed: ['فتجميليتيك', 'جمل'] Predicted: ججل\nMissed: ['حواشي', 'حشو'] Predicted: حوش\nMissed: ['فابدع', 'بدع'] Predicted: بعع\nMissed: ['فعريقتاك', 'عرق'] Predicted: عقت\nMissed: ['يلاج', 'لجج'] Predicted: ججج\nMissed: ['محسوس', 'حسس'] Predicted: سسس\nMissed: ['فستتنصر', 'نصر'] Predicted: قصر\nMissed: ['وليثقفوكم', 'ثقف'] Predicted: ثفف\nCorrect: ['عدً', 'عدد']\nMissed: ['فبإبطييك', 'ابط'] Predicted: ببط\nCorrect: ['تفاصيل', 'فصل']\nMissed: ['فكلقائهن', 'لقا'] Predicted: ققل\nMissed: ['وسنكافها', 'كفا'] Predicted: نفف\nMissed: ['فهجانيهم', 'هجن'] Predicted: جهن\nCorrect: ['فيخلعاهم', 'خلع']\nMissed: ['لسلوتها', 'سلو'] Predicted: لول\nMissed: ['كآريكم', ''] Predicted: اكر\nMissed: ['فيزهداك', 'زهد'] Predicted: زيد\nMissed: ['البلد', 'بلد'] Predicted: بدل\nMissed: ['بمتحاملين', 'حمل'] Predicted: ححم\nMissed: ['البائعة', 'بيع'] Predicted: ببع\nMissed: ['وسيستوعب', 'وعب'] Predicted: سسع\nCorrect: ['وعجائزكم', 'عجز']\nMissed: ['فنهزتاكما', 'نهز'] Predicted: نزز\nCorrect: ['مرسل', 'رسل']\nMissed: ['للقرعاء', 'قرع'] Predicted: 
ققع\nMissed: ['ولصفحتكم', 'صفح'] Predicted: صصح\nMissed: ['لريمتاك', 'ريم'] Predicted: ررم\nCorrect: ['ويشعلانكما', 'شعل']\nMissed: ['فستوسعه', 'وسع'] Predicted: سسع\nMissed: ['لرشدتماهم', 'رشد'] Predicted: شرم\nMissed: ['وستستشيرونهن', 'شور'] Predicted: عير\nMissed: ['الوظيفية', 'وظف'] Predicted: وفف\nCorrect: ['مجردة', 'جرد']\nMissed: ['سيوبخونه', 'وبخ'] Predicted: بوخ\nMissed: ['لاستهلهم', 'هلل'] Predicted: مهل\nMissed: ['فكأوابيكما', 'اوب'] Predicted: كوب\nCorrect: ['خصائص', 'خصص']\nMissed: ['تنقيح', 'نقح'] Predicted: ققح\nCorrect: ['فنهبوني', 'نهب']\nMissed: ['القياس', 'قيس'] Predicted: قسس\nMissed: ['وأسطورتاكم', ''] Predicted: ططر\nMissed: ['لتدغدغهن', 'دغدغ'] Predicted: غدد\nMissed: ['وبسراقهم', 'سرق'] Predicted: سسق\nMissed: ['طفو', 'طفو'] Predicted: طول\nMissed: ['وسنلخبطه', 'لخبط'] Predicted: نبط\nCorrect: ['لأغربنكم', 'غرب']\nCorrect: ['سترسخها', 'رسخ']\nMissed: ['فكمسكم', 'مسس'] Predicted: مسم\nMissed: ['بيضية', 'بيض'] Predicted: ضضت\nMissed: ['ايقاف', 'وقف'] Predicted: قفف\nMissed: ['فلمتطفلي', 'طفل'] Predicted: مطف\nMissed: ['تقضيانكما', 'قضا'] Predicted: قضن\nCorrect: ['الأصل', 'اصل']\nMissed: ['وبتخلية', 'خلو'] Predicted: خلل\nMissed: ['وسيتجهزن', 'جهز'] Predicted: وجج\nMissed: ['فلحملهن', 'حمل'] Predicted: ححل\nMissed: ['فمهرجانكم', ''] Predicted: مجر\nMissed: ['فلترعنكما', 'رعن'] Predicted: ررع\nMissed: ['مُجمِّع', 'جمع'] Predicted: ججع\nMissed: ['وبتوجسها', 'وجس'] Predicted: وجج\n"
],
[
"x1 = np.array([transformin([\"أرحام\",\"\"])['x']])\nr_pred = model.predict([x1])[0]\nprint(binout(ytobin(r_pred)))",
"ررم\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d054bc927af889f919e84b841833170f4d9133ec | 62,702 | ipynb | Jupyter Notebook | .ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb | adrientalbot/twitter-sentiment-training | 4669a538dc1644b04605f4741a351d2e1401183f | [
"MIT"
] | null | null | null | .ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb | adrientalbot/twitter-sentiment-training | 4669a538dc1644b04605f4741a351d2e1401183f | [
"MIT"
] | null | null | null | .ipynb_checkpoints/sentiment_analysis_twitter_comments-checkpoint.ipynb | adrientalbot/twitter-sentiment-training | 4669a538dc1644b04605f4741a351d2e1401183f | [
"MIT"
] | null | null | null | 34.470588 | 11,045 | 0.450305 | [
[
[
"# Twitter Sentiment Analysis",
"_____no_output_____"
]
],
[
[
"import twitter\nimport pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"### Source",
"_____no_output_____"
],
[
"https://towardsdatascience.com/creating-the-twitter-sentiment-analysis-program-in-python-with-naive-bayes-classification-672e5589a7ed",
"_____no_output_____"
],
[
"### Authenticating Twitter API",
"_____no_output_____"
]
],
[
[
"# Authenticating our twitter API credentials\ntwitter_api = twitter.Api(consumer_key='f2ujCRaUnQJy4PoiZvhRQL4n4',\n consumer_secret='EjBSQirf7i83T7CX90D5Qxgs9pTdpIGIsVAhHVs5uvd0iAcw5V',\n access_token_key='1272989631404015616-5XMQkx65rKfQU87UWAh40cMf4aCzSq',\n access_token_secret='emfWcF8fyfqoyywfPCJnz4jXt6DFXfndro59UK9IMAMgy')\n\n# Test authentication to make sure it was successful\nprint(twitter_api.VerifyCredentials())",
"{\"created_at\": \"Tue Jun 16 20:29:26 +0000 2020\", \"default_profile\": true, \"default_profile_image\": true, \"id\": 1272989631404015616, \"id_str\": \"1272989631404015616\", \"name\": \"Nicola Osrin\", \"profile_background_color\": \"F5F8FA\", \"profile_image_url\": \"http://abs.twimg.com/sticky/default_profile_images/default_profile_normal.png\", \"profile_image_url_https\": \"https://abs.twimg.com/sticky/default_profile_images/default_profile_normal.png\", \"profile_link_color\": \"1DA1F2\", \"profile_sidebar_border_color\": \"C0DEED\", \"profile_sidebar_fill_color\": \"DDEEF6\", \"profile_text_color\": \"333333\", \"profile_use_background_image\": true, \"screen_name\": \"NicolaOsrin\"}\n"
]
],
[
[
"### Building the Test Set",
"_____no_output_____"
]
],
[
[
"#We first build the test set, consisting of only 100 tweets for simplicity. \n#Note that we can only download 180 tweets every 15min.\ndef buildTestSet(search_keyword):\n try:\n tweets_fetched = twitter_api.GetSearch(search_keyword, count = 100)\n \n print(\"Fetched \" + str(len(tweets_fetched)) + \" tweets for the term \" + search_keyword)\n \n return [{\"text\":status.text, \"label\":None} for status in tweets_fetched]\n except:\n print(\"Unfortunately, something went wrong..\")\n return None",
"_____no_output_____"
],
[
"#Testing out fetching the test set. The function below prints out the first 5 tweets in our test set.\nsearch_term = input(\"Enter a search keyword:\")\ntestDataSet = buildTestSet(search_term)\n\nprint(testDataSet[0:4])",
"Enter a search keyword:Peace\nFetched 100 tweets for the term Peace\n[{'text': 'Dear Lord,\\n\\nToday, may You heal the parts of me that need healing. Restore and transform me in unfathomable ways. W… https://t.co/bU8X1fMNjP', 'label': None}, {'text': 'You’ll never walk alone because the Almighty is always with you. Don’t you forget that. Always have hope in the fac… https://t.co/XaqjzzUKVa', 'label': None}, {'text': 'In this timeframe Trump was cozying up to Putin AND reaching out to the Taliban for his peace deal. He is not respe… https://t.co/6xq1VxoFg1', 'label': None}, {'text': '@MichaelKugelman @MoeedNj The imbroglio wld continue for an definite period bcz India is hardly given to reason. It… https://t.co/Jbos4GRMJ6', 'label': None}]\n"
],
[
"testDataSet[0]",
"_____no_output_____"
],
[
"#df = pd.DataFrame(list())\n#df.to_csv('tweetDataFile.csv')",
"_____no_output_____"
]
],
[
[
"### Building the Training Set",
"_____no_output_____"
],
[
"We will be using a downloadable training set, consisting of 5,000 tweets. These tweets have already been labelled as positive/negative. We use this training set to calculate the posterior probabilities of each word appearing and its respective sentiment.",
"_____no_output_____"
]
],
[
[
"#As Twitter doesn't allow the storage of the tweets on personal drives, we have to create a function to download\n#the relevant tweets that will be matched to the Tweet IDs and their labels, which we have.\n\ndef buildTrainingSet(corpusFile, tweetDataFile, size):\n import csv\n import time\n \n count = 0\n corpus = []\n \n with open(corpusFile,'r') as csvfile:\n lineReader = csv.reader(csvfile,delimiter=',', quotechar=\"\\\"\")\n for row in lineReader:\n if count <= size: \n corpus.append({\"tweet_id\":row[2], \"label\":row[1], \"topic\":row[0]})\n count += 1\n else: \n break\n\n rate_limit = 180\n sleep_time = 900/180\n \n trainingDataSet = []\n \n for tweet in corpus:\n try:\n status = twitter_api.GetStatus(tweet[\"tweet_id\"])\n print(\"Tweet fetched\" + status.text)\n tweet[\"text\"] = status.text\n trainingDataSet.append(tweet)\n time.sleep(sleep_time) \n except: \n continue\n # now we write them to the empty CSV file\n with open(tweetDataFile,'w') as csvfile:\n linewriter = csv.writer(csvfile,delimiter=',',quotechar=\"\\\"\")\n for tweet in trainingDataSet:\n try:\n linewriter.writerow([tweet[\"tweet_id\"], tweet[\"text\"], tweet[\"label\"], tweet[\"topic\"]])\n except Exception as e:\n print(e)\n return trainingDataSet",
"_____no_output_____"
],
[
"#This function is used to download the actual tweets. It takes hours to run and we only need to run it once\n#in order to get all 5,000 training tweets. The 'size' parameter below is the number of tweets that we want to\n#download. If 5,000 => set size=5,000\n\n'''\ncorpusFile = \"corpus.csv\"\ntweetDataFile = \"tweetDataFile.csv\"\n\ntrainingData = buildTrainingSet(corpusFile, tweetDataFile, 5000)\n'''\n\n#When this code stops running, we will have a tweetDataFile.csv full of the tweets that we downloaded.",
"_____no_output_____"
],
[
"#This line counts the number of tweets and their labels in the Corpus.csv file that we originally downloaded\ncorp = pd.read_csv(\"corpus.csv\", header = 0 , names = ['topic','label', 'tweet_id'] )\ncorp['label'].value_counts()",
"_____no_output_____"
],
[
"#As a check, we look at the first 5 lines in our new tweetDataFile.csv\ntrainingData_copied = pd.read_csv(\"tweetDataFile.csv\", header = None, names = ['tweet_id', 'text', 'label', 'topic'])\ntrainingData_copied.head()",
"_____no_output_____"
],
[
"len(trainingData_copied)",
"_____no_output_____"
],
[
"#We check the number of tweets by each label in our training set\ntrainingData_copied['label'].value_counts()",
"_____no_output_____"
],
[
"df = trainingData_copied.copy()\nlst_labels = df['label'].unique()\ncount_rows_keep = df['label'].value_counts().min()\n\nneutral_df = df[df['label'] == 'neutral'].sample(n= count_rows_keep , random_state = 3)\nirrelevant_df = df[df['label'] == 'irrelevant'].sample(n= count_rows_keep , random_state = 2)\nnegative_df = df[df['label'] == 'negative'].sample(n= count_rows_keep , random_state = 3)\npositive_df = df[df['label'] == 'positive'].sample(n= count_rows_keep , random_state = 3)\n\nlst_df = [neutral_df, irrelevant_df, negative_df, positive_df]\n\ntrainingData_copied = pd.concat(lst_df)\ntrainingData_copied['label'].value_counts()",
"_____no_output_____"
],
[
"'''\ndef oversample(df):\n lst_labels = df['label'].unique()\n for x in lst_labels:\n if len(df[df['label'] == x]) < df['label'].value_counts().max():\n df=df.append(df[df['label'] == x]*((df['label'].value_counts().max())/ len(df[df['label'] == 'x']))) \n return df\n'''",
"_____no_output_____"
],
[
"'''\ndef undersample(df):\n lst_labels = df['label'].unique()\n for x in lst_labels:\n if len(df[df['label'] == 'x']) > df['label'].value_counts().min():\n count_rows_keep = df['label'].value_counts().min()\n sample = df[df['label'] == 'x'].sample(n= count_rows_keep , random_state = 1)\n index_drop = pd.concat([df[df['label'] == 'x'], sample]).drop_duplicates(keep=False).index\n df = df.drop(index_drop)\n return df\n'''",
"_____no_output_____"
],
[
"trainingData_copied = trainingData_copied.to_dict('records')",
"_____no_output_____"
]
],
[
[
"### Pre-processing",
"_____no_output_____"
],
[
"Here we use the NLTK library to filter for keywords and remove irrelevant words in tweets. We also remove punctuation and things like images (emojis) as they cannot be classified using this model.",
"_____no_output_____"
]
],
[
[
"import re #a library that makes parsing strings and modifying them more efficient\nfrom nltk.tokenize import word_tokenize\nfrom string import punctuation \nfrom nltk.corpus import stopwords \nimport nltk #Natural Processing Toolkit that takes care of any processing that we need to perform on text \n #to change its form or extract certain components from it.\n \n#nltk.download('popular') #We need this if certain nltk libraries are not installed. \n\nclass PreProcessTweets:\n def __init__(self):\n self._stopwords = set(stopwords.words('english') + list(punctuation) + ['AT_USER','URL'])\n \n def processTweets(self, list_of_tweets):\n processedTweets=[]\n for tweet in list_of_tweets:\n processedTweets.append((self._processTweet(tweet[\"text\"]),tweet[\"label\"]))\n return processedTweets\n \n def _processTweet(self, tweet):\n tweet = tweet.lower() # convert text to lower-case\n tweet = re.sub('((www\\.[^\\s]+)|(https?://[^\\s]+))', 'URL', tweet) # remove URLs\n tweet = re.sub('@[^\\s]+', 'AT_USER', tweet) # remove usernames\n tweet = re.sub(r'#([^\\s]+)', r'\\1', tweet) # remove the # in #hashtag\n tweet = word_tokenize(tweet) # remove repeated characters (helloooooooo into hello)\n return [word for word in tweet if word not in self._stopwords]",
"_____no_output_____"
],
[
"#Here we call the function to pre-process both our training and our test set. \ntweetProcessor = PreProcessTweets()\npreprocessedTrainingSet = tweetProcessor.processTweets(trainingData_copied)\npreprocessedTestSet = tweetProcessor.processTweets(testDataSet)",
"_____no_output_____"
]
],
[
[
"### Building the Naive Bayes Classifier",
"_____no_output_____"
],
[
"We apply a classifier based on Bayes' Theorem, hence the name. It allows us to find the posterior probability of an event occuring (in this case that event being the sentiment- positive/neutral or negative) is reliant on another probabilistic background that we know. \n\nThe posterior probability is calculated as follows:\n$P(A|B) = \\frac{P(B|A)\\times P(A)}{P(B)}$\n\nThe final sentiment is assigned based on the highest probability of the tweet falling in each one.",
"_____no_output_____"
],
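To make the formula above concrete, here is a minimal, self-contained sketch of naive Bayes scoring with word-presence features. The label counts, word counts and vocabulary size are made-up toy numbers for illustration only; they are not taken from the tweet corpus, and this is not the classifier the notebook trains (that is done later with nltk).

```python
import math

# Toy statistics (assumed for illustration): how often each label occurs,
# and how often each word occurs in tweets carrying that label.
label_counts = {"positive": 40, "negative": 60}
word_counts = {
    "positive": {"great": 15, "awful": 2},
    "negative": {"great": 3, "awful": 25},
}
vocab_size = 1000  # assumed vocabulary size, used for Laplace smoothing

def log_posterior(words, label):
    """log P(label) + sum of log P(word | label), i.e. an unnormalised log posterior."""
    total_tweets = sum(label_counts.values())
    score = math.log(label_counts[label] / total_tweets)  # log prior
    total_words = sum(word_counts[label].values())
    for w in words:
        count = word_counts[label].get(w, 0)
        score += math.log((count + 1) / (total_words + vocab_size))  # smoothed likelihood
    return score

tweet = ["great", "match"]
best_label = max(label_counts, key=lambda lbl: log_posterior(tweet, lbl))
print(best_label)  # the label with the highest posterior wins
```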
[
"#### To read more about Bayes Classifier in the context of classification:\nhttps://nlp.stanford.edu/IR-book/html/htmledition/naive-bayes-text-classification-1.html",
"_____no_output_____"
],
[
"### Build the vocabulary",
"_____no_output_____"
]
],
[
[
"#Here we attempt to build a vocabulary (a list of words) of all words present in the training set.\n\nimport nltk \n\ndef buildVocabulary(preprocessedTrainingData):\n all_words = []\n \n for (words, sentiment) in preprocessedTrainingData:\n all_words.extend(words)\n\n wordlist = nltk.FreqDist(all_words)\n word_features = wordlist.keys()\n \n return word_features\n\n#This function generates a list of all words (all_words) and then turns it into a frequency distribution (wordlist)\n#The word_features is a list of distinct words, with the key being the frequency of each one.",
"_____no_output_____"
]
],
[
[
"### Matching tweets against our vocabulary",
"_____no_output_____"
],
[
"Here we go through all the words in the training set (i.e. our word_features list), comparing every word against the tweet at hand, associating a number with the word following:\n\nlabel 1 (true): if word in vocabulary occurs in tweet\n\nlabel 0 (false): if word in vocabulary does not occur in tweet",
"_____no_output_____"
]
],
[
[
"def extract_features(tweet):\n tweet_words = set(tweet)\n features = {}\n for word in word_features:\n features['contains(%s)' % word] = (word in tweet_words)\n return features ",
"_____no_output_____"
]
],
[
[
"### Building our feature vector",
"_____no_output_____"
]
],
[
[
"word_features = buildVocabulary(preprocessedTrainingSet)\ntrainingFeatures = nltk.classify.apply_features(extract_features, preprocessedTrainingSet)",
"_____no_output_____"
]
],
[
[
"This feature vector shows if a particular tweet contains a certain word out of all the words present in the corpus in the training data + label (positive, negative or neutral) of the tweet.\n\nWe will input the feature vector in the Naive Bayes Classifier, which will calculate the posterior probability given the prior probability that a randomly chosen observation is associated with a certain label, and the likelihood of the outcome/label given the presence of this word (density function of X that comes for observation that comes from the k class/label)",
"_____no_output_____"
],
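As a quick sanity check (assuming the vocabulary and feature cells above have been run), one possible way to peek at a single (features, label) pair is:

```python
# Inspect the first (features, label) pair produced by apply_features.
sample_features, sample_label = trainingFeatures[0]

# Listing only the features that are True keeps the output readable,
# since most of the 'contains(word)' flags will be False for any one tweet.
present = [name for name, value in sample_features.items() if value]
print(sample_label, present[:10])
```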
[
"### Train the Naives Bayes Classifier",
"_____no_output_____"
]
],
[
[
"#This line trains our Bayes Classifier\nNBayesClassifier = nltk.NaiveBayesClassifier.train(trainingFeatures)\n",
"_____no_output_____"
]
],
[
[
"## Test Classifier ",
"_____no_output_____"
]
],
[
[
"#We now run the classifier and test it on 100 tweets previously downloaded in the test set, on our specified keyword.\n\nNBResultLabels = [NBayesClassifier.classify(extract_features(tweet[0])) for tweet in preprocessedTestSet]\n\n# get the majority vote\nif NBResultLabels.count('positive') > NBResultLabels.count('negative'):\n print(\"Overall Positive Sentiment\")\n print(\"Positive Sentiment Percentage = \" + str(100*NBResultLabels.count('positive')/len(NBResultLabels)) + \"%\")\nelse: \n print(\"Overall Negative Sentiment\")\n print(\"Negative Sentiment Percentage = \" + str(100*NBResultLabels.count('negative')/len(NBResultLabels)) + \"%\")\n print(\"Positive Sentiment Percentage = \" + str(100*NBResultLabels.count('positive')/len(NBResultLabels)) + \"%\")\n print(\"Number of negative comments = \" + str(NBResultLabels.count('negative')))\n print(\"Number of positive comments = \" + str(NBResultLabels.count('positive')))\n print(\"Number of neutral comments = \" + str(NBResultLabels.count('neutral')))\n print(\"Number of irrelevant comments = \" + str(NBResultLabels.count('irrelevant')))",
"Overall Negative Sentiment\nNegative Sentiment Percentage = 18.0%\nPositive Sentiment Percentage = 15.0%\nNumber of negative comments = 18\nNumber of positive comments = 15\nNumber of neutral comments = 67\nNumber of irrelevant comments = 0\n"
],
[
"len(preprocessedTestSet)",
"_____no_output_____"
],
[
"import plotly.graph_objects as go\n\nsentiment = [\"Negative\",\"Positive\",\"Neutral\", \"Irrelevant\"]\n\nfig = go.Figure([go.Bar(x=sentiment, y=[str(NBResultLabels.count('negative')), str(NBResultLabels.count('positive')), str(NBResultLabels.count('neutral')), str(NBResultLabels.count('irrelevant'))])])\nfig.update_layout(title_text='Sentiment Results for Specific Keyword')\n\nfig.update_layout(template = 'simple_white',\n title_text='Twitter Sentiment Results', \n yaxis=dict(\n title='Percentage (%)',\n titlefont_size=16,\n tickfont_size=14,) ,\n \n \n)\n\nfig.show()\n",
"_____no_output_____"
]
],
[
[
"### TBC:\n- Retrieve tweets about keyword, not from keyword (username)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
d054c1c851c5d98fc0294b3f74d3f813941f6be0 | 27,598 | ipynb | Jupyter Notebook | yds_mapping_2012_ds.ipynb | mohadelrezk/open-gov-DataHandling-traffic | a2df3ff176cd9947026a59d9ef77d1f286895dd4 | [
"MIT"
] | null | null | null | yds_mapping_2012_ds.ipynb | mohadelrezk/open-gov-DataHandling-traffic | a2df3ff176cd9947026a59d9ef77d1f286895dd4 | [
"MIT"
] | null | null | null | yds_mapping_2012_ds.ipynb | mohadelrezk/open-gov-DataHandling-traffic | a2df3ff176cd9947026a59d9ef77d1f286895dd4 | [
"MIT"
] | null | null | null | 85.442724 | 499 | 0.405935 | [
[
[
"#yds data mapping 2012 (eastbound and westbound)",
"_____no_output_____"
]
],
[
[
"<h4>This script is to map 2012 galway traffic data (bridge 1)</h4>",
"_____no_output_____"
]
],
[
[
"#python list to store csv data as mapping suggest\n#Site No\tDataset\tSurvey Company\tClient\tProject Reference\tMethod of Survey\tAddress\tLatitude\tLongtitude\tEasting\tNorthing\tDate From\tDate To\tTime From\tTime To\tObservations\tWeather\tJunction Type\tVehicle Type\tDirection\tCount\n#Site No,Dataset,Survey Company,Client,Project Reference,Method of Survey,Address,Latitude,Longtitude,Easting,Northing,Date From,Date To,Time From,Time To,Observations,Weather,Junction Type,Vehicle Type,Direction,Count\nheader=[\"Site No\",\"Dataset\",\"Survey Company\",\"Client\",\"Project Reference\",\"Method of Survey\",\"Address\",\"Latitude\",\"Longtitude\",\"Easting\",\"Northing\",\n\"Date From\",\"Date To\",\"Time From\",\"Time To\",\"Observations\",\"Weather\",\"Junction Type\",\"Vehicle Type\",\"Direction\",\"Count\"]\nfull_data_template = [\"\",\"Galway 2016 Br. 1\",\"Idaso Ltd.\",\"Galway City Council\",\"2016 Annual Survey\",\"JTC\",\"Quincentenary Bridge\",53.282696,-9.06065,495956.4,5903720.6,\"\",\"\",\"\",\"\",\"Nothing to report\",\"Sunny and generally dry but there were some light showers\",\"Link\",\"\",\"\",\"\"]\ndata_template = [\"\",\"Galway 2012 Br. 1\",\"\",\"Galway City Council\",\"\",\"\",\"Quincentenary Bridge\",53.282696,-9.06065,495956.4,5903720.6,\"\",\"\",\"\",\"\",\"\",\"\",\"Link\",\"\",\"\",\"\"] \ndirections_alphabet = [\"\", \"\", \"\", \"\", \"\", \"\", \"A TO F\", \"A TO E\", \"A TO D\", \"A TO C\", \"A TO B\", \"A TO A\", \"B TO A\", \"B TO F\", \"B TO E\", \"B TO D\", \"B TO C\", \"B TO B\", \"C TO B\", \"C TO A\", \"C TO F\", \"C TO E\", \"C TO D\", \"C TO C\", \"D TO C\", \"D TO B\", \"D TO A\", \"D TO F\", \"D TO E\", \"D TO D\", \"E TO D\", \"E TO C\", \"E TO B\", \"E TO A\", \"E TO F\", \"E TO E\", \"F TO E\", \"F TO D\", \"F TO C\", \"F TO B\", \"F TO A\", \"F TO F\"]\noutputfile_name=\"data/2012/mapped-final/bridge1_2012_eastbound_verified.csv\"\nvich_type = [\"Motorcycles\",\"Cars\",\"LGV\",\"HGV\",\"Buses\"]\ndirections = [\"Westbound\",\"Eastbound\"]\ncounts_in_rows = [3,5,7,9,11]\n#times_hourly = [\"00:00\",\"01:00\",\"02:00\",\"03:00\",\"04:00\",\"05:00\",\"06:00\",\"07:00\",\"08:00\",\"08:00\",\"09:00\",\"10:00\",\"11:00\"] \n\n#Read csv file data row by row\n#this file wil only fill sections (0,11,12,13,14,19,20,21)\nimport csv\nwith open('data/2012/refined/Br1_Eastbound_2012.csv', 'rb') as source:\n #write data again acoording to the schema\n #import csv\n with open(outputfile_name, 'w+') as output:\n\n csv_sourcereader = csv.reader(source, delimiter=',', quotechar='\\\"')\n\n outputwriter = csv.writer(output, delimiter=',', quotechar='\\\"')\n #putting the header\n outputwriter.writerow(header)\n #counter to scape file headers\n c = 0\n #list to get all possible readings\n quinque_data = []\n \n #csv reader object to list\n sourcereader = list(csv_sourcereader)\n \n for r in xrange (0,len(sourcereader)):\n \n #print ', '.join(row)\n print sourcereader[r]\n \n import copy\n #lget both possible directions (A-B, B-A)\n #data_A_B = copy.deepcopy(data_template)\n #data_B_A = copy.deepcopy(data_template)\n data = copy.deepcopy(data_template)\n\n #print data\n \n if c > 1 :\n for x in xrange(0,5): \n #a-b\n #data_A_B[0]=row[0] # Site NO\n #data_A_B[11]=row[2] # date from\n #data_A_B[12]=row[2] # date to\n #data_A_B[13]=row[3] # time from\n #data_A_B[14]=row[4] # time to\n #data_A_B[18]=row[5] # Vehicle Type\n\n #b-a\n #data_B_A[0]=row[0] # Site NO\n #data_B_A[11]=row[2] # date from\n #data_B_A[12]=row[2] # date to\n #data_B_A[13]=row[3] # time from\n 
#data_B_A[14]=row[4] # time to\n #data_B_A[18]=row[5] # Vehicle Type\n\n data[0]=\"\" # Site NO\n data[11]=sourcereader[r][0] # date from\n data[12]=sourcereader[r][0] # date to\n data[13]=\"\\'\"+str(sourcereader[r][1]) # time from\n #last one to avoid index out range\n if sourcereader[r][1] != \"23:00\":\n data[14]=\"\\'\"+str(sourcereader[r+1][1]) # time to\n elif sourcereader[r][1] == \"23:00\":\n data[14]=\"\\'24:00\" # time to\n data[18]=vich_type[x] # Vehicle Type\n data[19]=sourcereader[r][13] # direction\n data[20]=sourcereader[r][counts_in_rows[x]] # count\n \n #appending data row to the 5 rows batch\n quinque_data.append(copy.deepcopy(data))\n \n for data_row in quinque_data:\n outputwriter.writerow(data_row)\n \n c = c + 1\n #print data\n #del data_B_A [:]\n #del data_A_B[:]\n \n del data[:]\n del quinque_data [:]\n",
"['Date From', 'Time', 'Total', 'Bin 1', 'Bin 1', 'Bin 2', 'Bin 2', 'Bin 3', 'Bin 3', 'Bin 4', 'Bin 4', 'Bin 5', 'Bin 5', 'dir']\n['12/11/12', 'Begin', 'Vol.', 'Motorcycles', '%', 'Cars', '%', 'LGV', '%', 'HGV', '%', 'Buses', '%', 'Eastbound']\n['12/11/12', '00:00', '98', '1', '1.02', '92', '93.88', '3', '3.06', '2', '2.04', '0', '0', 'Eastbound']\n['12/11/12', '01:00', '41', '0', '0', '34', '82.93', '3', '7.32', '4', '9.76', '0', '0', 'Eastbound']\n['12/11/12', '02:00', '22', '0', '0', '15', '68.18', '3', '13.64', '4', '18.18', '0', '0', 'Eastbound']\n['12/11/12', '03:00', '35', '0', '0', '33', '94.29', '1', '2.86', '1', '2.86', '0', '0', 'Eastbound']\n['12/11/12', '04:00', '61', '1', '1.64', '44', '72.13', '12', '19.67', '4', '6.56', '0', '0', 'Eastbound']\n['12/11/12', '05:00', '172', '4', '2.33', '137', '79.65', '17', '9.88', '13', '7.56', '1', '0.58', 'Eastbound']\n['12/11/12', '06:00', '492', '4', '0.81', '437', '88.82', '31', '6.3', '20', '4.07', '0', '0', 'Eastbound']\n['12/11/12', '07:00', '1107', '12', '1.08', '979', '88.44', '52', '4.7', '64', '5.78', '0', '0', 'Eastbound']\n['12/11/12', '08:00', '1593', '25', '1.57', '1423', '89.33', '48', '3.01', '97', '6.09', '0', '0', 'Eastbound']\n['12/11/12', '09:00', '1286', '26', '2.02', '1147', '89.19', '37', '2.88', '74', '5.75', '2', '0.16', 'Eastbound']\n['12/11/12', '10:00', '1054', '18', '1.71', '892', '84.63', '72', '6.83', '72', '6.83', '0', '0', 'Eastbound']\n['12/11/12', '11:00', '1041', '10', '0.96', '893', '85.78', '69', '6.63', '66', '6.34', '3', '0.29', 'Eastbound']\n['12/11/12', '12:00', '1100', '14', '1.27', '946', '86', '70', '6.36', '69', '6.27', '1', '0.09', 'Eastbound']\n['12/11/12', '13:00', '1084', '5', '0.46', '961', '88.65', '51', '4.7', '64', '5.9', '3', '0.28', 'Eastbound']\n['12/11/12', '14:00', '887', '8', '0.9', '764', '86.13', '51', '5.75', '63', '7.1', '1', '0.11', 'Eastbound']\n['12/11/12', '15:00', '1217', '17', '1.4', '1052', '86.44', '76', '6.24', '72', '5.92', '0', '0', 'Eastbound']\n['12/11/12', '16:00', '1318', '15', '1.14', '1182', '89.68', '59', '4.48', '61', '4.63', '1', '0.08', 'Eastbound']\n['12/11/12', '17:00', '1213', '13', '1.07', '1113', '91.76', '38', '3.13', '48', '3.96', '1', '0.08', 'Eastbound']\n['12/11/12', '18:00', '1055', '10', '0.95', '965', '91.47', '33', '3.13', '46', '4.36', '1', '0.09', 'Eastbound']\n['12/11/12', '19:00', '764', '6', '0.79', '692', '90.58', '38', '4.97', '28', '3.66', '0', '0', 'Eastbound']\n['12/11/12', '20:00', '665', '0', '0', '612', '92.03', '25', '3.76', '28', '4.21', '0', '0', 'Eastbound']\n['12/11/12', '21:00', '536', '2', '0.37', '490', '91.42', '25', '4.66', '18', '3.36', '1', '0.19', 'Eastbound']\n['12/11/12', '22:00', '321', '1', '0.31', '295', '91.9', '16', '4.98', '9', '2.8', '0', '0', 'Eastbound']\n['12/11/12', '23:00', '209', '0', '0', '194', '92.82', '8', '3.83', '4', '1.91', '3', '1.44', 'Eastbound']\n['13/11/12', '00:00', '82', '0', '0', '79', '96.34', '3', '3.66', '0', '0', '0', '0', 'Eastbound']\n['13/11/12', '01:00', '34', '0', '0', '31', '91.18', '2', '5.88', '0', '0', '1', '2.94', 'Eastbound']\n['13/11/12', '02:00', '21', '0', '0', '17', '80.95', '2', '9.52', '2', '9.52', '0', '0', 'Eastbound']\n['13/11/12', '03:00', '37', '0', '0', '29', '78.38', '4', '10.81', '3', '8.11', '1', '2.7', 'Eastbound']\n['13/11/12', '04:00', '62', '1', '1.61', '40', '64.52', '19', '30.65', '1', '1.61', '1', '1.61', 'Eastbound']\n['13/11/12', '05:00', '144', '1', '0.69', '119', '82.64', '20', '13.89', '4', '2.78', '0', '0', 'Eastbound']\n['13/11/12', '06:00', 
'431', '2', '0.46', '389', '90.26', '31', '7.19', '8', '1.86', '1', '0.23', 'Eastbound']\n['13/11/12', '07:00', '1189', '6', '0.5', '1092', '91.84', '51', '4.29', '38', '3.2', '2', '0.17', 'Eastbound']\n['13/11/12', '08:00', '1659', '24', '1.45', '1547', '93.25', '25', '1.51', '60', '3.62', '3', '0.18', 'Eastbound']\n['13/11/12', '09:00', '1407', '15', '1.07', '1250', '88.84', '64', '4.55', '76', '5.4', '2', '0.14', 'Eastbound']\n['13/11/12', '10:00', '1095', '15', '1.37', '930', '84.93', '88', '8.04', '61', '5.57', '1', '0.09', 'Eastbound']\n['13/11/12', '11:00', '1037', '21', '2.03', '875', '84.38', '74', '7.14', '60', '5.79', '7', '0.68', 'Eastbound']\n['13/11/12', '12:00', '1075', '4', '0.37', '937', '87.16', '69', '6.42', '63', '5.86', '2', '0.19', 'Eastbound']\n['13/11/12', '13:00', '1074', '11', '1.02', '951', '88.55', '50', '4.66', '59', '5.49', '3', '0.28', 'Eastbound']\n['13/11/12', '14:00', '1159', '16', '1.38', '1008', '86.97', '71', '6.13', '62', '5.35', '2', '0.17', 'Eastbound']\n['13/11/12', '15:00', '1309', '16', '1.22', '1146', '87.55', '75', '5.73', '72', '5.5', '0', '0', 'Eastbound']\n['13/11/12', '16:00', '1411', '28', '1.98', '1241', '87.95', '75', '5.32', '66', '4.68', '1', '0.07', 'Eastbound']\n['13/11/12', '17:00', '1287', '10', '0.78', '1203', '93.47', '21', '1.63', '53', '4.12', '0', '0', 'Eastbound']\n['13/11/12', '18:00', '1233', '11', '0.89', '1164', '94.4', '16', '1.3', '42', '3.41', '0', '0', 'Eastbound']\n['13/11/12', '19:00', '792', '4', '0.51', '719', '90.78', '39', '4.92', '29', '3.66', '1', '0.13', 'Eastbound']\n['13/11/12', '20:00', '744', '3', '0.4', '678', '91.13', '33', '4.44', '30', '4.03', '0', '0', 'Eastbound']\n['13/11/12', '21:00', '607', '1', '0.16', '574', '94.56', '15', '2.47', '16', '2.64', '1', '0.16', 'Eastbound']\n['13/11/12', '22:00', '362', '2', '0.55', '331', '91.44', '17', '4.7', '11', '3.04', '1', '0.28', 'Eastbound']\n['13/11/12', '23:00', '202', '0', '0', '188', '93.07', '8', '3.96', '6', '2.97', '0', '0', 'Eastbound']\n['14/11/12', '00:00', '95', '0', '0', '90', '94.74', '4', '4.21', '1', '1.05', '0', '0', 'Eastbound']\n['14/11/12', '01:00', '39', '0', '0', '36', '92.31', '2', '5.13', '1', '2.56', '0', '0', 'Eastbound']\n['14/11/12', '02:00', '17', '0', '0', '14', '82.35', '2', '11.76', '1', '5.88', '0', '0', 'Eastbound']\n['14/11/12', '03:00', '25', '0', '0', '23', '92', '2', '8', '0', '0', '0', '0', 'Eastbound']\n['14/11/12', '04:00', '45', '0', '0', '27', '60', '14', '31.11', '3', '6.67', '1', '2.22', 'Eastbound']\n['14/11/12', '05:00', '147', '1', '0.68', '126', '85.71', '15', '10.2', '5', '3.4', '0', '0', 'Eastbound']\n['14/11/12', '06:00', '420', '2', '0.48', '370', '88.1', '28', '6.67', '19', '4.52', '1', '0.24', 'Eastbound']\n['14/11/12', '07:00', '1108', '12', '1.08', '990', '89.35', '52', '4.69', '51', '4.6', '3', '0.27', 'Eastbound']\n['14/11/12', '08:00', '1598', '24', '1.5', '1468', '91.86', '33', '2.07', '73', '4.57', '0', '0', 'Eastbound']\n['14/11/12', '09:00', '1465', '25', '1.98', '1344', '90.43', '26', '2.06', '69', '5.45', '1', '0.08', 'Eastbound']\n['14/11/12', '10:00', '995', '13', '1.31', '839', '84.32', '74', '7.44', '66', '6.63', '3', '0.3', 'Eastbound']\n['14/11/12', '11:00', '982', '20', '2.04', '844', '85.95', '70', '7.13', '43', '4.38', '5', '0.51', 'Eastbound']\n['14/11/12', '12:00', '1148', '11', '0.96', '981', '85.45', '86', '7.49', '67', '5.84', '3', '0.26', 'Eastbound']\n['14/11/12', '13:00', '1185', '13', '1.1', '1026', '86.58', '78', '6.58', '64', '5.4', '4', '0.34', 'Eastbound']\n['14/11/12', 
'14:00', '1202', '11', '0.92', '1058', '88.02', '71', '5.91', '60', '4.99', '2', '0.17', 'Eastbound']\n['14/11/12', '15:00', '1389', '16', '1.24', '1224', '87.2', '62', '4.81', '84', '6.52', '3', '0.23', 'Eastbound']\n['14/11/12', '16:00', '1549', '11', '0.76', '1404', '89.99', '52', '3.59', '77', '5.31', '5', '0.35', 'Eastbound']\n['14/11/12', '17:00', '1517', '20', '1.41', '1414', '92.73', '15', '1.06', '67', '4.73', '1', '0.07', 'Eastbound']\n['14/11/12', '18:00', '1062', '12', '1.39', '964', '88.63', '32', '3.71', '53', '6.15', '1', '0.12', 'Eastbound']\n['14/11/12', '19:00', '914', '4', '0.44', '822', '89.93', '46', '5.03', '42', '4.6', '0', '0', 'Eastbound']\n['14/11/12', '20:00', '775', '4', '0.52', '706', '91.1', '30', '3.87', '35', '4.52', '0', '0', 'Eastbound']\n['14/11/12', '21:00', '619', '2', '0.32', '569', '91.92', '26', '4.2', '21', '3.39', '1', '0.16', 'Eastbound']\n['14/11/12', '22:00', '373', '0', '0', '351', '94.1', '14', '3.75', '8', '2.14', '0', '0', 'Eastbound']\n['14/11/12', '23:00', '211', '0', '0', '199', '94.31', '8', '3.79', '3', '1.42', '1', '0.47', 'Eastbound']\n['15/11/12', '00:00', '122', '0', '0', '112', '91.8', '4', '3.28', '4', '3.28', '2', '1.64', 'Eastbound']\n['15/11/12', '01:00', '57', '0', '0', '53', '92.98', '3', '5.26', '1', '1.75', '0', '0', 'Eastbound']\n['15/11/12', '02:00', '48', '1', '2.08', '43', '89.58', '3', '6.25', '1', '2.08', '0', '0', 'Eastbound']\n['15/11/12', '03:00', '42', '0', '0', '33', '78.57', '7', '16.67', '1', '2.38', '1', '2.38', 'Eastbound']\n['15/11/12', '04:00', '64', '1', '1.56', '47', '73.44', '13', '20.31', '2', '3.13', '1', '1.56', 'Eastbound']\n['15/11/12', '05:00', '153', '0', '0', '125', '81.7', '21', '13.73', '6', '3.92', '1', '0.65', 'Eastbound']\n['15/11/12', '06:00', '423', '1', '0.24', '373', '88.18', '36', '8.51', '12', '2.84', '1', '0.24', 'Eastbound']\n['15/11/12', '07:00', '1104', '13', '1.18', '972', '88.04', '64', '5.8', '54', '4.89', '1', '0.09', 'Eastbound']\n['15/11/12', '08:00', '1629', '19', '1.17', '1493', '91.65', '42', '2.58', '75', '4.6', '0', '0', 'Eastbound']\n['15/11/12', '09:00', '1227', '15', '1.22', '1102', '89.81', '49', '3.99', '60', '4.89', '1', '0.08', 'Eastbound']\n['15/11/12', '10:00', '997', '3', '0.3', '863', '86.56', '90', '9.03', '39', '3.91', '2', '0.2', 'Eastbound']\n['15/11/12', '11:00', '1040', '10', '0.96', '879', '84.52', '96', '9.23', '52', '5', '3', '0.29', 'Eastbound']\n['15/11/12', '12:00', '1093', '17', '1.56', '938', '85.82', '71', '6.5', '66', '6.04', '1', '0.09', 'Eastbound']\n['15/11/12', '13:00', '1143', '13', '1.14', '1002', '87.66', '77', '6.74', '50', '4.37', '1', '0.09', 'Eastbound']\n['15/11/12', '14:00', '1147', '11', '0.96', '1000', '87.18', '76', '6.63', '59', '5.14', '1', '0.09', 'Eastbound']\n['15/11/12', '15:00', '1208', '6', '0.5', '1071', '88.66', '71', '5.88', '57', '4.72', '3', '0.25', 'Eastbound']\n['15/11/12', '16:00', '1425', '17', '1.19', '1262', '88.56', '76', '5.33', '63', '4.42', '7', '0.49', 'Eastbound']\n['15/11/12', '17:00', '1338', '11', '0.82', '1197', '89.46', '73', '5.46', '56', '4.19', '1', '0.07', 'Eastbound']\n['15/11/12', '18:00', '1079', '11', '1.02', '968', '89.71', '58', '5.38', '38', '3.52', '4', '0.37', 'Eastbound']\n['15/11/12', '19:00', '893', '3', '0.34', '819', '91.71', '46', '5.15', '25', '2.8', '0', '0', 'Eastbound']\n['15/11/12', '20:00', '800', '2', '0.25', '739', '92.38', '28', '3.5', '31', '3.88', '0', '0', 'Eastbound']\n['15/11/12', '21:00', '581', '0', '0', '533', '91.74', '28', '4.82', '20', '3.44', '0', '0', 
'Eastbound']\n['15/11/12', '22:00', '427', '1', '0.23', '392', '91.8', '25', '5.85', '9', '2.11', '0', '0', 'Eastbound']\n['15/11/12', '23:00', '214', '0', '0', '201', '93.93', '8', '3.74', '5', '2.34', '0', '0', 'Eastbound']\n['16/11/12', '00:00', '116', '0', '0', '105', '90.52', '10', '8.62', '1', '0.86', '0', '0', 'Eastbound']\n['16/11/12', '01:00', '73', '0', '0', '70', '95.89', '3', '4.11', '0', '0', '0', '0', 'Eastbound']\n['16/11/12', '02:00', '60', '0', '0', '46', '76.67', '9', '15', '5', '8.33', '0', '0', 'Eastbound']\n['16/11/12', '03:00', '62', '1', '1.61', '51', '82.26', '8', '12.9', '2', '3.23', '0', '0', 'Eastbound']\n['16/11/12', '04:00', '66', '0', '0', '44', '66.67', '19', '28.79', '3', '4.55', '0', '0', 'Eastbound']\n['16/11/12', '05:00', '150', '2', '1.33', '124', '82.67', '19', '12.67', '4', '2.67', '1', '0.67', 'Eastbound']\n['16/11/12', '06:00', '381', '1', '0.26', '343', '90.03', '27', '7.09', '9', '2.36', '1', '0.26', 'Eastbound']\n['16/11/12', '07:00', '1036', '8', '0.77', '921', '88.9', '60', '5.79', '44', '4.25', '3', '0.29', 'Eastbound']\n['16/11/12', '08:00', '1590', '27', '1.7', '1417', '89.12', '78', '4.91', '65', '4.09', '3', '0.19', 'Eastbound']\n['16/11/12', '09:00', '1350', '13', '0.96', '1184', '87.7', '82', '6.07', '69', '5.11', '2', '0.15', 'Eastbound']\n['16/11/12', '10:00', '1067', '8', '0.75', '932', '87.35', '74', '6.94', '47', '4.4', '6', '0.56', 'Eastbound']\n['16/11/12', '11:00', '1179', '13', '1.1', '1024', '86.85', '85', '7.21', '55', '4.66', '2', '0.17', 'Eastbound']\n['16/11/12', '12:00', '1225', '15', '1.22', '1058', '86.37', '84', '6.86', '63', '5.14', '5', '0.41', 'Eastbound']\n['16/11/12', '13:00', '1328', '15', '1.13', '1167', '87.88', '74', '5.57', '61', '4.59', '11', '0.83', 'Eastbound']\n['16/11/12', '14:00', '1152', '16', '1.39', '1025', '88.98', '34', '2.95', '77', '6.68', '0', '0', 'Eastbound']\n['16/11/12', '15:00', '1212', '23', '1.9', '1083', '89.36', '19', '1.57', '87', '7.18', '0', '0', 'Eastbound']\n['16/11/12', '16:00', '1485', '16', '1.08', '1314', '88.48', '83', '5.59', '63', '4.24', '9', '0.61', 'Eastbound']\n['16/11/12', '17:00', '1429', '19', '1.33', '1280', '89.57', '58', '4.06', '70', '4.9', '2', '0.14', 'Eastbound']\n['16/11/12', '18:00', '1039', '8', '0.77', '946', '91.05', '43', '4.14', '40', '3.85', '2', '0.19', 'Eastbound']\n['16/11/12', '19:00', '854', '5', '0.59', '774', '90.63', '36', '4.22', '38', '4.45', '1', '0.12', 'Eastbound']\n['16/11/12', '20:00', '727', '3', '0.41', '675', '92.85', '28', '3.85', '21', '2.89', '0', '0', 'Eastbound']\n['16/11/12', '21:00', '498', '0', '0', '472', '94.78', '14', '2.81', '12', '2.41', '0', '0', 'Eastbound']\n['16/11/12', '22:00', '297', '2', '0.67', '278', '93.6', '10', '3.37', '6', '2.02', '1', '0.34', 'Eastbound']\n['16/11/12', '23:00', '190', '0', '0', '174', '91.58', '9', '4.74', '7', '3.68', '0', '0', 'Eastbound']\n['17/11/12', '00:00', '145', '0', '0', '131', '90.34', '7', '4.83', '7', '4.83', '0', '0', 'Eastbound']\n['17/11/12', '01:00', '89', '0', '0', '83', '93.26', '4', '4.49', '2', '2.25', '0', '0', 'Eastbound']\n['17/11/12', '02:00', '53', '0', '0', '45', '84.91', '4', '7.55', '3', '5.66', '1', '1.89', 'Eastbound']\n['17/11/12', '03:00', '55', '0', '0', '47', '85.45', '5', '9.09', '1', '1.82', '2', '3.64', 'Eastbound']\n['17/11/12', '04:00', '85', '2', '2.35', '70', '82.35', '10', '11.76', '2', '2.35', '1', '1.18', 'Eastbound']\n['17/11/12', '05:00', '95', '1', '1.05', '77', '81.05', '11', '11.58', '6', '6.32', '0', '0', 'Eastbound']\n['17/11/12', '06:00', 
'159', '0', '0', '131', '82.39', '19', '11.95', '9', '5.66', '0', '0', 'Eastbound']\n['17/11/12', '07:00', '289', '1', '0.35', '239', '82.7', '36', '12.46', '11', '3.81', '2', '0.69', 'Eastbound']\n['17/11/12', '08:00', '572', '6', '1.05', '491', '85.84', '54', '9.44', '21', '3.67', '0', '0', 'Eastbound']\n['17/11/12', '09:00', '1007', '10', '0.99', '891', '88.48', '65', '6.45', '38', '3.77', '3', '0.3', 'Eastbound']\n['17/11/12', '10:00', '1053', '10', '0.95', '929', '88.22', '56', '5.32', '57', '5.41', '1', '0.09', 'Eastbound']\n['17/11/12', '11:00', '1213', '8', '0.66', '1062', '87.55', '73', '6.02', '69', '5.69', '1', '0.08', 'Eastbound']\n['17/11/12', '12:00', '1281', '16', '1.25', '1115', '87.04', '81', '6.32', '69', '5.39', '0', '0', 'Eastbound']\n['17/11/12', '13:00', '1178', '12', '1.02', '1044', '88.62', '63', '5.35', '59', '5.01', '0', '0', 'Eastbound']\n['17/11/12', '14:00', '1177', '11', '0.93', '1076', '91.42', '43', '3.65', '47', '3.99', '0', '0', 'Eastbound']\n['17/11/12', '15:00', '1115', '7', '0.63', '1000', '89.69', '54', '4.84', '54', '4.84', '0', '0', 'Eastbound']\n['17/11/12', '16:00', '1058', '7', '0.66', '936', '88.47', '60', '5.67', '53', '5.01', '2', '0.19', 'Eastbound']\n['17/11/12', '17:00', '1013', '11', '1.09', '924', '91.21', '29', '2.86', '45', '4.44', '4', '0.39', 'Eastbound']\n['17/11/12', '18:00', '772', '3', '0.39', '713', '92.36', '31', '4.02', '25', '3.24', '0', '0', 'Eastbound']\n['17/11/12', '19:00', '688', '1', '0.15', '635', '92.3', '30', '4.36', '21', '3.05', '1', '0.15', 'Eastbound']\n['17/11/12', '20:00', '569', '4', '0.7', '510', '89.63', '23', '4.04', '32', '5.62', '0', '0', 'Eastbound']\n['17/11/12', '21:00', '372', '1', '0.27', '342', '91.94', '13', '3.49', '16', '4.3', '0', '0', 'Eastbound']\n['17/11/12', '22:00', '270', '1', '0.37', '241', '89.26', '8', '2.96', '20', '7.41', '0', '0', 'Eastbound']\n['17/11/12', '23:00', '208', '1', '0.48', '182', '87.5', '8', '3.85', '17', '8.17', '0', '0', 'Eastbound']\n['18/11/12', '00:00', '126', '0', '0', '118', '93.65', '2', '1.59', '6', '4.76', '0', '0', 'Eastbound']\n['18/11/12', '01:00', '115', '0', '0', '101', '87.83', '4', '3.48', '10', '8.7', '0', '0', 'Eastbound']\n['18/11/12', '02:00', '77', '0', '0', '67', '87.01', '7', '9.09', '3', '3.9', '0', '0', 'Eastbound']\n['18/11/12', '03:00', '51', '0', '0', '43', '84.31', '6', '11.76', '2', '3.92', '0', '0', 'Eastbound']\n['18/11/12', '04:00', '63', '0', '0', '49', '77.78', '9', '14.29', '5', '7.94', '0', '0', 'Eastbound']\n['18/11/12', '05:00', '56', '0', '0', '46', '82.14', '7', '12.5', '3', '5.36', '0', '0', 'Eastbound']\n['18/11/12', '06:00', '89', '0', '0', '79', '88.76', '5', '5.62', '5', '5.62', '0', '0', 'Eastbound']\n['18/11/12', '07:00', '151', '1', '0.66', '136', '90.07', '6', '3.97', '8', '5.3', '0', '0', 'Eastbound']\n['18/11/12', '08:00', '224', '0', '0', '196', '87.5', '14', '6.25', '13', '5.8', '1', '0.45', 'Eastbound']\n['18/11/12', '09:00', '387', '5', '1.29', '353', '91.21', '19', '4.91', '10', '2.58', '0', '0', 'Eastbound']\n['18/11/12', '10:00', '715', '4', '0.56', '634', '88.67', '29', '4.06', '48', '6.71', '0', '0', 'Eastbound']\n['18/11/12', '11:00', '875', '6', '0.69', '807', '92.23', '25', '2.86', '36', '4.11', '1', '0.11', 'Eastbound']\n['18/11/12', '12:00', '1097', '14', '1.28', '985', '89.79', '29', '2.64', '69', '6.29', '0', '0', 'Eastbound']\n['18/11/12', '13:00', '1111', '9', '0.81', '1006', '90.55', '34', '3.06', '61', '5.49', '1', '0.09', 'Eastbound']\n['18/11/12', '14:00', '1121', '9', '0.8', '1031', '91.97', '24', 
'2.14', '56', '5', '1', '0.09', 'Eastbound']\n['18/11/12', '15:00', '1446', '7', '0.48', '1313', '90.8', '36', '2.49', '89', '6.15', '1', '0.07', 'Eastbound']\n['18/11/12', '16:00', '1467', '10', '0.68', '1361', '92.77', '16', '1.09', '79', '5.39', '1', '0.07', 'Eastbound']\n['18/11/12', '17:00', '966', '12', '1.24', '875', '90.58', '31', '3.21', '47', '4.87', '1', '0.1', 'Eastbound']\n['18/11/12', '18:00', '739', '12', '1.62', '659', '89.17', '24', '3.25', '43', '5.82', '1', '0.14', 'Eastbound']\n['18/11/12', '19:00', '632', '0', '0', '570', '90.19', '16', '2.53', '44', '6.96', '2', '0.32', 'Eastbound']\n['18/11/12', '20:00', '556', '5', '0.9', '511', '91.91', '9', '1.62', '28', '5.04', '3', '0.54', 'Eastbound']\n['18/11/12', '21:00', '400', '2', '0.5', '355', '88.75', '10', '2.5', '32', '8', '1', '0.25', 'Eastbound']\n['18/11/12', '22:00', '282', '3', '1.06', '254', '90.07', '6', '2.13', '18', '6.38', '1', '0.35', 'Eastbound']\n['18/11/12', '23:00', '145', '1', '0.69', '129', '88.97', '3', '2.07', '8', '5.52', '4', '2.76', 'Eastbound']\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d054c94f943b83e15ce076773c29bc9ab989cffc | 35,518 | ipynb | Jupyter Notebook | models/Model1/Mod1_Data_Prep.ipynb | Xavian-Brooker/Gawler-Unearthed | 772554a9891a7411feeb26ed34214d14a6139cb6 | [
"MIT"
] | 1 | 2020-08-04T00:23:14.000Z | 2020-08-04T00:23:14.000Z | models/Model1/Mod1_Data_Prep.ipynb | ozntur/Gawler-Unearthed | 772554a9891a7411feeb26ed34214d14a6139cb6 | [
"MIT"
] | null | null | null | models/Model1/Mod1_Data_Prep.ipynb | ozntur/Gawler-Unearthed | 772554a9891a7411feeb26ed34214d14a6139cb6 | [
"MIT"
] | 3 | 2020-08-04T00:23:24.000Z | 2020-08-28T15:02:40.000Z | 35.411765 | 327 | 0.478884 | [
[
[
"## Data Preperation for the first Model\nWelcome to the first notebook. Here we'll process the data from downloading to what we will be using to train our first model - **'Wh’re Art Thee Min’ral?'**.\n\nThe steps we'll be following here are:\n- Downloading the SARIG Geochem Data Package. **(~350 Mb)**\n- Understanding the data columns in our csv of interest.\n- Cleaning and applying some processing.\n- Saving our processed file into a csv.\n- _And seeing some unnecessary memes in between_.\n\nYou can upload this notebook and run it on colab or on Jupyter-Notebook locally.",
"_____no_output_____"
]
],
[
[
"# import the required package - Pandas\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"You can simply download the data by clicking the link [here](https://unearthed-exploresa.s3-ap-southeast-2.amazonaws.com/Unearthed_5_SARIG_Data_Package.zip). You can also download it by simply running the cell down below.\n\nWe recommed you to use **Google Colab** and download it here itself if you have a poor internet connection.\n\n\n\n Colab has a decent internet speed of around **~15-20 Mb/s** which is more than enough for the download.",
"_____no_output_____"
]
],
[
[
"# You can simply download the data by running this cell\n!wget https://unearthed-exploresa.s3-ap-southeast-2.amazonaws.com/Unearthed_5_SARIG_Data_Package.zip",
"--2020-07-26 10:57:12-- https://unearthed-exploresa.s3-ap-southeast-2.amazonaws.com/Unearthed_5_SARIG_Data_Package.zip\nResolving unearthed-exploresa.s3-ap-southeast-2.amazonaws.com (unearthed-exploresa.s3-ap-southeast-2.amazonaws.com)... 52.95.128.54\nConnecting to unearthed-exploresa.s3-ap-southeast-2.amazonaws.com (unearthed-exploresa.s3-ap-southeast-2.amazonaws.com)|52.95.128.54|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 458997620 (438M) [application/zip]\nSaving to: ‘Unearthed_5_SARIG_Data_Package.zip’\n\nUnearthed_5_SARIG_D 100%[===================>] 437.73M 20.7MB/s in 22s \n\n2020-07-26 10:57:35 (19.5 MB/s) - ‘Unearthed_5_SARIG_Data_Package.zip’ saved [458997620/458997620]\n\n"
]
],
[
[
"\n\nHere for extracting, if you wish to use the download file for a later use, than you can first mount your google drive and then extracting the files there. You can read more about mounting Google Drive to colab [here](https://towardsdatascience.com/downloading-datasets-into-google-drive-via-google-colab-bcb1b30b0166).\n\n***Note** - One of the files is really big (~10 Gb) and so it might take some time to extract as well. *Don't think that it's stuck!*",
"_____no_output_____"
]
],
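For completeness, a typical way to mount Drive in Colab looks like the snippet below; the target folder under `/content/drive/MyDrive/` is just an example path, so adjust it to wherever you want the extracted files to live.

```python
# Mount Google Drive so the extracted files persist between Colab sessions.
from google.colab import drive
drive.mount('/content/drive')

# Then point the unzip at a folder on Drive instead of the ephemeral VM disk, e.g.:
# !unzip 'Unearthed_5_SARIG_Data_Package.zip' -d '/content/drive/MyDrive/GeoChemData/'
```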
[
[
"# Let's first create a directory to extract the downloaded zip file.\n!mkdir 'GeoChemData'\n\n# Now let's unzip the files into the data directory that we created.\n!unzip 'Unearthed_5_SARIG_Data_Package.zip' -d 'GeoChemData/'",
"Archive: Unearthed_5_SARIG_Data_Package.zip\n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_dh_core_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_dh_details_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_dh_litho_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_dh_petrophys_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_dh_reference_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_dh_strat_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_fieldobs_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_fieldobs_litho_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_fieldobs_note_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_fieldobs_struct_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_md_commodity_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_md_details_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_md_mineralogy_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_md_reference_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_md_zone_hr_lith_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_md_zone_lith_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_rs_bostr_analys_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_rs_bostr_results_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_rs_chem_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_rs_chem_isotope_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_rs_details_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_rs_geochron_ages_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_rs_geochron_reslt_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_rs_petrology_exp.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_rs_reference_exp.csv \n creating: GeoChemData/SARIG_Data_Package3_Exported06072020/vocabulary_codes_descriptions/\n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/vocabulary_codes_descriptions/chem_method_code_desc.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/vocabulary_codes_descriptions/lithology_code_desc.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/vocabulary_codes_descriptions/petro_type_code_desc.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/vocabulary_codes_descriptions/strat_unit_code_desc.csv \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/vocabulary_codes_descriptions/Stratigraphic Unit Letter Code System - Explanation of the GIS Search Code and Map Symbol.pdf \n inflating: GeoChemData/SARIG_Data_Package3_Exported06072020/vocabulary_codes_descriptions/unit_code_desc.csv \n"
],
[
"# Read the df_details.csv \n# We use unicode_escape as the encoding to avoid etf-8 error.\ndf_details = pd.read_csv('/content/GeoChemData/SARIG_Data_Package3_Exported06072020/sarig_dh_details_exp.csv', encoding= 'unicode_escape')",
"/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py:2718: DtypeWarning: Columns (2) have mixed types.Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n"
],
[
"# Let's view the first few columns\ndf_details.head()",
"_____no_output_____"
],
[
"# Data Column Information\ndf_details.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 321843 entries, 0 to 321842\nData columns (total 51 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 DRILLHOLE_NO 321843 non-null int64 \n 1 DH_NAME 191457 non-null object \n 2 DH_OTHER_NAME 26298 non-null object \n 3 PACE_DH 321843 non-null object \n 4 PACE_ROUND_NO 6535 non-null float64\n 5 REPRESENTATIVE_DH 321843 non-null object \n 6 REPRESENTATIVE_DH_COMMENTS 97696 non-null object \n 7 DH_UNIT_NO 321843 non-null object \n 8 MAX_DRILLED_DEPTH 303597 non-null float64\n 9 MAX_DRILLED_DEPTH_DATE 296142 non-null object \n 10 CORED_LENGTH 51566 non-null float64\n 11 TENEMENT 321843 non-null object \n 12 OPERATOR_CODE 155645 non-null object \n 13 OPERATOR_NAME 155645 non-null object \n 14 TARGET_COMMODITIES 274769 non-null object \n 15 MINERAL_CLASS 321843 non-null object \n 16 PETROLEUM_CLASS 321843 non-null object \n 17 STRATIGRAPHIC_CLASS 321843 non-null object \n 18 ENGINEERING_CLASS 321843 non-null object \n 19 SEISMIC_POINT_CLASS 321843 non-null object \n 20 WATER_WELL_CLASS 321843 non-null object \n 21 WATER_POINT_CLASS 321843 non-null object \n 22 DRILLING_METHODS 235287 non-null object \n 23 STRAT_LOG 321843 non-null object \n 24 LITHO_LOG 321843 non-null object \n 25 PETROPHYSICAL_LOG 321843 non-null object \n 26 GEOCHEMISTRY 321843 non-null object \n 27 PETROLOGY 321843 non-null object \n 28 BIOSTRATIGRAPHY 321843 non-null object \n 29 SPECTRAL_SCANNED 321843 non-null object \n 30 CORE_LIBRARY 321843 non-null object \n 31 REFERENCES 321843 non-null object \n 32 HISTORICAL_DOCUMENTS 321843 non-null object \n 33 COMMENTS 156435 non-null object \n 34 MAP_250000 321843 non-null object \n 35 MAP_100000 321843 non-null object \n 36 MAP_50K_NO 321843 non-null int64 \n 37 SITE_NO 321843 non-null int64 \n 38 EASTING_GDA2020 321843 non-null float64\n 39 NORTHING_GDA2020 321843 non-null float64\n 40 ZONE_GDA2020 321843 non-null int64 \n 41 LONGITUDE_GDA2020 321843 non-null float64\n 42 LATITUDE_GDA2020 321843 non-null float64\n 43 LONGITUDE_GDA94 321843 non-null float64\n 44 LATITUDE_GDA94 321843 non-null float64\n 45 HORIZ_ACCRCY_M 187292 non-null float64\n 46 ELEVATION_M 236945 non-null float64\n 47 INCLINATION 196822 non-null float64\n 48 AZIMUTH 166320 non-null float64\n 49 SURVEY_METHOD_CODE 195778 non-null object \n 50 SURVEY_METHOD 195778 non-null object \ndtypes: float64(13), int64(4), object(34)\nmemory usage: 125.2+ MB\n"
]
],
[
[
"### What columns do we need?\nWe only need the following three columns from this dataframe ->\n- `LONGITUDE_GDA94`: This is the longitude of the mine/mineral location in **EPSG:4283** Co-ordinate Referencing System (CRS). \n\n- `LATITUDE_GDA94`: This is the latitude of the mine/mineral location in **EPSG:4283** Co-ordinate Referencing System (CRS).\n\n- `MINERAL_CLASS`: Mineral Class is a column containing **two unique values (Y/N)** representing if there is any mineralization or not.\n\n> *Note - We are using GDA94 over GDA20 because of the former's standardness.* You can understand more about it our glossary's page [here]().\n\n",
"_____no_output_____"
]
],
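As an optional aside (not part of the original workflow), if you later want to work with these points spatially, one possible way to attach the GDA94 CRS explicitly is with geopandas, assuming it is installed:

```python
# Build a GeoDataFrame from the GDA94 longitude/latitude columns.
# EPSG:4283 is the EPSG code for GDA94.
import geopandas as gpd

gdf = gpd.GeoDataFrame(
    df_details[['MINERAL_CLASS']],
    geometry=gpd.points_from_xy(df_details['LONGITUDE_GDA94'],
                                df_details['LATITUDE_GDA94']),
    crs='EPSG:4283',
)
print(gdf.head())
```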
[
[
"# Here the only relevant data we need is the location and the Mineral Class (Yes/No)\ndf_final = df_details[['LONGITUDE_GDA94','LATITUDE_GDA94', 'MINERAL_CLASS']]\n\n# Drop the rows with null values \ndf_final = df_final.dropna()",
"_____no_output_____"
],
[
"# Lets print out a few rows of the new dataframe.\ndf_final.head()",
"_____no_output_____"
],
[
"# Let's check the data points in both classes\nprint(\"Number of rows with Mineral Class Yes is\", len(df_final.query('MINERAL_CLASS==\"Y\"')))\nprint(\"Number of rows with Mineral Class No is\", len(df_final.query('MINERAL_CLASS==\"N\"')))",
"Number of rows with Mineral Class Yes is 147407\nNumber of rows with Mineral Class No is 174436\n"
]
],
[
[
"The Total Number of rows in the new dataset is **147407 (Y) + 174436 (N) = 321843** which is quite sufficient for training our models over it.\n\nAlso the ratio of Class `'Y'` to Class `'N'` is 1 : 0.8 which is quite _**balanced**_.\n\n",
"_____no_output_____"
],
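One quick way to verify the balance directly from the dataframe, rather than eyeballing the raw counts, is:

```python
# Class proportions instead of raw counts.
print(df_final['MINERAL_CLASS'].value_counts(normalize=True))
# Expected output: roughly 0.54 for 'N' and 0.46 for 'Y',
# i.e. about a 0.85 : 1 ratio of Y to N.
```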
[
"Now that we have our csv, let's go ahead and save our progress into a new csv before the session expires!\n\n",
"_____no_output_____"
]
],
[
[
"# Create a new directory to save the csv.\n!mkdir 'GeoChemData/exported'\n\n# Convert the dataframe into a new csv file.\ndf_final.to_csv('GeoChemData/mod1_unsampled.csv')",
"mkdir: cannot create directory ‘GeoChemData/exported’: File exists\n"
],
[
"# Finally if you are on google colab, you can simply download using ->\nfrom google.colab import files\nfiles.download('GeoChemData/exported/mod1_vectors.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d054ca057bfe5a5f0f9610c69ef44182ac572ab4 | 97,893 | ipynb | Jupyter Notebook | notebooks/hyperopt_on_iris_data.ipynb | jianzhnie/AutoML-Tools | 10ffd2a92458a2d32ecb7b82d5584860e9126801 | [
"Apache-2.0"
] | null | null | null | notebooks/hyperopt_on_iris_data.ipynb | jianzhnie/AutoML-Tools | 10ffd2a92458a2d32ecb7b82d5584860e9126801 | [
"Apache-2.0"
] | null | null | null | notebooks/hyperopt_on_iris_data.ipynb | jianzhnie/AutoML-Tools | 10ffd2a92458a2d32ecb7b82d5584860e9126801 | [
"Apache-2.0"
] | null | null | null | 122.36625 | 23,792 | 0.838272 | [
[
[
"## Hyperopt",
"_____no_output_____"
],
[
"### Iris 数据集",
"_____no_output_____"
],
[
"在本节中,我们将介绍4个使用hyperopt在经典数据集 Iris 上调参的完整示例。我们将涵盖 K 近邻(KNN),支持向量机(SVM),决策树和随机森林。",
"_____no_output_____"
],
[
"对于这项任务,我们将使用经典的Iris数据集,并进行一些有监督的机器学习。数据集有有4个输入特征和3个输出类别。数据被标记为属于类别0,1或2,其映射到不同种类的鸢尾花。输入有4列:萼片长度,萼片宽度,花瓣长度和花瓣宽度。输入的单位是厘米。我们将使用这4个特征来学习模型,预测三种输出类别之一。因为数据由sklearn提供,它有一个很好的DESCR属性,可以提供有关数据集的详细信息。尝试以下代码以获得更多细节信息\n",
"_____no_output_____"
]
],
[
[
"from sklearn import datasets\niris = datasets.load_iris()\n\n\nprint(iris.feature_names) # input names\nprint(iris.target_names) # output names\nprint(iris.DESCR) # everything else",
"['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']\n['setosa' 'versicolor' 'virginica']\n.. _iris_dataset:\n\nIris plants dataset\n--------------------\n\n**Data Set Characteristics:**\n\n :Number of Instances: 150 (50 in each of three classes)\n :Number of Attributes: 4 numeric, predictive attributes and the class\n :Attribute Information:\n - sepal length in cm\n - sepal width in cm\n - petal length in cm\n - petal width in cm\n - class:\n - Iris-Setosa\n - Iris-Versicolour\n - Iris-Virginica\n \n :Summary Statistics:\n\n ============== ==== ==== ======= ===== ====================\n Min Max Mean SD Class Correlation\n ============== ==== ==== ======= ===== ====================\n sepal length: 4.3 7.9 5.84 0.83 0.7826\n sepal width: 2.0 4.4 3.05 0.43 -0.4194\n petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)\n petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)\n ============== ==== ==== ======= ===== ====================\n\n :Missing Attribute Values: None\n :Class Distribution: 33.3% for each of 3 classes.\n :Creator: R.A. Fisher\n :Donor: Michael Marshall (MARSHALL%[email protected])\n :Date: July, 1988\n\nThe famous Iris database, first used by Sir R.A. Fisher. The dataset is taken\nfrom Fisher's paper. Note that it's the same as in R, but not as in the UCI\nMachine Learning Repository, which has two wrong data points.\n\nThis is perhaps the best known database to be found in the\npattern recognition literature. Fisher's paper is a classic in the field and\nis referenced frequently to this day. (See Duda & Hart, for example.) The\ndata set contains 3 classes of 50 instances each, where each class refers to a\ntype of iris plant. One class is linearly separable from the other 2; the\nlatter are NOT linearly separable from each other.\n\n.. topic:: References\n\n - Fisher, R.A. \"The use of multiple measurements in taxonomic problems\"\n Annual Eugenics, 7, Part II, 179-188 (1936); also in \"Contributions to\n Mathematical Statistics\" (John Wiley, NY, 1950).\n - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.\n (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.\n - Dasarathy, B.V. (1980) \"Nosing Around the Neighborhood: A New System\n Structure and Classification Rule for Recognition in Partially Exposed\n Environments\". IEEE Transactions on Pattern Analysis and Machine\n Intelligence, Vol. PAMI-2, No. 1, 67-71.\n - Gates, G.W. (1972) \"The Reduced Nearest Neighbor Rule\". IEEE Transactions\n on Information Theory, May 1972, 431-433.\n - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al\"s AUTOCLASS II\n conceptual clustering system finds 3 classes in the data.\n - Many, many more ...\n"
]
],
[
[
"### K-means",
"_____no_output_____"
],
[
"我们现在将使用hyperopt来找到 K近邻(KNN)机器学习模型的最佳参数。KNN 模型是基于训练数据集中 k 个最近数据点的大多数类别对来自测试集的数据点进行分类。",
"_____no_output_____"
]
],
[
[
"from hyperopt import fmin, tpe, hp, STATUS_OK, Trials\nimport matplotlib.pyplot as plt\nimport numpy as np, pandas as pd\nfrom math import *\nfrom sklearn import datasets\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import cross_val_score\n\n# 数据集导入\niris = datasets.load_iris()\nX = iris.data\ny = iris.target\n\n# 损失函数\ndef hyperopt_train_test(params):\n clf = KNeighborsClassifier(**params)\n return cross_val_score(clf, X, y).mean()\n\n\n# hp.choice(label, options) 其中options应是 python 列表或元组\n# space4nn就是需要输入到损失函数里面的参数\nspace4knn = {\n 'n_neighbors': hp.choice('n_neighbors', range(1,100))\n}\n\n# 定义目标函数\ndef f(params):\n acc = hyperopt_train_test(params)\n return {'loss': -acc, 'status': STATUS_OK}\n\n# Trials对象允许我们在每个时间步存储信息\ntrials = Trials()\n\n# 函数fmin首先接受一个函数来最小化,algo参数指定搜索算法,最大评估次数max_evals\nbest = fmin(f, space4knn, algo=tpe.suggest, max_evals=100, trials=trials)\nprint('best:',best)\nprint('trials:')\n\nfor trial in trials.trials[:2]:\n print(trial)",
"100%|█| 100/100 [00:02<00:00, 34.95it/s, best loss: -0.98000000\nbest: {'n_neighbors': 11}\ntrials:\n{'state': 2, 'tid': 0, 'spec': None, 'result': {'loss': -0.9666666666666668, 'status': 'ok'}, 'misc': {'tid': 0, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'n_neighbors': [0]}, 'vals': {'n_neighbors': [7]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2020, 11, 10, 8, 0, 48, 790000), 'refresh_time': datetime.datetime(2020, 11, 10, 8, 0, 48, 814000)}\n{'state': 2, 'tid': 1, 'spec': None, 'result': {'loss': -0.6599999999999999, 'status': 'ok'}, 'misc': {'tid': 1, 'cmd': ('domain_attachment', 'FMinIter_Domain'), 'workdir': None, 'idxs': {'n_neighbors': [1]}, 'vals': {'n_neighbors': [86]}}, 'exp_key': None, 'owner': None, 'version': 0, 'book_time': datetime.datetime(2020, 11, 10, 8, 0, 48, 883000), 'refresh_time': datetime.datetime(2020, 11, 10, 8, 0, 48, 912000)}\n"
]
],
[
[
"现在让我们看看输出结果的图。y轴是交叉验证分数,x轴是 k 近邻个数。下面是代码和它的图像:",
"_____no_output_____"
]
],
[
[
"f, ax = plt.subplots(1) #, figsize=(10,10))\nxs = [t['misc']['vals']['n_neighbors'] for t in trials.trials]\nys = [-t['result']['loss'] for t in trials.trials]\nax.scatter(xs, ys, s=20, linewidth=0.01, alpha=0.5)\nax.set_title('Iris Dataset - KNN', fontsize=18)\nax.set_xlabel('n_neighbors', fontsize=12)\nax.set_ylabel('cross validation accuracy', fontsize=12)",
"_____no_output_____"
]
],
[
[
"k大于63后,精度会急剧下降。 这是由于数据集中每个类只有50个实例。 因此,让我们通过将“ n_neighbors”的值限制为较小的值来进行深入研究。",
"_____no_output_____"
]
],
[
[
"def hyperopt_train_test(params):\n clf = KNeighborsClassifier(**params)\n return cross_val_score(clf, X, y).mean()\n\nspace4knn = {\n 'n_neighbors': hp.choice('n_neighbors', range(1,50))\n}\n\ndef f(params):\n acc = hyperopt_train_test(params)\n return {'loss': -acc, 'status': STATUS_OK}\n\ntrials = Trials()\nbest = fmin(f, space4knn, algo=tpe.suggest, max_evals=100, trials=trials)\nprint ('best:')\nprint (best)",
"100%|█| 100/100 [00:02<00:00, 38.21it/s, best loss: -0.98000000\nbest:\n{'n_neighbors': 5}\n"
],
[
"f, ax = plt.subplots(1) #, figsize=(10,10))\nxs = [t['misc']['vals']['n_neighbors'] for t in trials.trials]\nys = [-t['result']['loss'] for t in trials.trials]\nax.scatter(xs, ys, s=20, linewidth=0.01, alpha=0.5)\nax.set_title('Iris Dataset - KNN', fontsize=18)\nax.set_xlabel('n_neighbors', fontsize=12)\nax.set_ylabel('cross validation accuracy', fontsize=12)",
"_____no_output_____"
]
],
[
[
"上面的模型没有做任何预处理。所以我们来归一化和缩放特征,看看是否有帮助。用如下代码:",
"_____no_output_____"
]
],
[
[
"# 归一化和缩放特征\nfrom sklearn.preprocessing import normalize, scale\n\niris = datasets.load_iris()\nX = iris.data\ny = iris.target\n\ndef hyperopt_train_test(params):\n X_ = X[:]\n\n if 'normalize' in params:\n if params['normalize'] == 1:\n X_ = normalize(X_)\n del params['normalize']\n\n if 'scale' in params:\n if params['scale'] == 1:\n X_ = scale(X_)\n del params['scale']\n\n clf = KNeighborsClassifier(**params)\n return cross_val_score(clf, X_, y).mean()\n\nspace4knn = {\n 'n_neighbors': hp.choice('n_neighbors', range(1,50)),\n 'scale': hp.choice('scale', [0, 1]),\n 'normalize': hp.choice('normalize', [0, 1])\n}\n\ndef f(params):\n acc = hyperopt_train_test(params)\n return {'loss': -acc, 'status': STATUS_OK}\n\ntrials = Trials()\nbest = fmin(f, space4knn, algo=tpe.suggest, max_evals=100, trials=trials)\nprint('best:',best)",
"100%|█| 100/100 [00:02<00:00, 34.37it/s, best loss: -0.98000000\nbest: {'n_neighbors': 3, 'normalize': 1, 'scale': 0}\n"
]
],
[
[
"绘制参数",
"_____no_output_____"
]
],
[
[
"parameters = ['n_neighbors', 'scale', 'normalize']\ncols = len(parameters)\nf, axes = plt.subplots(nrows=1, ncols=cols, figsize=(15,5))\ncmap = plt.cm.jet\nfor i, val in enumerate(parameters):\n xs = np.array([t['misc']['vals'][val] for t in trials.trials]).ravel()\n ys = [-t['result']['loss'] for t in trials.trials]\n xs, ys = zip(*sorted(zip(xs, ys)))\n ys = np.array(ys)\n axes[i].scatter(xs, ys, s=20, linewidth=0.01, alpha=0.75, c=cmap(float(i)/len(parameters)))",
"'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n"
]
],
[
[
"### 支持向量机(SVM)\n\n由于这是一个分类任务,我们将使用sklearn的SVC类。代码如下:",
"_____no_output_____"
]
],
[
[
"from sklearn.svm import SVC\n\n\ndef hyperopt_train_test(params):\n X_ = X[:]\n\n if 'normalize' in params:\n if params['normalize'] == 1:\n X_ = normalize(X_)\n del params['normalize']\n\n if 'scale' in params:\n if params['scale'] == 1:\n X_ = scale(X_)\n del params['scale']\n\n clf = SVC(**params)\n return cross_val_score(clf, X_, y).mean()\n\n# SVM模型有两个非常重要的参数C与gamma。其中 C是惩罚系数,即对误差的宽容度。\n# c越高,说明越不能容忍出现误差,容易过拟合。C越小,容易欠拟合。C过大或过小,泛化能力变差\nspace4svm = {\n 'C': hp.uniform('C', 0, 20),\n 'kernel': hp.choice('kernel', ['linear', 'sigmoid', 'poly', 'rbf']),\n 'gamma': hp.uniform('gamma', 0, 20),\n 'scale': hp.choice('scale', [0, 1]),\n 'normalize': hp.choice('normalize', [0, 1])\n}\n\ndef f(params):\n acc = hyperopt_train_test(params)\n return {'loss': -acc, 'status': STATUS_OK}\n\ntrials = Trials()\nbest = fmin(f, space4svm, algo=tpe.suggest, max_evals=100, trials=trials)\nprint('best:',best)",
"100%|█| 100/100 [00:08<00:00, 12.02it/s, best loss: -0.98666666\nbest: {'C': 8.238774783515044, 'gamma': 1.1896015071446002, 'kernel': 3, 'normalize': 1, 'scale': 1}\n"
]
],
[
[
"同样,缩放和规范化也无济于事。 核函数的最佳选择是(线性核),最佳C值为1.4168540399911616,最佳gamma为15.04230279483486。 这组参数的分类精度为99.3%。",
"_____no_output_____"
]
],
[
[
"parameters = ['C', 'kernel', 'gamma', 'scale', 'normalize']\ncols = len(parameters)\nf, axes = plt.subplots(nrows=1, ncols=cols, figsize=(20,5))\ncmap = plt.cm.jet\nfor i, val in enumerate(parameters):\n xs = np.array([t['misc']['vals'][val] for t in trials.trials]).ravel()\n ys = [-t['result']['loss'] for t in trials.trials]\n xs, ys = zip(*sorted(zip(xs, ys)))\n axes[i].scatter(xs, ys, s=20, linewidth=0.01, alpha=0.25, c=cmap(float(i)/len(parameters)))\n axes[i].set_title(val)\n axes[i].set_ylim([0.9, 1.0])",
"'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.\n"
]
],
[
[
"### 决策树\n我们将尝试只优化决策树的一些参数,码如下。",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\n\n\ndef hyperopt_train_test(params):\n X_ = X[:]\n if 'normalize' in params:\n if params['normalize'] == 1:\n X_ = normalize(X_)\n del params['normalize']\n\n if 'scale' in params:\n if params['scale'] == 1:\n X_ = scale(X_)\n del params['scale']\n clf = DecisionTreeClassifier(**params)\n return cross_val_score(clf, X, y).mean()\n\nspace4dt = {\n 'max_depth': hp.choice('max_depth', range(1,20)),\n 'max_features': hp.choice('max_features', range(1,5)),\n 'criterion': hp.choice('criterion', [\"gini\", \"entropy\"]),\n 'scale': hp.choice('scale', [0, 1]),\n 'normalize': hp.choice('normalize', [0, 1])\n}\n\ndef f(params):\n acc = hyperopt_train_test(params)\n return {'loss': -acc, 'status': STATUS_OK}\n\ntrials = Trials()\nbest = fmin(f, space4dt, algo=tpe.suggest, max_evals=100, trials=trials)\nprint('best:',best)",
"100%|█| 100/100 [00:01<00:00, 54.98it/s, best loss: -0.97333333\nbest: {'criterion': 0, 'max_depth': 2, 'max_features': 3, 'normalize': 0, 'scale': 0}\n"
]
],
[
[
"### Random Forests\n让我们看看 ensemble 的分类器 随机森林,它只是一组决策树的集合。",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\n\ndef hyperopt_train_test(params):\n X_ = X[:]\n if 'normalize' in params:\n if params['normalize'] == 1:\n X_ = normalize(X_)\n del params['normalize']\n if 'scale' in params:\n if params['scale'] == 1:\n X_ = scale(X_)\n del params['scale']\n clf = RandomForestClassifier(**params)\n return cross_val_score(clf, X, y).mean()\n\nspace4rf = {\n 'max_depth': hp.choice('max_depth', range(1,20)),\n 'max_features': hp.choice('max_features', range(1,5)),\n 'n_estimators': hp.choice('n_estimators', range(1,20)),\n 'criterion': hp.choice('criterion', [\"gini\", \"entropy\"]),\n 'scale': hp.choice('scale', [0, 1]),\n 'normalize': hp.choice('normalize', [0, 1])\n}\n\nbest = 0\ndef f(params):\n global best\n acc = hyperopt_train_test(params)\n if acc > best:\n best = acc\n return {'loss': -acc, 'status': STATUS_OK}\n\ntrials = Trials()\nbest = fmin(f, space4rf, algo=tpe.suggest, max_evals=100, trials=trials)\nprint('best:')\nprint(best)",
"100%|█| 100/100 [00:11<00:00, 8.92it/s, best loss: -0.97333333\nbest:\n{'criterion': 1, 'max_depth': 14, 'max_features': 2, 'n_estimators': 0, 'normalize': 0, 'scale': 0}\n"
]
],
[
[
"同样的我们得到 97.3 % 的正确率 , 和decision tree 的结果一致.",
"_____no_output_____"
],
[
"### All Together Now\n\n一次自动调整一个模型的参数(例如,SVM或KNN)既有趣又有启发性,但如果一次调整所有模型参数并最终获得最佳模型更为有用。 这使我们能够一次比较所有模型和所有参数,从而为我们提供最佳模型。",
"_____no_output_____"
]
],
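[
[
"# Added sketch (not part of the original notebook): fmin reports hp.choice parameters\n# as indices into the options list (e.g. {'kernel': 3}), which is hard to read.\n# hyperopt.space_eval maps such a result back to the actual values. Toy example with\n# an assumed search space and an illustrative result dictionary:\nfrom hyperopt import hp, space_eval\n\ntoy_space = {\n    'C': hp.uniform('C', 0, 20),\n    'kernel': hp.choice('kernel', ['linear', 'sigmoid', 'poly', 'rbf'])\n}\ntoy_best = {'C': 8.24, 'kernel': 3}  # index-style result, as returned by fmin\nprint(space_eval(toy_space, toy_best))  # -> {'C': 8.24, 'kernel': 'rbf'}",
"_____no_output_____"
]
],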
[
[
"from sklearn.naive_bayes import BernoulliNB \n\ndef hyperopt_train_test(params):\n t = params['type']\n del params['type']\n if t == 'naive_bayes':\n clf = BernoulliNB(**params)\n elif t == 'svm':\n clf = SVC(**params)\n elif t == 'dtree':\n clf = DecisionTreeClassifier(**params)\n elif t == 'knn':\n clf = KNeighborsClassifier(**params)\n else:\n return 0\n return cross_val_score(clf, X, y).mean()\n\nspace = hp.choice('classifier_type', [\n {\n 'type': 'naive_bayes',\n 'alpha': hp.uniform('alpha', 0.0, 2.0)\n },\n {\n 'type': 'svm',\n 'C': hp.uniform('C', 0, 10.0),\n 'kernel': hp.choice('kernel', ['linear', 'rbf']),\n 'gamma': hp.uniform('gamma', 0, 20.0)\n },\n {\n 'type': 'randomforest',\n 'max_depth': hp.choice('max_depth', range(1,20)),\n 'max_features': hp.choice('max_features', range(1,5)),\n 'n_estimators': hp.choice('n_estimators', range(1,20)),\n 'criterion': hp.choice('criterion', [\"gini\", \"entropy\"]),\n 'scale': hp.choice('scale', [0, 1]),\n 'normalize': hp.choice('normalize', [0, 1])\n },\n {\n 'type': 'knn',\n 'n_neighbors': hp.choice('knn_n_neighbors', range(1,50))\n }\n])\n\ncount = 0\nbest = 0\ndef f(params):\n global best, count\n count += 1\n acc = hyperopt_train_test(params.copy())\n if acc > best:\n print ('new best:', acc, 'using', params['type'])\n best = acc\n if count % 50 == 0:\n print ('iters:', count, ', acc:', acc, 'using', params)\n return {'loss': -acc, 'status': STATUS_OK}\n\ntrials = Trials()\nbest = fmin(f, space, algo=tpe.suggest, max_evals=50, trials=trials)\nprint('best:')\nprint(best)",
"new best: \n0.9333333333333333 \nusing \nknn \nnew best: \n0.9733333333333334 \nusing \nsvm \nnew best: \n0.9800000000000001 \nusing \nsvm \nnew best: \n0.9866666666666667 \nusing \nsvm \niters: \n50 \n, acc: \n0.9866666666666667 \nusing \n{'C': 0.9033939243580144, 'gamma': 19.28858951292339, 'kernel': 'linear', 'type': 'svm'}\n100%|█| 50/50 [00:01<00:00, 26.62it/s, best loss: -0.9866666666\nbest:\n{'C': 0.9059462783976437, 'classifier_type': 1, 'gamma': 4.146008164096844, 'kernel': 0}\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d054ce59b73e94c40067a5e44c742fd888d17cf0 | 20,540 | ipynb | Jupyter Notebook | AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb | krzeszew/oneAPI-samples | d403a9acd340240dff39f051d71c9d3dcbc685ac | [
"MIT"
] | 1 | 2022-01-06T02:50:30.000Z | 2022-01-06T02:50:30.000Z | AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb | krzeszew/oneAPI-samples | d403a9acd340240dff39f051d71c9d3dcbc685ac | [
"MIT"
] | 4 | 2021-07-05T15:35:22.000Z | 2022-03-28T15:51:54.000Z | AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb | aaronkintel/oneAPI-samples | 5634b1077e5327076c749064369fc3d033bb45db | [
"MIT"
] | 3 | 2020-08-24T00:36:23.000Z | 2022-01-09T03:17:48.000Z | 35.846422 | 425 | 0.585686 | [
[
[
"# Black-Scholes Algorithm Using Numba-dppy",
"_____no_output_____"
],
[
"## Sections\n- [Black Sholes algorithm](#Black-Sholes-algorithm)\n- _Code:_ [Implementation of Black Scholes targeting CPU using Numba JIT](#Implementation-of-Black-Scholes-targeting-CPU-using-Numba-JIT)\n- _Code:_ [Implementation of Black Scholes targeting GPU using Kernels](#Implementation-of-Black-Scholes-targeting-GPU-using-Kernels)\n- _Code:_ [Implementation of Black Scholes targeting GPU using Numpy](#Implementation-of-Black-Scholes-targeting-GPU-using-Numpy)\n\n",
"_____no_output_____"
],
[
"## Learning Objectives\n* Build a Numba implementation of Black Scholes targeting CPU and GPU using Numba Jit\n* Build a Numba-DPPY implementation of Black Scholes on CPU and GPU using Kernel approach\n* Build a Numba-DPPY implementation of Black Scholes on GPU using Numpy approach",
"_____no_output_____"
],
[
"## numba-dppy\n\nNumba-dppy is a standalone extension to the Numba JIT compiler that adds SYCL programming capabilities to Numba. Numba-dppy is packaged as part of the IDP that comes with oneAPI base toolkit, and you don’t need to install any specific Conda packages. The support for SYCL is via DPC++'s SYCL runtime and other SYCL compilers are not supported by Numba-dppy.\n\n",
"_____no_output_____"
],
[
"## Black Sholes algorithm\n\nThe Black-Scholes program computes the price of a portfolio of options using partial differential equations. The entire computation performed by Black-Scholes is data-parallel, where each option can be priced independent of other options.\n\nThe Black-Scholes Model is one of the most important concepts in modern quantitative finance theory. Developed in 1973 by Fisher Black, Robert Merton, and Myron Scholes; it is still widely used today, and regarded as one of the best ways to determine fair prices of financial derivatives.",
"_____no_output_____"
],
[
"### Implementation of Black-Scholes Formula\n\nThe Black-Scholes formula is used widely in almost every aspect of quantitative finance. The Black-Scholes calculation has essentially permeated every quantitative finance library by traders and quantitative analysts alike. \n\nLet’s look at a hypothetic situation in which a firm has to calculate European options for millions of financial instruments. For each instrument, it has current price, strike price, and option expiration time. For each set of these data, it makes several thousand Black-Scholes calculations, much like the way options of neighboring stock prices, strike prices, and different option expiration times were calculated.\n",
"_____no_output_____"
],
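[
"For reference, the quantities computed in the code cells below are the standard European call and put prices:\n\n- call = price * N(d1) - strike * exp(-rate * T) * N(d2)\n- put = strike * exp(-rate * T) * N(-d2) - price * N(-d1)\n\nwith d1 = (ln(price / strike) + (rate + 0.5 * vol^2) * T) / (vol * sqrt(T)) and d2 = d1 - vol * sqrt(T), where N(.) is the standard normal CDF, which the code evaluates as 0.5 + 0.5 * erf(d / sqrt(2)).",
"_____no_output_____"
],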
[
"# Implementation of Black Scholes targeting CPU using Numba JIT\nIn the following example, we introduce a naive Black-Sholes implementation that targets a CPU using the Numba JIT, where we calculate the Black-Sholes formula as described:\n\nThis is the decorator-based approach, where we offload data parallel code sections like parallel-for, and certain NumPy function calls. With the decorator method, a programmer needs to simply identify the most time-consuming parts of the program. If those parts can be parallelized, the programmer needs to just annotate those sections using Numba-DPPy, and can expect those code sections to execute on a GPU.\n\n1. Inspect the code cell below and click run ▶ to save the code to a file.\n2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.",
"_____no_output_____"
]
],
[
[
"%%writefile lab/black_sholes_jit_cpu.py\n# Copyright (C) 2017-2018 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nimport dpctl\nimport base_bs_erf\nimport numba as nb\nfrom math import log, sqrt, exp, erf\n\n# blackscholes implemented as a parallel loop using numba.prange\[email protected](parallel=True, fastmath=True)\ndef black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put):\n mr = -rate\n sig_sig_two = vol * vol * 2\n\n for i in nb.prange(nopt):\n P = price[i]\n S = strike[i]\n T = t[i]\n\n a = log(P / S)\n b = T * mr\n\n z = T * sig_sig_two\n c = 0.25 * z\n y = 1.0 / sqrt(z)\n\n w1 = (a - b + c) * y\n w2 = (a - b - c) * y\n\n d1 = 0.5 + 0.5 * erf(w1)\n d2 = 0.5 + 0.5 * erf(w2)\n\n Se = exp(b) * S\n\n r = P * d1 - Se * d2\n call[i] = r\n put[i] = r - P + Se\n\n\ndef black_scholes(nopt, price, strike, t, rate, vol, call, put):\n # offload blackscholes computation to CPU (toggle level0 or opencl driver).\n with dpctl.device_context(base_bs_erf.get_device_selector()):\n black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put)\n\n\n# call the run function to setup input data and performance data infrastructure\nbase_bs_erf.run(\"Numba@jit-loop-par\", black_scholes)",
"_____no_output_____"
]
],
[
[
"### Build and Run\nSelect the cell below and click run ▶ to compile and execute the code:",
"_____no_output_____"
]
],
[
[
"! chmod 755 q; chmod 755 run_black_sholes_jit_cpu.sh; if [ -x \"$(command -v qsub)\" ]; then ./q run_black_sholes_jit_cpu.sh; else ./run_black_sholes_jit_cpu.sh; fi",
"_____no_output_____"
]
],
[
[
"# Implementation of Black Scholes targeting GPU using Numba JIT\nIn the below example we introduce to a Naive Blacksholes implementation that targets a GPU using the Numba Jit where we calculate the blacksholes formula as described above.\n\n1. Inspect the code cell below and click run ▶ to save the code to a file.\n2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.",
"_____no_output_____"
]
],
[
[
"%%writefile lab/black_sholes_jit_gpu.py\n# Copyright (C) 2017-2018 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nimport dpctl\nimport base_bs_erf_gpu\nimport numba as nb\nfrom math import log, sqrt, exp, erf\n\n# blackscholes implemented as a parallel loop using numba.prange\[email protected](parallel=True, fastmath=True)\ndef black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put):\n mr = -rate\n sig_sig_two = vol * vol * 2\n\n for i in nb.prange(nopt):\n P = price[i]\n S = strike[i]\n T = t[i]\n\n a = log(P / S)\n b = T * mr\n\n z = T * sig_sig_two\n c = 0.25 * z\n y = 1.0 / sqrt(z)\n\n w1 = (a - b + c) * y\n w2 = (a - b - c) * y\n\n d1 = 0.5 + 0.5 * erf(w1)\n d2 = 0.5 + 0.5 * erf(w2)\n\n Se = exp(b) * S\n\n r = P * d1 - Se * d2\n call[i] = r\n put[i] = r - P + Se\n\n\ndef black_scholes(nopt, price, strike, t, rate, vol, call, put):\n # offload blackscholes computation to GPU (toggle level0 or opencl driver).\n with dpctl.device_context(base_bs_erf_gpu.get_device_selector()):\n black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put)\n\n\n# call the run function to setup input data and performance data infrastructure\nbase_bs_erf_gpu.run(\"Numba@jit-loop-par\", black_scholes)",
"_____no_output_____"
]
],
[
[
"### Build and Run\nSelect the cell below and click run ▶ to compile and execute the code:",
"_____no_output_____"
]
],
[
[
"! chmod 755 q; chmod 755 run_black_sholes_jit_gpu.sh; if [ -x \"$(command -v qsub)\" ]; then ./q run_black_sholes_jit_gpu.sh; else ./run_black_sholes_jit_gpu.sh; fi",
"_____no_output_____"
]
],
[
[
"# Implementation of Black Scholes targeting GPU using Kernels\n\n## Writing Explicit Kernels in numba-dppy\n\nWriting a SYCL kernel using the `@numba_dppy.kernel` decorator has similar syntax to writing OpenCL kernels. As such, the numba-dppy module provides similar indexing and other functions as OpenCL. The indexing functions supported inside a `numba_dppy.kernel` are:\n\n* numba_dppy.get_local_id : Gets the local ID of the item\n* numba_dppy.get_local_size: Gets the local work group size of the device\n* numba_dppy.get_group_id : Gets the group ID of the item\n* numba_dppy.get_num_groups: Gets the number of gropus in a worksgroup\n\nRefer https://intelpython.github.io/numba-dppy/latest/user_guides/kernel_programming_guide/index.html for more details.\n\nIn the following example we use dppy-kernel approach for explicit kernel programming where if the programmer wants to extract further performance from the offloaded code, the programmer can use the explicit kernel programming approach using dppy-kernels and tune the GPU parameterswhere we take advantage of the workgroups and the workitems in a device using the kernel approach",
"_____no_output_____"
],
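[
"# Minimal illustrative sketch of how the indexing functions listed above relate to each\n# other; it is an added example, not part of the original sample. It assumes a 1-D launch\n# in the same style used later in this notebook, e.g.\n# fill_ids[global_size, numba_dppy.DEFAULT_LOCAL_SIZE](out), inside a dpctl.device_context block.\nimport numba_dppy\n\n@numba_dppy.kernel\ndef fill_ids(out):\n    # global id = group id * local work-group size + local id (for dimension 0)\n    gid = numba_dppy.get_group_id(0) * numba_dppy.get_local_size(0) + numba_dppy.get_local_id(0)\n    out[gid] = gid",
"_____no_output_____"
],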
[
"1. Inspect the code cell below and click run ▶ to save the code to a file.\n2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.",
"_____no_output_____"
]
],
[
[
"%%writefile lab/black_sholes_kernel.py\n# Copyright (C) 2017-2018 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nimport dpctl\nimport base_bs_erf_gpu\nimport numba_dppy\nfrom math import log, sqrt, exp, erf\n\n# blackscholes implemented using dppy.kernel\n@numba_dppy.kernel(\n access_types={\"read_only\": [\"price\", \"strike\", \"t\"], \"write_only\": [\"call\", \"put\"]}\n)\ndef black_scholes(nopt, price, strike, t, rate, vol, call, put):\n mr = -rate\n sig_sig_two = vol * vol * 2\n\n i = numba_dppy.get_global_id(0)\n\n P = price[i]\n S = strike[i]\n T = t[i]\n\n a = log(P / S)\n b = T * mr\n\n z = T * sig_sig_two\n c = 0.25 * z\n y = 1.0 / sqrt(z)\n\n w1 = (a - b + c) * y\n w2 = (a - b - c) * y\n\n d1 = 0.5 + 0.5 * erf(w1)\n d2 = 0.5 + 0.5 * erf(w2)\n\n Se = exp(b) * S\n\n r = P * d1 - Se * d2\n call[i] = r\n put[i] = r - P + Se\n\n\ndef black_scholes_driver(nopt, price, strike, t, rate, vol, call, put):\n # offload blackscholes computation to GPU (toggle level0 or opencl driver).\n with dpctl.device_context(base_bs_erf_gpu.get_device_selector()):\n black_scholes[nopt, numba_dppy.DEFAULT_LOCAL_SIZE](\n nopt, price, strike, t, rate, vol, call, put\n )\n\n\n# call the run function to setup input data and performance data infrastructure\nbase_bs_erf_gpu.run(\"Numba@jit-loop-par\", black_scholes_driver)",
"_____no_output_____"
]
],
[
[
"### Build and Run\nSelect the cell below and click run ▶ to compile and execute the code:",
"_____no_output_____"
]
],
[
[
"! chmod 755 q; chmod 755 run_black_sholes_kernel.sh; if [ -x \"$(command -v qsub)\" ]; then ./q run_black_sholes_kernel.sh; else ./run_black_sholes_kernel.sh; fi",
"_____no_output_____"
]
],
[
[
"## Implementation of Black Scholes targeting GPU using Numpy\n\n\nIn the following example, we can observe the Black Scholes NumPy implementation and we target the GPU using the NumPy approach.\n\n1. Inspect the code cell below and click run ▶ to save the code to a file.\n2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.",
"_____no_output_____"
]
],
[
[
"%%writefile lab/black_sholes_numpy_graph.py\n# Copyright (C) 2017-2018 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\n# Copyright (C) 2017-2018 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nimport dpctl\nimport base_bs_erf_graph\nimport numba as nb\nimport numpy as np\nfrom numpy import log, exp, sqrt\nfrom math import erf\n\n# Numba does know erf function from numpy or scipy\[email protected](nopython=True)\ndef nberf(x):\n return erf(x)\n\n\n# blackscholes implemented using numpy function calls\[email protected](nopython=True, parallel=True, fastmath=True)\ndef black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put):\n mr = -rate\n sig_sig_two = vol * vol * 2\n\n P = price\n S = strike\n T = t\n\n a = log(P / S)\n b = T * mr\n\n z = T * sig_sig_two\n c = 0.25 * z\n y = 1.0 / sqrt(z)\n\n w1 = (a - b + c) * y\n w2 = (a - b - c) * y\n\n d1 = 0.5 + 0.5 * nberf(w1)\n d2 = 0.5 + 0.5 * nberf(w2)\n\n Se = exp(b) * S\n\n r = P * d1 - Se * d2\n call[:] = r # temporary `r` is necessary for faster `put` computation\n put[:] = r - P + Se\n\n\ndef black_scholes(nopt, price, strike, t, rate, vol, call, put):\n # offload blackscholes computation to GPU (toggle level0 or opencl driver).\n with dpctl.device_context(base_bs_erf_graph.get_device_selector()):\n black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put)\n\n\n# call the run function to setup input data and performance data infrastructure\nbase_bs_erf_graph.run(\"Numba@jit-numpy\", black_scholes)",
"_____no_output_____"
]
],
[
[
"### Build and Run\nSelect the cell below and click run ▶ to compile and execute the code:",
"_____no_output_____"
]
],
[
[
"! chmod 755 q; chmod 755 run_black_sholes_numpy_graph.sh; if [ -x \"$(command -v qsub)\" ]; then ./q run_black_sholes_numpy_graph.sh; else ./run_black_sholes_numpy_graph.sh; fi",
"_____no_output_____"
]
],
[
[
"# Plot GPU Results\n\nThe algorithm below is detecting Calls and Puts verses Current price for a strike price in range 23 to 25 and plots the results in a graph as shown below. ",
"_____no_output_____"
],
[
"### View the results\nSelect the cell below and click run ▶ to view the graph:",
"_____no_output_____"
]
],
[
[
"from matplotlib import pyplot as plt \nimport numpy as np \n\ndef read_dictionary(fn):\n import pickle\n # Load data (deserialize)\n with open(fn, 'rb') as handle:\n dictionary = pickle.load(handle)\n return dictionary\nresultsDict = read_dictionary('resultsDict.pkl')\nlimit = 10\ncall = resultsDict['call']\nput = resultsDict['put']\nprice = resultsDict['price']\nstrike = resultsDict['strike']\n\nplt.style.use('dark_background')\npriceRange = [23.0, 23.5]\n# strikeIndex = np.where((price >= priceRange[0]) & (price < priceRange[1]) )[0]\n# plt.scatter(strike[strikeIndex], put[strikeIndex], c= 'r', s = 2, alpha = 1, label = 'puts')\n# plt.scatter(strike[strikeIndex], call[strikeIndex], c= 'b', s = 2, alpha = 1, label = 'calls')\n# plt.title('Calls and Puts verses Strike for a current price in range {}'.format(priceRange))\n# plt.ylabel('Option Price [$]')\n# plt.xlabel('Strike Price [$]')\n# plt.legend()\n# plt.grid()\n\nstrikeRange = [23.0, 23.5]\nstrikeIndex = np.where((strike >= strikeRange[0]) & (strike < strikeRange[1]) )[0]\nplt.scatter(price[strikeIndex], put[strikeIndex], c= 'r', s = 2, alpha = 1, label = 'puts')\nplt.scatter(price[strikeIndex], call[strikeIndex], c= 'b', s = 2, alpha = 1, label = 'calls')\nplt.title('Calls and Puts verses Current price for a strike price in range {}'.format(priceRange))\nplt.ylabel('Option Price [$]')\nplt.xlabel('Current Price [$]')\nplt.legend()\nplt.grid()\n",
"_____no_output_____"
]
],
[
[
"_If the Jupyter cells are not responsive or if they error out when you compile the code samples, please restart the Jupyter Kernel: \n\"Kernel->Restart Kernel and Clear All Outputs\" and compile the code samples again__",
"_____no_output_____"
],
[
"## Summary\nIn this module you will have learned the following:\n* Numba implementation of Black Scholes targeting a CPU and GPU using Numba JIT\n* Numba-DPPY implementation of Black Scholes on a CPU and GPU using the kernel approach\n* Numba-DPPY implementation of Black Scholes on a GPU using Numpy approach",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d054cfbd602336a18bc9a320cce982a10d497bc6 | 568,213 | ipynb | Jupyter Notebook | package_expos/Seaborn_2/Seaborn Expo.ipynb | hanisaf/advanced-data-management-and-analytics-spring2021 | 35178f14b942f2accbcfcbaa5a27e134a9a9f96b | [
"MIT"
] | 6 | 2021-01-21T17:53:34.000Z | 2021-04-20T17:37:50.000Z | package_expos/Seaborn_2/Seaborn Expo.ipynb | hanisaf/advanced-data-management-and-analytics-spring2021 | 35178f14b942f2accbcfcbaa5a27e134a9a9f96b | [
"MIT"
] | null | null | null | package_expos/Seaborn_2/Seaborn Expo.ipynb | hanisaf/advanced-data-management-and-analytics-spring2021 | 35178f14b942f2accbcfcbaa5a27e134a9a9f96b | [
"MIT"
] | 13 | 2021-01-20T16:11:55.000Z | 2021-04-28T21:38:07.000Z | 719.256962 | 180,604 | 0.945858 | [
[
[
"# Pandas for managing datasets\nimport pandas as pd",
"_____no_output_____"
],
[
"# seaborn for plotting and styling\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"# read dataset\ntips = sns.load_dataset(\"tips\")",
"_____no_output_____"
],
[
"# a preview of the data\ntips.head()",
"_____no_output_____"
],
[
"# make a copy of the data to create the graphs of\ndf = tips.copy()\ndf",
"_____no_output_____"
],
[
"# create a column to determine tip percentage\ndf[\"tip_percentage\"] = df[\"tip\"] / df[\"total_bill\"]",
"_____no_output_____"
],
[
"# This plot is a histogram of tip percentages\n# The hue argument allows the color to be changed to reflect the categories\nsns.histplot(x='tip_percentage', binwidth = 0.05, hue = 'sex', data = df)",
"_____no_output_____"
],
[
"# Scatterplot of total bill and tip\n# This shows how you can set the style to change the visual style\n# The default relplot is a scatterplot\nsns.set(style = 'darkgrid')\nsns.relplot( x = 'total_bill', y = 'tip', hue = 'smoker', data = df)",
"_____no_output_____"
],
[
"# Scatterplot Gender\n# This scatterplot is the same with the addition of the size argument\n# The size argument is time here\nsns.set(style = 'darkgrid')\ngender = sns.relplot( x = 'total_bill', y = 'tip', hue = 'sex', size = 'time', data = df)",
"_____no_output_____"
],
[
"# Catplot is for categorical data\n# The default catplot is a strip plot\nsns.catplot(x = 'day', y = 'total_bill', data = df)",
"_____no_output_____"
],
[
"# This catplot shows that with the addition of the kind argument,\n# we can alter it to another cat plot, in this case, a barplot\nsns.catplot(x = 'time', y = 'total_bill', data= df, kind='bar')",
"_____no_output_____"
],
[
"# A violin plot is another way of visualizing categorical data\nsns.violinplot(x = 'day', y = 'total_bill', hue = 'sex', data = df)",
"_____no_output_____"
],
[
"# This violoin plot shows the same data above\n# With different arguments, different visuals are created\n# Here we set bw to 0.25 and split to True\nsns.violinplot(x = 'day', y = 'total_bill', hue = 'sex', bw = .25, split = True, data = df)",
"_____no_output_____"
],
[
"# This shows how we can alter the color palette of a violin plot\nsns.violinplot(x = 'day', y = 'total_bill', hue = 'sex', bw = .25, split = True, palette = 'Greens', data = df)",
"_____no_output_____"
],
[
"# Pairplots allow visualization of many distributions at once\n# Seaborn determines the visualizations and the variables to create\n# This allows the user to quickly view distributions very easily\nsns.set_theme(style=\"ticks\")\nsns.pairplot(df, hue='sex')",
"_____no_output_____"
],
[
"# This swarm plot is similar to a strip plot but does not allow points to overlap\n# The style is whitegrid\nsns.swarmplot(y='total_bill', x = 'day', data = df)\nsns.set_style('whitegrid')",
"_____no_output_____"
],
[
"# Seaborn can also create heatmaps\n# This heatmap shows correlation between variables\nsns.heatmap(df.corr(), annot = True, cmap = 'viridis')",
"_____no_output_____"
],
[
"# This heatmap requires creation of a pivot table\n# This shows that Seaborn can work with pivot tables\npivot = df.pivot_table(index = ['day'], columns =['size'], values = 'tip_percentage', aggfunc = np.average)\nsns.heatmap(pivot)",
"_____no_output_____"
],
[
"# This plot shows Seaborn's ability to create side by side visuals\n# The col argument allows for this\npal = dict(Male='#6495ED', Female = '#F08080')\ng = sns.lmplot(x='total_bill', y = 'tip_percentage', col = 'sex', hue='sex', data =df,\n palette=pal, y_jitter=.02, logistic = True, truncate = True)",
"_____no_output_____"
],
[
"# This plot is an example of how you can overlay visualizations\n# This is a boxplot with a stripplot on top\nsns.stripplot(x='tip', y = 'day', data = df, jitter = True, dodge = True, linewidth=1, \n edgecolor = 'gray', palette = 'gray')\ncolors = ['#78C850', '#F08030', '#6890F0','#F8D030']\nsns.boxplot(x='tip', y='day',data = df, fliersize=0, palette = colors)\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d054d8a47875a9f56aee7f671e99a924600eb2ea | 33,050 | ipynb | Jupyter Notebook | .ipynb_checkpoints/1-2-linear-regression-winequality-white-checkpoint.ipynb | hockeylori/FinalProject-Team8 | 6e7fef6a695fe09e3f61ffcd3e51b77edb9e23c8 | [
"FTL",
"CNRI-Python"
] | null | null | null | .ipynb_checkpoints/1-2-linear-regression-winequality-white-checkpoint.ipynb | hockeylori/FinalProject-Team8 | 6e7fef6a695fe09e3f61ffcd3e51b77edb9e23c8 | [
"FTL",
"CNRI-Python"
] | null | null | null | .ipynb_checkpoints/1-2-linear-regression-winequality-white-checkpoint.ipynb | hockeylori/FinalProject-Team8 | 6e7fef6a695fe09e3f61ffcd3e51b77edb9e23c8 | [
"FTL",
"CNRI-Python"
] | null | null | null | 64.299611 | 12,232 | 0.754856 | [
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"## Dataset: winequality-white.csv\n",
"_____no_output_____"
]
],
[
[
"# Read the csv file into a pandas DataFrame\nwhite = pd.read_csv('./datasets/winequality-white.csv')\nwhite.head()",
"_____no_output_____"
],
[
"# Assign the data to X and y\n# Note: Sklearn requires a two-dimensional array of values\n# so we use reshape to create this\n\nX = white.alcohol.values.reshape(-1, 1)\ny = white.quality.values.reshape(-1, 1)\n\nprint(\"Shape: \", X.shape, y.shape)\nX",
"Shape: (4898, 1) (4898, 1)\n"
],
[
"# Plot the data\n\n### BEGIN SOLUTION\n\nplt.scatter(X, y)\n\n### END SOLUTION",
"_____no_output_____"
],
[
"# Create the model and fit the model to the data\n\nfrom sklearn.linear_model import LinearRegression\n\n### BEGIN SOLUTION\n\nmodel = LinearRegression()\n\n### END SOLUTION",
"_____no_output_____"
],
[
"# Fit the model to the data. \n# Note: This is the training step where you fit the line to the data.\n\n### BEGIN SOLUTION\n\nmodel.fit(X, y)\n\n### END SOLUTION",
"_____no_output_____"
],
[
"# Print the coefficient and the intercept for the model\n\n### BEGIN SOLUTION\nprint('Weight coefficients: ', model.coef_)\nprint('y-axis intercept: ', model.intercept_)\n### END SOLUTION",
"Weight coefficients: [[0.60524374]]\ny-axis intercept: [6.95669921]\n"
],
[
"# Note: we have to transform our min and max values \n# so they are in the format: array([[ 1.17]])\n# This is the required format for `model.predict()`\n\nx_min = np.array([[X.min()]])\nx_max = np.array([[X.max()]])\nprint(f\"Min X Value: {x_min}\")\nprint(f\"Max X Value: {x_max}\")",
"Min X Value: [[3]]\nMax X Value: [[9]]\n"
],
[
"# Calculate the y_min and y_max using model.predict and x_min and x_max\n\n### BEGIN SOLUTION\ny_min = model.predict(x_min)\ny_max = model.predict(x_max)\n### END SOLUTION",
"_____no_output_____"
],
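[
"# Added check (not part of the original exercise): LinearRegression.score returns the\n# R^2 coefficient of determination, a quick measure of how well the fitted line explains y.\nprint(f'R^2 score: {model.score(X, y)}')",
"_____no_output_____"
],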
[
"# Plot X and y using plt.scatter\n# Plot the model fit line using [x_min[0], x_max[0]], [y_min[0], y_max[0]]\n\n### BEGIN SOLUTION\nplt.scatter(X, y, c='blue')\nplt.plot([x_min[0], x_max[0]], [y_min[0], y_max[0]], c='red')\n### END SOLUTION",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d054e58807366d9f5ee1b13b9e7dd09261086a23 | 2,189 | ipynb | Jupyter Notebook | notebooks/playgrounds.ipynb | espoirMur/balobi_nini | b68b9af4c84ec0f5b38ae8ba52d5f0d32b41ead3 | [
"Unlicense"
] | 1 | 2020-09-30T08:03:10.000Z | 2020-09-30T08:03:10.000Z | notebooks/playgrounds.ipynb | espoirMur/balobi_nini | b68b9af4c84ec0f5b38ae8ba52d5f0d32b41ead3 | [
"Unlicense"
] | 22 | 2020-09-23T14:05:33.000Z | 2021-12-04T22:40:41.000Z | notebooks/playgrounds.ipynb | espoirMur/balobi_nini | b68b9af4c84ec0f5b38ae8ba52d5f0d32b41ead3 | [
"Unlicense"
] | 1 | 2021-07-29T10:38:13.000Z | 2021-07-29T10:38:13.000Z | 21.89 | 341 | 0.555505 | [
[
[
"from topic_modeling.dynamic_nmf import DynamicNMF",
"_____no_output_____"
],
[
"dynamic_nmf = DynamicNMF()",
"_____no_output_____"
],
[
"from topic_modeling.dynamic_nmf import DynamicNMF\ndynamic_nmf = DynamicNMF()",
"_____no_output_____"
],
[
"dynamic_nmf.split_into_windows_docs()",
"_____no_output_____"
],
[
"for window_data in dynamic_nmf.windows_data:",
"_____no_output_____"
],
[
"from topic_modeling.dynamic_nmf import DynamicNMF\ndynamic_nmf = DynamicNMF()",
"_____no_output_____"
],
[
"dynamic_nmf.split_into_windows_docs()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d054f190f14372ca9ca055e15d136dae8a1267b0 | 4,289 | ipynb | Jupyter Notebook | PythonProgramming.net/DeepLearningBasics/07-intro_rnn/intro_rnn_colab.ipynb | dloperab/TensorFlow | 5e13ceaf793501eb01c2b22859211c75529c054b | [
"MIT"
] | 1 | 2019-04-12T23:59:54.000Z | 2019-04-12T23:59:54.000Z | PythonProgramming.net/DeepLearningBasics/07-intro_rnn/intro_rnn_colab.ipynb | dloperab/TensorFlow | 5e13ceaf793501eb01c2b22859211c75529c054b | [
"MIT"
] | null | null | null | PythonProgramming.net/DeepLearningBasics/07-intro_rnn/intro_rnn_colab.ipynb | dloperab/TensorFlow | 5e13ceaf793501eb01c2b22859211c75529c054b | [
"MIT"
] | null | null | null | 4,289 | 4,289 | 0.676615 | [
[
[
"# import necessary packages\nimport tensorflow as tf\nfrom tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Dropout, LSTM",
"_____no_output_____"
],
[
"# load data\n# mnist is a dataset of 28x28 images of handwritten digits and their labels\n(trainX, trainY), (testX, testY) = mnist.load_data()",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 1s 0us/step\n"
],
[
"# scale the raw pixel intensities to the range [0, 1]\ntrainX = trainX / 255.0\ntestX = testX/ 255.0\n\nprint(trainX.shape)\nprint(trainX[0].shape)\nprint(trainX.shape[1:])",
"(60000, 28, 28)\n(28, 28)\n(28, 28)\n"
],
[
"# define the model architecture\nmodel = Sequential()\nmodel.add(LSTM(128, input_shape=(trainX.shape[1:]), activation=\"relu\", return_sequences=True))\nmodel.add(Dropout(0.2))\n\nmodel.add(LSTM(128, activation=\"relu\"))\nmodel.add(Dropout(0.2))\n\nmodel.add(Dense(32, activation=\"relu\"))\nmodel.add(Dropout(0.2))\n\nmodel.add(Dense(10, activation=\"softmax\"))",
"_____no_output_____"
],
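[
"# Added sanity check (not part of the original notebook): print the layer-by-layer\n# architecture and parameter counts of the model defined above before compiling it.\nmodel.summary()",
"_____no_output_____"
],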
[
"# define optimizer and train the model\nopt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)\n\nmodel.compile(loss=\"sparse_categorical_crossentropy\",\n optimizer=opt,\n metrics=[\"accuracy\"])\n\nmodel.fit(trainX, trainY, epochs=3, validation_data=(testX, testY))",
"Train on 60000 samples, validate on 10000 samples\nEpoch 1/3\n60000/60000 [==============================] - 271s 5ms/step - loss: 0.7507 - acc: 0.7481 - val_loss: 0.1878 - val_acc: 0.9436\nEpoch 2/3\n60000/60000 [==============================] - 268s 4ms/step - loss: 0.1739 - acc: 0.9533 - val_loss: 0.1123 - val_acc: 0.9678\nEpoch 3/3\n60000/60000 [==============================] - 266s 4ms/step - loss: 0.1176 - acc: 0.9688 - val_loss: 0.0932 - val_acc: 0.9750\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d055031ac14367c3f566d45db3cb3933f8246f90 | 184,595 | ipynb | Jupyter Notebook | code/test.ipynb | HurryZhao/boxplot | 50c42ce92cc8a487e6887cf42c66379011499182 | [
"MIT"
] | 4 | 2020-11-09T13:53:41.000Z | 2020-12-10T15:03:55.000Z | code/test.ipynb | HurryZhao/boxplot | 50c42ce92cc8a487e6887cf42c66379011499182 | [
"MIT"
] | null | null | null | code/test.ipynb | HurryZhao/boxplot | 50c42ce92cc8a487e6887cf42c66379011499182 | [
"MIT"
] | 1 | 2020-11-09T13:28:10.000Z | 2020-11-09T13:28:10.000Z | 225.942472 | 37,300 | 0.908194 | [
[
[
"# Test",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom boxplot import boxplot as bx\nimport numpy as np\n",
"_____no_output_____"
]
],
[
[
"## Quality of data",
"_____no_output_____"
]
],
[
[
"# Integers\nIntegers = [np.random.randint(-3, 3, 500, dtype='l'),np.random.randint(-10, 10, 500, dtype='l')]\nFloat = np.random.random([2,500]).tolist()\n",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.boxplot(ax,Integers)",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.info_boxplot(ax,Integers)",
"[6.3, 12.6]\n"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.hist_boxplot(ax,Integers)",
"[6.3, 12.6]\n"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.creative_boxplot(ax,Integers)",
"[6.3, 12.6]\n"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.boxplot(ax,Float)",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.info_boxplot(ax,Float)",
"[0.36623335, 0.7324667]\n"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.hist_boxplot(ax,Float)",
"[0.36623335, 0.7324667]\n"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.creative_boxplot(ax,Float)",
"[0.36623335, 0.7324667]\n"
]
],
[
[
"## Real dataset",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndata = pd.read_csv('/Users/hurryzhao/boxplot/results_merged.csv')\n",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"t_d1 = data.commits[data.last_updated=='2017-08-28']\nt_d2 = data.commits[data.last_updated=='2017-08-26']\nt_d3 = data.commits[data.last_updated=='2017-08-24']\nt_d4 = data.commits[data.last_updated=='2017-08-22']\nt_d5 = data.commits[data.last_updated=='2017-08-20']\n\nt_d=[t_d1,t_d2,t_d3,t_d4,t_d5]\nt_d",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.boxplot(ax,t_d,outlier_facecolor='white',outlier_edgecolor='r',outlier=False)",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.info_boxplot(ax,t_d,outlier_facecolor='white',outlier_edgecolor='r',outlier=False)",
"[333.3333333319238, 666.6666666680761, 999.9999999999999, 1333.3333333319238, 1666.6666666680758]\n"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.hist_boxplot(ax,t_d,outlier_facecolor='white',outlier_edgecolor='r',outlier=False)",
"[333.3333333319238, 666.6666666680761, 999.9999999999999, 1333.3333333319238, 1666.6666666680758]\n"
],
[
"fig,ax = plt.subplots(figsize=(10,10))\nbx.creative_boxplot(ax,t_d,outlier_facecolor='white',outlier_edgecolor='r',outlier=False)",
"[333.3333333319238, 666.6666666680761, 999.9999999999999, 1333.3333333319238, 1666.6666666680758]\n"
]
],
[
[
"## Robustness",
"_____no_output_____"
]
],
[
[
"data=[['1','1','2','2','3','4'],['1','1','2','2','3','4']]\nfig,ax = plt.subplots(figsize=(10,10))\nbx.boxplot(ax,data,outlier_facecolor='white',outlier_edgecolor='r',outlier=False)",
"Wrong data type, please input a list of numerical list\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d05506d0c038073ef428850c6090d7c569fe8724 | 25,341 | ipynb | Jupyter Notebook | Big-Data-Clusters/CU4/Public/content/monitor-k8s/tsg097-get-statefulsets.ipynb | gantz-at-incomm/tigertoolbox | 9ea80d39a3c5e0c77553fc851c5ee787fbf9291d | [
"MIT"
] | 541 | 2019-05-07T11:41:25.000Z | 2022-03-29T17:33:19.000Z | Big-Data-Clusters/CU4/Public/content/monitor-k8s/tsg097-get-statefulsets.ipynb | gantz-at-incomm/tigertoolbox | 9ea80d39a3c5e0c77553fc851c5ee787fbf9291d | [
"MIT"
] | 89 | 2019-05-09T14:23:52.000Z | 2022-01-13T20:21:04.000Z | Big-Data-Clusters/CU4/Public/content/monitor-k8s/tsg097-get-statefulsets.ipynb | gantz-at-incomm/tigertoolbox | 9ea80d39a3c5e0c77553fc851c5ee787fbf9291d | [
"MIT"
] | 338 | 2019-05-08T05:45:16.000Z | 2022-03-28T15:35:03.000Z | 58.12156 | 408 | 0.418137 | [
[
[
"TSG097 - Get BDC stateful sets (Kubernetes)\n===========================================\n\nDescription\n-----------\n\nSteps\n-----\n\n### Common functions\n\nDefine helper functions used in this notebook.",
"_____no_output_____"
]
],
[
[
"# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows\nimport sys\nimport os\nimport re\nimport json\nimport platform\nimport shlex\nimport shutil\nimport datetime\n\nfrom subprocess import Popen, PIPE\nfrom IPython.display import Markdown\n\nretry_hints = {} # Output in stderr known to be transient, therefore automatically retry\nerror_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help\ninstall_hint = {} # The SOP to help install the executable if it cannot be found\n\nfirst_run = True\nrules = None\ndebug_logging = False\n\ndef run(cmd, return_output=False, no_output=False, retry_count=0):\n \"\"\"Run shell command, stream stdout, print stderr and optionally return output\n\n NOTES:\n\n 1. Commands that need this kind of ' quoting on Windows e.g.:\n\n kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}\n\n Need to actually pass in as '\"':\n\n kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='\"'data-pool'\"')].metadata.name}\n\n The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:\n \n `iter(p.stdout.readline, b'')`\n\n The shlex.split call does the right thing for each platform, just use the '\"' pattern for a '\n \"\"\"\n MAX_RETRIES = 5\n output = \"\"\n retry = False\n\n global first_run\n global rules\n\n if first_run:\n first_run = False\n rules = load_rules()\n\n # When running `azdata sql query` on Windows, replace any \\n in \"\"\" strings, with \" \", otherwise we see:\n #\n # ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')\n #\n if platform.system() == \"Windows\" and cmd.startswith(\"azdata sql query\"):\n cmd = cmd.replace(\"\\n\", \" \")\n\n # shlex.split is required on bash and for Windows paths with spaces\n #\n cmd_actual = shlex.split(cmd)\n\n # Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries\n #\n user_provided_exe_name = cmd_actual[0].lower()\n\n # When running python, use the python in the ADS sandbox ({sys.executable})\n #\n if cmd.startswith(\"python \"):\n cmd_actual[0] = cmd_actual[0].replace(\"python\", sys.executable)\n\n # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail\n # with:\n #\n # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)\n #\n # Setting it to a default value of \"en_US.UTF-8\" enables pip install to complete\n #\n if platform.system() == \"Darwin\" and \"LC_ALL\" not in os.environ:\n os.environ[\"LC_ALL\"] = \"en_US.UTF-8\"\n\n # When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`\n #\n if cmd.startswith(\"kubectl \") and \"AZDATA_OPENSHIFT\" in os.environ:\n cmd_actual[0] = cmd_actual[0].replace(\"kubectl\", \"oc\")\n\n # To aid supportabilty, determine which binary file will actually be executed on the machine\n #\n which_binary = None\n\n # Special case for CURL on Windows. The version of CURL in Windows System32 does not work to\n # get JWT tokens, it returns \"(56) Failure when receiving data from the peer\". If another instance\n # of CURL exists on the machine use that one. 
(Unfortunately the curl.exe in System32 is almost\n # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we\n # look for the 2nd installation of CURL in the path)\n if platform.system() == \"Windows\" and cmd.startswith(\"curl \"):\n path = os.getenv('PATH')\n for p in path.split(os.path.pathsep):\n p = os.path.join(p, \"curl.exe\")\n if os.path.exists(p) and os.access(p, os.X_OK):\n if p.lower().find(\"system32\") == -1:\n cmd_actual[0] = p\n which_binary = p\n break\n\n # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this\n # seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound) \n #\n # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.\n #\n if which_binary == None:\n which_binary = shutil.which(cmd_actual[0])\n\n if which_binary == None:\n if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:\n display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))\n\n raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\")\n else: \n cmd_actual[0] = which_binary\n\n start_time = datetime.datetime.now().replace(microsecond=0)\n\n print(f\"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)\")\n print(f\" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})\")\n print(f\" cwd: {os.getcwd()}\")\n\n # Command-line tools such as CURL and AZDATA HDFS commands output\n # scrolling progress bars, which causes Jupyter to hang forever, to\n # workaround this, use no_output=True\n #\n\n # Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait\n #\n wait = True \n\n try:\n if no_output:\n p = Popen(cmd_actual)\n else:\n p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)\n with p.stdout:\n for line in iter(p.stdout.readline, b''):\n line = line.decode()\n if return_output:\n output = output + line\n else:\n if cmd.startswith(\"azdata notebook run\"): # Hyperlink the .ipynb file\n regex = re.compile(' \"(.*)\"\\: \"(.*)\"') \n match = regex.match(line)\n if match:\n if match.group(1).find(\"HTML\") != -1:\n display(Markdown(f' - \"{match.group(1)}\": \"{match.group(2)}\"'))\n else:\n display(Markdown(f' - \"{match.group(1)}\": \"[{match.group(2)}]({match.group(2)})\"'))\n\n wait = False\n break # otherwise infinite hang, have not worked out why yet.\n else:\n print(line, end='')\n if rules is not None:\n apply_expert_rules(line)\n\n if wait:\n p.wait()\n except FileNotFoundError as e:\n if install_hint is not None:\n display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))\n\n raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\") from e\n\n exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()\n\n if not no_output:\n for line in iter(p.stderr.readline, b''):\n try:\n line_decoded = line.decode()\n except UnicodeDecodeError:\n # NOTE: Sometimes we get characters back that cannot be decoded(), e.g.\n #\n # \\xa0\n #\n # For example see this in the response from `az group create`:\n #\n # ERROR: Get Token request returned http error: 400 and server \n # response: {\"error\":\"invalid_grant\",# \"error_description\":\"AADSTS700082: \n # The refresh token has 
expired due to inactivity.\\xa0The token was \n # issued on 2018-10-25T23:35:11.9832872Z\n #\n # which generates the exception:\n #\n # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte\n #\n print(\"WARNING: Unable to decode stderr line, printing raw bytes:\")\n print(line)\n line_decoded = \"\"\n pass\n else:\n\n # azdata emits a single empty line to stderr when doing an hdfs cp, don't\n # print this empty \"ERR:\" as it confuses.\n #\n if line_decoded == \"\":\n continue\n \n print(f\"STDERR: {line_decoded}\", end='')\n\n if line_decoded.startswith(\"An exception has occurred\") or line_decoded.startswith(\"ERROR: An error occurred while executing the following cell\"):\n exit_code_workaround = 1\n\n # inject HINTs to next TSG/SOP based on output in stderr\n #\n if user_provided_exe_name in error_hints:\n for error_hint in error_hints[user_provided_exe_name]:\n if line_decoded.find(error_hint[0]) != -1:\n display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))\n\n # apply expert rules (to run follow-on notebooks), based on output\n #\n if rules is not None:\n apply_expert_rules(line_decoded)\n\n # Verify if a transient error, if so automatically retry (recursive)\n #\n if user_provided_exe_name in retry_hints:\n for retry_hint in retry_hints[user_provided_exe_name]:\n if line_decoded.find(retry_hint) != -1:\n if retry_count < MAX_RETRIES:\n print(f\"RETRY: {retry_count} (due to: {retry_hint})\")\n retry_count = retry_count + 1\n output = run(cmd, return_output=return_output, retry_count=retry_count)\n\n if return_output:\n return output\n else:\n return\n\n elapsed = datetime.datetime.now().replace(microsecond=0) - start_time\n\n # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so\n # don't wait here, if success known above\n #\n if wait: \n if p.returncode != 0:\n raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(p.returncode)}.\\n')\n else:\n if exit_code_workaround !=0 :\n raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(exit_code_workaround)}.\\n')\n\n print(f'\\nSUCCESS: {elapsed}s elapsed.\\n')\n\n if return_output:\n return output\n\ndef load_json(filename):\n \"\"\"Load a json file from disk and return the contents\"\"\"\n\n with open(filename, encoding=\"utf8\") as json_file:\n return json.load(json_file)\n\ndef load_rules():\n \"\"\"Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable\"\"\"\n\n # Load this notebook as json to get access to the expert rules in the notebook metadata.\n #\n try:\n j = load_json(\"tsg097-get-statefulsets.ipynb\")\n except:\n pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?\n else:\n if \"metadata\" in j and \\\n \"azdata\" in j[\"metadata\"] and \\\n \"expert\" in j[\"metadata\"][\"azdata\"] and \\\n \"expanded_rules\" in j[\"metadata\"][\"azdata\"][\"expert\"]:\n\n rules = j[\"metadata\"][\"azdata\"][\"expert\"][\"expanded_rules\"]\n\n rules.sort() # Sort rules, so they run in priority order (the [0] element). 
Lowest value first.\n\n # print (f\"EXPERT: There are {len(rules)} rules to evaluate.\")\n\n return rules\n\ndef apply_expert_rules(line):\n \"\"\"Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so\n inject a 'HINT' to the follow-on SOP/TSG to run\"\"\"\n\n global rules\n\n for rule in rules:\n notebook = rule[1]\n cell_type = rule[2]\n output_type = rule[3] # i.e. stream or error\n output_type_name = rule[4] # i.e. ename or name \n output_type_value = rule[5] # i.e. SystemExit or stdout\n details_name = rule[6] # i.e. evalue or text \n expression = rule[7].replace(\"\\\\*\", \"*\") # Something escaped *, and put a \\ in front of it!\n\n if debug_logging:\n print(f\"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.\")\n\n if re.match(expression, line, re.DOTALL):\n\n if debug_logging:\n print(\"EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'\".format(output_type_name, output_type_value, expression, notebook))\n\n match_found = True\n\n display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))\n\n\n\nprint('Common functions defined successfully.')\n\n# Hints for binary (transient fault) retry, (known) error and install guide\n#\nretry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}\nerror_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}\ninstall_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}",
"_____no_output_____"
]
],
[
[
"### Get the Kubernetes namespace for the big data cluster\n\nGet the namespace of the Big Data Cluster use the kubectl command line\ninterface .\n\n**NOTE:**\n\nIf there is more than one Big Data Cluster in the target Kubernetes\ncluster, then either:\n\n- set \\[0\\] to the correct value for the big data cluster.\n- set the environment variable AZDATA\\_NAMESPACE, before starting\n Azure Data Studio.",
"_____no_output_____"
]
],
[
[
"# Place Kubernetes namespace name for BDC into 'namespace' variable\n\nif \"AZDATA_NAMESPACE\" in os.environ:\n namespace = os.environ[\"AZDATA_NAMESPACE\"]\nelse:\n try:\n namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)\n except:\n from IPython.display import Markdown\n print(f\"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.\")\n display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))\n display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))\n display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))\n raise\n\nprint(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')",
"_____no_output_____"
]
],
[
[
"### Run kubectl to display the Stateful sets",
"_____no_output_____"
]
],
[
[
"run(f\"kubectl get statefulset -n {namespace} -o wide\")",
"_____no_output_____"
],
[
"print('Notebook execution complete.')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0551bc685a14848e4935e5d63ff58a49a440885 | 8,960 | ipynb | Jupyter Notebook | AWS Linguipedia/linguipedia_aws.ipynb | aakash2016/hackathons-analytics-vidhya | 4cc60eb4ea89feaaa49614e362ec9dd5aec96896 | [
"MIT"
] | null | null | null | AWS Linguipedia/linguipedia_aws.ipynb | aakash2016/hackathons-analytics-vidhya | 4cc60eb4ea89feaaa49614e362ec9dd5aec96896 | [
"MIT"
] | null | null | null | AWS Linguipedia/linguipedia_aws.ipynb | aakash2016/hackathons-analytics-vidhya | 4cc60eb4ea89feaaa49614e362ec9dd5aec96896 | [
"MIT"
] | null | null | null | 30.684932 | 192 | 0.590737 | [
[
[
"## Importing the libraries\n\nimport pandas as pd\nimport re\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.linear_model import LogisticRegression\nfrom scipy.sparse import hstack\nfrom sklearn.metrics import f1_score\n\ntrain = pd.read_csv('trainl.csv')\ntest = pd.read_csv('testl.csv')\nprint(train.shape); print(test.shape)",
"(7920, 3)\n(1953, 2)\n"
],
[
"## This is typical sentiment Analysis problem.\n# Customer Tweets related to tech firms who are manufacturers of mobiles, laptops are given to us.\n# The task is to determine tweets which have negative sentiments towards such companies or products.\ntrain.label.value_counts() #Most of the tweets have positive sentiments.",
"_____no_output_____"
],
[
"# train.isna().sum()\n## Clearly there are no missing values.\n## Data Preprocessing\n## Not using deep learning models using simple ml algorithm - Logistic Regression.\n# And so we will simply use frequency based embeddings loke tfidf or count vectorizer.\ndef clean_text(text):\n # firstly put all the texts in lower cases\n text = text.lower()\n text = text.replace('$&@*#', 'bakwas')\n text = text.replace('f**k', 'fuck')\n text = text.replace('@$$hole', 'asshole')\n text = text.replace('f#%*king', 'fucking')\n text = text.replace(':@', 'bakwas')\n return text\ntrain['tweet']=train['tweet'].apply(lambda x: clean_text(x))\ntest['tweet']=test['tweet'].apply(lambda x: clean_text(x))",
"_____no_output_____"
],
[
"## Since twitter ID can be '@' followed by some alphanumeric we need to remove them.\n# Because they are just ID's and will play any role in determining the sentiments.\ndef remove_user(text):\n r = re.findall('@[\\w]*', text)\n for i in r:\n text = re.sub(i, '', text)\n return text \ntrain.tweet = train.tweet.apply(lambda x: remove_user(x))\ntest.tweet = test.tweet.apply(lambda x: remove_user(x))",
"_____no_output_____"
],
[
"## Similarly there are many URL's which we need to remove as they wont play any role in sentiments.\ndef remove_url(text):\n text = re.sub('(http|ftp|https)://([\\w_-]+(?:(?:\\.[\\w_-]+)+))([\\w.,@?^=%&:/~+#-]*[\\w@?^=%&/~+#-])?', '', text)\n return text \ntrain.tweet = train.tweet.apply(lambda x: remove_url(x))\ntest.tweet = test.tweet.apply(lambda x: remove_url(x))",
"_____no_output_____"
],
[
"## Now we will split our training data into train and validation so that we can do proper regularisation.\nX_train, X_valid, y_train, y_valid = train_test_split(train['tweet'], train['label'], test_size = 0.1,\n random_state=12)",
"_____no_output_____"
],
[
"## Part1 -- using count vectoriser and Naive Bayes Algorithm.\nvect = CountVectorizer().fit(X_train)\nX_train_vectorized = vect.transform(X_train)\n\nmodel = MultinomialNB(alpha = 0.0925)\nmodel.fit(X_train_vectorized, y_train)\npredictions = model.predict(vect.transform(X_valid))\n## Clearly our submissions are evaluated on the basis of F1Score\nprint(f1_score(y_valid, predictions))",
"0.819672131147541\n"
],
[
"## Part2 -- using tfidf vectorizer and Naive Bayes Algorithm.\ntfvect = TfidfVectorizer().fit(X_train)\nX_train_vectorized = tfvect.transform(X_train)\n\nmodel = MultinomialNB(alpha = 0.0955)\nmodel.fit(X_train_vectorized, y_train)\npredictions = model.predict(tfvect.transform(X_valid))\nprint(f1_score(y_valid, predictions))",
"0.8253275109170305\n"
],
[
"## Part3 -- using count vectoriser and Logistic Regression Algorithm.\nvect = CountVectorizer(min_df=2, ngram_range=(1,3)).fit(X_train)\nX_train_vectorized = vect.transform(X_train)\n\nmodel = LogisticRegression(C = 1.6, solver = 'sag')\nmodel.fit(X_train_vectorized, y_train)\npredictions = model.predict(vect.transform(X_valid))\nprint(f1_score(y_valid, predictions))",
"0.8301886792452831\n"
],
[
"## Part4 -- using tfidf vectorizer and Logistic Regression Algorithm.\n## Word Level tf idf vectorizer.\n\ntext = pd.concat([train.tweet, test.tweet])\nTfword_vectorizer = TfidfVectorizer(sublinear_tf=True,strip_accents='unicode',analyzer='word',ngram_range=(1, 3),max_features=10000).fit(text)\nword_train_vectorized = Tfword_vectorizer.transform(X_train)\nword_valid_vectorized = Tfword_vectorizer.transform(X_valid)\nword_test_vectorized = Tfword_vectorizer.transform(test.tweet)",
"_____no_output_____"
],
[
"## Character level tf idf vectoriser.\nTfchar_vectorizer = TfidfVectorizer(sublinear_tf=True,strip_accents='unicode',analyzer='char',ngram_range=(1, 15),max_features=50000).fit(text)\nchar_train_vectorized = Tfchar_vectorizer.transform(X_train)\nchar_valid_vectorized = Tfchar_vectorizer.transform(X_valid)\nchar_test_vectorized = Tfchar_vectorizer.transform(test.tweet)",
"_____no_output_____"
],
[
"## Horizontally stacking the tf idf vectorizers.\ntrain_features = hstack([char_train_vectorized, word_train_vectorized])\nvalid_features = hstack([char_valid_vectorized, word_valid_vectorized])\ntest_features = hstack([char_test_vectorized, word_test_vectorized])",
"_____no_output_____"
],
[
"model = LogisticRegression(max_iter=300,C=2.0,solver='sag')\nmodel.fit(train_features, y_train)\npredictions = model.predict(valid_features)\npred_y = model.predict(test_features)\nprint(f1_score(y_valid, predictions))",
"0.8364485981308412\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0553ba0bc91d8d1024c280ca5f3958e2c2853d1 | 87,881 | ipynb | Jupyter Notebook | examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb | tnakaicode/ChargedPaticle-LowEnergy | 47b751bcada2af7fc50cef587a48b1a3c12bcbba | [
"MIT"
] | 6 | 2019-04-14T06:19:40.000Z | 2021-09-14T13:46:26.000Z | examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb | tnakaicode/ChargedPaticle-LowEnergy | 47b751bcada2af7fc50cef587a48b1a3c12bcbba | [
"MIT"
] | 31 | 2018-03-02T12:05:20.000Z | 2019-02-20T09:29:08.000Z | examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb | tnakaicode/ChargedPaticle-LowEnergy | 47b751bcada2af7fc50cef587a48b1a3c12bcbba | [
"MIT"
] | 10 | 2017-12-21T15:16:55.000Z | 2020-10-31T23:59:50.000Z | 96.149891 | 5,396 | 0.812314 | [
[
[
"## Trajectory equations:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"from sympy import *\ninit_printing()",
"_____no_output_____"
],
[
"Bx, By, Bz, B = symbols(\"B_x, B_y, B_z, B\")\nx, y, z = symbols(\"x, y, z\" )\nx_0, y_0, z_0 = symbols(\"x_0, y_0, z_0\")\nvx, vy, vz, v = symbols(\"v_x, v_y, v_z, v\")\nvx_0, vy_0, vz_0 = symbols(\"v_x0, v_y0, v_z0\")\nt = symbols(\"t\")\nq, m = symbols(\"q, m\")\nc, eps0 = symbols(\"c, epsilon_0\")",
"_____no_output_____"
]
],
[
[
"The equation of motion:\n$$\n\\begin{gather*}\n m \\frac{d^2 \\vec{r} }{dt^2} = \\frac{q}{c} [ \\vec{v} \\vec{B} ] \n\\end{gather*}\n$$",
"_____no_output_____"
],
[
"For the case of a uniform magnetic field along the $z$-axis: \n$$ \\vec{B} = B_z = B, \\quad B_x = 0, \\quad B_y = 0 $$",
"_____no_output_____"
],
[
"In Cortesian coordinates:",
"_____no_output_____"
]
],
[
[
"eq_x = Eq( Derivative(x(t), t, 2), q / c / m * Bz * Derivative(y(t),t) )\neq_y = Eq( Derivative(y(t), t, 2), - q / c / m * Bz * Derivative(x(t),t) )\neq_z = Eq( Derivative(z(t), t, 2), 0 )\ndisplay( eq_x, eq_y, eq_z )",
"_____no_output_____"
]
],
[
[
"Motion is uniform along the $z$-axis:",
"_____no_output_____"
]
],
[
[
"z_eq = dsolve( eq_z, z(t) )\nvz_eq = Eq( z_eq.lhs.diff(t), z_eq.rhs.diff(t) )\ndisplay( z_eq, vz_eq )",
"_____no_output_____"
]
],
[
[
"The constants of integration can be found from the initial conditions $z(0) = z_0$ and $v_z(0) = v_{z0}$:",
"_____no_output_____"
]
],
[
[
"c1_c2_system = []\ninitial_cond_subs = [(t, 0), (z(0), z_0), (diff(z(t),t).subs(t,0), vz_0) ]\nc1_c2_system.append( z_eq.subs( initial_cond_subs ) )\nc1_c2_system.append( vz_eq.subs( initial_cond_subs ) )\n\nc1, c2 = symbols(\"C1, C2\")\nc1_c2 = solve( c1_c2_system, [c1, c2] )\nc1_c2",
"_____no_output_____"
]
],
[
[
"So that",
"_____no_output_____"
]
],
[
[
"z_sol = z_eq.subs( c1_c2 )\nvz_sol = vz_eq.subs( c1_c2 ).subs( [( diff(z(t),t), vz(t) ) ] )\ndisplay( z_sol, vz_sol )",
"_____no_output_____"
]
],
[
[
"For some reason I have not been able to solve the system of differential equations for $x$ and $y$ directly\nwith Sympy's `dsolve` function:",
"_____no_output_____"
]
],
[
[
"#dsolve( [eq_x, eq_y], [x(t),y(t)] )",
"_____no_output_____"
]
],
[
[
"It is necessary to resort to the manual solution. The method is to differentiate one of them over \ntime and substitute the other. This will result in oscillator-type second-order equations for $v_y$ and $v_x$. Their solution is known. Integrating one more time, it is possible to obtain laws of motion $x(t)$ and $y(t)$.",
"_____no_output_____"
]
],
[
[
"v_subs = [ (Derivative(x(t),t), vx(t)), (Derivative(y(t),t), vy(t)) ]\neq_vx = eq_x.subs( v_subs )\neq_vy = eq_y.subs( v_subs )\ndisplay( eq_vx, eq_vy )\n\neq_d2t_vx = Eq( diff(eq_vx.lhs,t), diff(eq_vx.rhs,t))\neq_d2t_vx = eq_d2t_vx.subs( [(eq_vy.lhs, eq_vy.rhs)] )\ndisplay( eq_d2t_vx )",
"_____no_output_____"
]
],
[
[
"The solution of the last equation is",
"_____no_output_____"
]
],
[
[
"C1, C2, Omega = symbols( \"C1, C2, Omega\" )\nvx_eq = Eq( vx(t), C1 * cos( Omega * t ) + C2 * sin( Omega * t ))\ndisplay( vx_eq )\nomega_eq = Eq( Omega, Bz * q / c / m )\ndisplay( omega_eq )",
"_____no_output_____"
]
],
[
[
"where $\\Omega$ is a cyclotron frequency.",
"_____no_output_____"
]
],
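  [
   [
    "# A quick numerical illustration (a sketch; the particle values below are assumptions for the\n# example, in Gaussian/CGS units): an electron in a 1 gauss field.\n# q = 4.803e-10 statC, m = 9.109e-28 g, c = 2.998e10 cm/s, Bz = 1 G\nelectron_subs = [(q, 4.803e-10), (m, 9.109e-28), (c, 2.998e10), (Bz, 1.0)]\nomega_eq.subs(electron_subs)  # Omega ~ 1.76e7 rad/s, i.e. a frequency of about 2.8 MHz",
    "_____no_output_____"
   ]
  ],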
[
[
"display( vx_eq )\n\nvy_eq = Eq( vy(t), solve( Eq( diff(vx_eq.rhs,t), eq_vx.rhs ), ( vy(t) ) )[0] )\nvy_eq = vy_eq.subs( [(Omega*c*m / Bz / q, omega_eq.rhs * c * m / Bz / q)]).simplify()\ndisplay( vy_eq )",
"_____no_output_____"
]
],
[
[
"For initial conditions $v_x(0) = v_{x0}, v_y(0) = v_{y0}$:",
"_____no_output_____"
]
],
[
[
"initial_cond_subs = [(t,0), (vx(0), vx_0), (vy(0), vy_0) ]\nvx0_eq = vx_eq.subs( initial_cond_subs )\nvy0_eq = vy_eq.subs( initial_cond_subs )\ndisplay( vx0_eq, vy0_eq )\n\nc1_c2 = solve( [vx0_eq, vy0_eq] )\nc1_c2_subs = [ (\"C1\", c1_c2[c1]), (\"C2\", c1_c2[c2]) ]\nvx_eq = vx_eq.subs( c1_c2_subs )\nvy_eq = vy_eq.subs( c1_c2_subs )\ndisplay( vx_eq, vy_eq )",
"_____no_output_____"
]
],
[
[
"These equations can be integrated to obtain the laws of motion:",
"_____no_output_____"
]
],
[
[
"x_eq = vx_eq.subs( vx(t), diff(x(t),t))\nx_eq = dsolve( x_eq )\ny_eq = vy_eq.subs( vy(t), diff(y(t),t))\ny_eq = dsolve( y_eq ).subs( C1, C2 )\ndisplay( x_eq, y_eq )",
"_____no_output_____"
]
],
[
[
"For nonzero $\\Omega$:",
"_____no_output_____"
]
],
[
[
"x_eq = x_eq.subs( [(Omega, 123)] ).subs( [(123, Omega)] ).subs( [(Rational(1,123), 1/Omega)] )\ny_eq = y_eq.subs( [(Omega, 123)] ).subs( [(123, Omega)] ).subs( [(Rational(1,123), 1/Omega)] )\ndisplay( x_eq, y_eq )",
"_____no_output_____"
]
],
[
[
"For initial conditions $x(0) = x_0, y(0) = y_0$:",
"_____no_output_____"
]
],
[
[
"initial_cond_subs = [(t,0), (x(0), x_0), (y(0), y_0) ]\nx0_eq = x_eq.subs( initial_cond_subs )\ny0_eq = y_eq.subs( initial_cond_subs )\ndisplay( x0_eq, y0_eq )\n\nc1_c2 = solve( [x0_eq, y0_eq] )\nc1_c2_subs = [ (\"C1\", c1_c2[0][c1]), (\"C2\", c1_c2[0][c2]) ]\nx_eq = x_eq.subs( c1_c2_subs )\ny_eq = y_eq.subs( c1_c2_subs )\ndisplay( x_eq, y_eq )",
"_____no_output_____"
],
[
"x_eq = x_eq.simplify()\ny_eq = y_eq.simplify()\nx_eq = x_eq.expand().collect(Omega)\ny_eq = y_eq.expand().collect(Omega)\ndisplay( x_eq, y_eq )",
"_____no_output_____"
]
],
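  [
   [
    "# Optional sanity check (a quick sketch using the symbols defined above): the derived x(t) and y(t)\n# should satisfy the equations of motion written in terms of Omega, i.e. x'' = Omega*y' and y'' = -Omega*x'.\ncheck_x = simplify( diff(x_eq.rhs, t, 2) - Omega * diff(y_eq.rhs, t) )\ncheck_y = simplify( diff(y_eq.rhs, t, 2) + Omega * diff(x_eq.rhs, t) )\ncheck_x, check_y  # both are expected to reduce to 0",
    "_____no_output_____"
   ]
  ],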
[
[
"Finally",
"_____no_output_____"
]
],
[
[
"display( x_eq, y_eq, z_sol )\ndisplay( vx_eq, vy_eq, vz_sol )\ndisplay( omega_eq )",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0554c40e8777d852dfb7b0c575f025b99d3eeca | 37,598 | ipynb | Jupyter Notebook | documents/Advanced/ManpowerPlanning/manpower_planning.ipynb | biancaitian/gurobi-official-examples | 175e1d0aa88dffde644fac188c48c7d3c185d007 | [
"Apache-2.0"
] | 4 | 2021-08-01T11:50:14.000Z | 2022-03-13T01:49:24.000Z | documents/Advanced/ManpowerPlanning/manpower_planning.ipynb | biancaitian/gurobi-official-examples | 175e1d0aa88dffde644fac188c48c7d3c185d007 | [
"Apache-2.0"
] | null | null | null | documents/Advanced/ManpowerPlanning/manpower_planning.ipynb | biancaitian/gurobi-official-examples | 175e1d0aa88dffde644fac188c48c7d3c185d007 | [
"Apache-2.0"
] | 4 | 2021-04-14T06:58:00.000Z | 2022-03-15T10:37:40.000Z | 37.56044 | 680 | 0.52806 | [
[
[
"# 人力规划\n\n等级:高级\n\n## 目的和先决条件\n\n此模型是人员编制问题的一个示例。在人员编制计划问题中,必须在招聘,培训,裁员(裁员)和安排工时方面做出选择。人员配备问题在制造业和服务业广泛存在。\n\n### What You Will Learn\n\nIn this example, we will model and solve a manpower planning problem. We have three types of workers with different skills levels. For each year in the planning horizon, the forecasted number of required workers with specific skills is given. It is possible to recruit new people, train workers to improve their skills, or shift them to a part-time working arrangement. The aim is to create an optimal multi-period operation plan that achieves one of the following two objectives: minimizing the total number of layoffs over the whole horizon or minimizing total costs.\n\nMore information on this type of model can be found in example #5 of the fifth edition of Model Building in Mathematical Programming, by H. Paul Williams on pages 256-257 and 303-304.\n\nThis modeling example is at the advanced level, where we assume that you know Python and the Gurobi Python API and that you have advanced knowledge of building mathematical optimization models. Typically, the objective function and/or constraints of these examples are complex or require advanced features of the Gurobi Python API.\n\n**Note:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=CommercialDataScience) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=AcademicDataScience) as an *academic user*.\n\n---\n## Problem Description\n\nA company is changing how it runs its business, and therefore its staffing needs are expected to change.\n\nThrough the purchase of new machinery, it is expected that there will be less need for unskilled labor and more need for skilled and semi-skilled labor. In addition, a lower sales forecast — driven by an economic slowdown that is predicted to happen in the next year — is expected to further reduce labor needs across all categories.\n\nThe forecast for labor needs over the next three years is as follows:\n\n| <i></i> | Unskilled | Semi-skilled | Skilled |\n| --- | --- | --- | --- |\n| Current Strength | 2000 | 1500 | 1000 |\n| Year 1 | 1000 | 1400 | 1000 |\n| Year 2 | 500 | 2000 | 1500 |\n| Year 3 | 0 | 2500 | 2000 |\n\nThe company needs to determine the following for each of the next three years:\n\n- Recruitment\n- Retraining\n- Layoffs (redundancy)\n- Part-time vs. full-time employees\n\nIt is important to note that labor is subject to a certain level of natural attrition each year. The rate of attrition is relatively high in the first year after a new employee is hired and relatively low in subsequent years. 
The expected attrition rates are as follows:\n\n| <i></i> | Unskilled (%)| Semi-skilled (%) | Skilled (%) |\n| --- | --- | --- | --- |\n| $< 1$ year of service | 25 | 20 | 10 |\n| $\\geq 1$ year of service | 10 | 5 | 5 |\n\nAll of the current workers have been with the company for at least one year.\n\n### Recruitment\n\nEach year, it is possible to hire a limited number of employees in each classification from outside the company as follows:\n\n| Unskilled | Semi-skilled | Skilled |\n| --- | --- | --- |\n| 500 | 800 | 500 |\n\n### Retraining\n\nEach year, it is possible to train up to 200 unskilled workers to make them into semi-skilled workers. This training costs the company $\\$400$ per worker.\n\nIn addition, it is possible train semi-skilled workers to make them into skilled workers. However, this number can not exceed 25% of the current skilled labor force and this training costs $\\$500$ per worker.\n\nLastly, downgrading workers to a lower skill level can be done. However, 50% of the downgraded workers will leave the company, increasing the natural attrition rate described above.\n\n### Layoffs\n\nEach laid-off worker is entitled to a separation payment at the rate of $\\$200$ per unskilled worker and $\\$500$ per semi-skilled or skilled worker.\n\n### Excess Employees\n\nIt is possible to have workers in excess of the actual number needed, up to 150 workers in total in any given year, but this will result in the following additional cost per excess employee per year.\n\n| Unskilled | Semi-skilled | Skilled |\n| --- | --- | --- |\n| $\\$1500$ | $\\$2000$ | $\\$3000$ |\n\n### Part-time Workers\n\nUp to 50 employees of each skill level can be assigned to part-time work. The cost of doing so (per employee, per year) is as follows:\n\n| Unskilled | Semi-skilled | Skilled |\n| --- | --- | --- |\n| $\\$500$ | $\\$400$ | $\\$400$ |\n\n**Note:** A part-time employee is half as productive as a full-time employee.\n\nIf the company’s objective is to minimize layoffs, what plan should they adopt in order to do this?\n\nIf their objective is to minimize costs, how much could they further reduce costs?\n\nHow can they determine the annual savings possible across each job?\n\n---\n## Model Formulation\n\n### Sets and Indices\n\n$t \\in \\text{Years}=\\{1,2,3\\}$: Set of years.\n\n$s \\in \\text{Skills}=\\{s_1: \\text{unskilled},s_2: \\text{semi_skilled},s_3: \\text{skilled}\\}$: Set of skills.\n\n### Parameters\n\n$\\text{rookie_attrition} \\in [0,1] \\subset \\mathbb{R}^+$: Percentage of workers who leave within the first year of service.\n\n$\\text{veteran_attrition} \\in [0,1] \\subset \\mathbb{R}^+$: Percentage of workers who leave after the first year of service.\n\n$\\text{demoted_attrition} \\in [0,1] \\subset \\mathbb{R}^+$: Percentage of workers who leave the company after a demotion.\n\n$\\text{parttime_cap} \\in [0,1] \\subset \\mathbb{R}^+$: Productivity of part-time workers with respect to full-time workers.\n\n$\\text{max_train_unskilled} \\in \\mathbb{N}$: Maximum number of unskilled workers that can be trained on any given year.\n\n$\\text{max_train_semiskilled} \\in [0,1] \\subset \\mathbb{R}^+$: Maximum proportion of semi-skilled workers (w.r.t. 
skilled ones) that can be trained on any given year.\n\n$\\text{max_parttime} \\in \\mathbb{N}$: Maximum number of part-time workers of each skill at any given year.\n\n$\\text{max_overmanning} \\in \\mathbb{N}$: Maximum number of overmanned workers at any given year.\n\n$\\text{max_hiring}_s \\in \\mathbb{N}$: Maximum number of workers of skill $s$ that can be hired any given year.\n\n$\\text{training_cost}_s \\in \\mathbb{R}^+$: Cost for training a worker of skill $s$ to the next level.\n\n$\\text{layoff_cost}_s \\in \\mathbb{R}^+$: Cost for laying off a worker of skill $s$.\n\n$\\text{parttime_cost}_s \\in \\mathbb{R}^+$: Cost for assigning a worker of skill $s$ to part-time work.\n\n$\\text{overmanning_cost}_s \\in \\mathbb{R}^+$: Yearly cost for having excess manpower of skill $s$.\n\n$\\text{curr_workforce}_s \\in \\mathbb{N}$: Current manpower of skill $s$ at the beginning of the planning horizon.\n\n$\\text{demand}_{t,s} \\in \\mathbb{N}$: Required manpower of skill $s$ in year $t$.\n\n\n### Decision Variables\n\n$\\text{hire}_{t,s} \\in [0,\\text{max_hiring}_s] \\subset \\mathbb{R}^+$: Number of workers of skill $s$ to hire in year $t$.\n\n$\\text{part_time}_{t,s} \\in [0,\\text{max_parttime}] \\subset \\mathbb{R}^+$: Number of part-time workers of skill $s$ working in year $t$.\n\n$\\text{workforce}_{t,s} \\in \\mathbb{R}^+$: Number of workers of skill $s$ that are available in year $t$.\n\n$\\text{layoff}_{t,s} \\in \\mathbb{R}^+$: Number of workers of skill $s$ that are laid off in year $t$.\n\n$\\text{excess}_{t,s} \\in \\mathbb{R}^+$: Number of workers of skill $s$ that are overmanned in year $t$.\n\n$\\text{train}_{t,s,s'} \\in \\mathbb{R}^+$: Number of workers of skill $s$ to retrain to skill $s'$ in year $t$.\n\n### Objective Function\n\n- **Layoffs:** Minimize the total layoffs during the planning horizon.\n\n\\begin{equation}\n\\text{Minimize} \\quad Z = \\sum_{t \\in \\text{Years}}\\sum_{s \\in \\text{Skills}}{\\text{layoff}_{t,s}}\n\\end{equation}\n\n- **Cost:** Minimize the total cost (in USD) incurred by training, overmanning, part-time workers, and layoffs in the planning horizon.\n\n\\begin{equation}\n\\text{Minimize} \\quad W = \\sum_{t \\in \\text{Years}}{\\{\\text{training_cost}_{s_1}*\\text{train}_{t,s1,s2} + \\text{training_cost}_{s_2}*\\text{train}_{t,s2,s3}\\}}\n\\end{equation}\n\n\\begin{equation}\n+ \\sum_{t \\in \\text{Years}}\\sum_{s \\in \\text{Skills}}{\\{\\text{parttime_cost}*\\text{part_time}_{t,s} + \\text{layoff_cost}_s*\\text{layoff}_{t,s} + \\text{overmanning_cost}_s*\\text{excess}_{t,s}\\}}\n\\end{equation}\n\n### Constraints\n\n- **Initial Balance:** Workforce $s$ available in year $t=1$ is equal to the workforce of the previous year, recent hires, promoted and demoted workers (after accounting for attrition), minus layoffs and transferred workers.\n\n\\begin{equation}\n\\text{workforce}_{1,s} = (1-\\text{veteran_attrition}_s)*\\text{curr_workforce} + (1-\\text{rookie_attrition}_s)*\\text{hire}_{1,s} \n\\end{equation}\n\n\\begin{equation}\n+ \\sum_{s' \\in \\text{Skills} | s' < s}{\\{(1-\\text{veteran_attrition})*\\text{train}_{1,s',s} - \\text{train}_{1,s,s'}\\}} \n\\end{equation}\n\n\\begin{equation}\n+ \\sum_{s' \\in \\text{Skills} | s' > s}{\\{(1-\\text{demoted_attrition})*\\text{train}_{1,s',s} - \\text{train}_{1,s,s'}\\}} - \\text{layoff}_{1,s} \\qquad \\forall s \\in \\text{Skills}\n\\end{equation}\n\n\n- **Balance:** Workforce $s$ available in year $t > 1$ is equal to the workforce of the previous year, recent hires, promoted and 
demoted workers (after accounting for attrition), minus layoffs and transferred workers.\n\n\\begin{equation}\n\\text{workforce}_{t,s} = (1-\\text{veteran_attrition}_s)*\\text{workforce}_{t-1,s} + (1-\\text{rookie_attrition}_s)*\\text{hire}_{t,s} \n\\end{equation}\n\n\\begin{equation}\n+ \\sum_{s' \\in \\text{Skills} | s' < s}{\\{(1-\\text{veteran_attrition})*\\text{train}_{t,s',s} - \\text{train}_{t,s,s'}\\}}\n\\end{equation}\n\n\\begin{equation}\n+ \\sum_{s' \\in \\text{Skills} | s' > s}{\\{(1-\\text{demotion_attrition})*\\text{train}_{t,s',s} - \\text{train}_{t,s,s'}\\}} - \\text{layoff}_{t,s} \\quad \\forall (t > 1,s) \\in \\text{Years} \\times \\text{Skills}\n\\end{equation}\n\n- **Unskilled Training:** Unskilled workers trained in year $t$ cannot exceed the maximum allowance. Unskilled workers cannot be immediately transformed into skilled workers.\n\n\\begin{equation}\n\\text{train}_{t,s_1,s_2} \\leq 200 \\quad \\forall t \\in \\text{Years}\n\\end{equation}\n\n\\begin{equation}\n\\text{train}_{t,s_1,s_3} = 0 \\quad \\forall t \\in \\text{Years}\n\\end{equation}\n\n- **Semi-skilled Training:** Semi-skilled workers trained in year $t$ cannot exceed the maximum allowance.\n\n\\begin{equation}\n\\text{train}_{t,s_2,s_3} \\leq 0.25*\\text{available}_{t,s_3} \\quad \\forall t \\in \\text{Years}\n\\end{equation}\n\n- **Overmanning:** Excess workers in year $t$ cannot exceed the maximum allowance.\n\n\\begin{equation}\n\\sum_{s \\in \\text{Skills}}{\\text{excess}_{t,s}} \\leq \\text{max_overmanning} \\quad \\forall t \\in \\text{Years}\n\\end{equation}\n\n- **Demand:** Workforce $s$ available in year $t$ equals the required number of workers plus the excess workers and the part-time workers.\n\n\\begin{equation}\n\\text{available}_{t,s} = \\text{demand}_{t,s} + \\text{excess}_{t,s} + \\text{parttime_cap}*\\text{part_time}_{t,s} \\quad \\forall (t,s) \\in \\text{Years} \\times \\text{Skills}\n\\end{equation}\n\n---\n## Python Implementation\n\nWe import the Gurobi Python Module and other Python libraries.",
"_____no_output_____"
]
],
[
[
"import gurobipy as gp\nimport numpy as np\nimport pandas as pd\nfrom gurobipy import GRB\n\n# tested with Python 3.7.0 & Gurobi 9.0",
"_____no_output_____"
]
],
[
[
"## Input Data\nWe define all the input data of the model.",
"_____no_output_____"
]
],
[
[
"# Parameters\n\nyears = [1, 2, 3]\nskills = ['s1', 's2', 's3']\n\ncurr_workforce = {'s1': 2000, 's2': 1500, 's3': 1000}\ndemand = {\n (1, 's1'): 1000,\n (1, 's2'): 1400,\n (1, 's3'): 1000,\n (2, 's1'): 500,\n (2, 's2'): 2000,\n (2, 's3'): 1500,\n (3, 's1'): 0,\n (3, 's2'): 2500,\n (3, 's3'): 2000\n}\nrookie_attrition = {'s1': 0.25, 's2': 0.20, 's3': 0.10}\nveteran_attrition = {'s1': 0.10, 's2': 0.05, 's3': 0.05}\ndemoted_attrition = 0.50\nmax_hiring = {\n (1, 's1'): 500,\n (1, 's2'): 800,\n (1, 's3'): 500,\n (2, 's1'): 500,\n (2, 's2'): 800,\n (2, 's3'): 500,\n (3, 's1'): 500,\n (3, 's2'): 800,\n (3, 's3'): 500\n}\nmax_overmanning = 150\nmax_parttime = 50\nparttime_cap = 0.50\nmax_train_unskilled = 200\nmax_train_semiskilled = 0.25\n\ntraining_cost = {'s1': 400, 's2': 500}\nlayoff_cost = {'s1': 200, 's2': 500, 's3': 500}\nparttime_cost = {'s1': 500, 's2': 400, 's3': 400}\novermanning_cost = {'s1': 1500, 's2': 2000, 's3': 3000}",
"_____no_output_____"
]
],
[
[
"## Model Deployment\nWe create a model and the variables. For each of the three skill levels and for each year, we will create variables for the number of workers that get recruited, transferred into part-time work, are available as workers, are redundant, or are overmanned. For each pair of skill levels and each year, we have a variable for the amount of workers that get retrained to a higher/lower skill level. The number of people who are part-time and can be recruited is limited.",
"_____no_output_____"
]
],
[
[
"manpower = gp.Model('Manpower planning')\n\nhire = manpower.addVars(years, skills, ub=max_hiring, name=\"Hire\")\npart_time = manpower.addVars(years, skills, ub=max_parttime,\n name=\"Part_time\")\nworkforce = manpower.addVars(years, skills, name=\"Available\")\nlayoff = manpower.addVars(years, skills, name=\"Layoff\")\nexcess = manpower.addVars(years, skills, name=\"Overmanned\")\ntrain = manpower.addVars(years, skills, skills, name=\"Train\")",
"Using license file c:\\gurobi\\gurobi.lic\nSet parameter TokenServer to value SANTOS-SURFACE-\n"
]
],
[
[
"Next, we insert the constraints. The balance constraints ensure that per skill level and per year the workers who are currently required (LaborForce) and the people who get laid off, and the people who get retrained to the current level, minus the people who get retrained from the current level to a different skill, equals the LaborForce of the last year (or the CurrentStrength in the first year) plus the recruited people. A certain amount of people leave the company each year, so this is also considered to be a factor. This constraint describes the change in the total amount of employed workers.",
"_____no_output_____"
]
],
[
[
"#1.1 & 1.2 Balance\n\nBalance = manpower.addConstrs(\n (workforce[year, level] == (1-veteran_attrition[level])*(curr_workforce[level] if year == 1 else workforce[year-1, level])\n + (1-rookie_attrition[level])*hire[year, level] + gp.quicksum((1- veteran_attrition[level])* train[year, level2, level]\n -train[year, level, level2] for level2 in skills if level2 < level)\n + gp.quicksum((1- demoted_attrition)* train[year, level2, level] -train[year, level, level2] for level2 in skills if level2 > level)\n - layoff[year, level] for year in years for level in skills), \"Balance\")",
"_____no_output_____"
]
],
[
[
"The Unskilled training constraints force that per year only 200 workers can be retrained from Unskilled to Semi-skilled due to capacity limitations. Also, no one can be trained in one year from Unskilled to Skilled.",
"_____no_output_____"
]
],
[
[
"#2.1 & 2.2 Unskilled training\nUnskilledTrain1 = manpower.addConstrs((train[year, 's1', 's2'] <= max_train_unskilled for year in years), \"Unskilled_training1\")\nUnskilledTrain2 = manpower.addConstrs((train[year, 's1', 's3'] == 0 for year in years), \"Unskilled_training2\")",
"_____no_output_____"
]
],
[
[
"The Semi-skilled training states that the retraining of Semi-skilled workers to skilled workers is limited to no more than one quarter of the skilled labor force at this time. This is due to capacity limitations.",
"_____no_output_____"
]
],
[
[
"#3. Semi-skilled training\n\nSemiskilledTrain = manpower.addConstrs((train[year,'s2', 's3'] <= max_train_semiskilled * workforce[year,'s3'] for year in years), \"Semiskilled_training\")",
"_____no_output_____"
]
],
[
[
"The overmanning constraints ensure that the total overmanning over all skill levels in one year is no more than 150.",
"_____no_output_____"
]
],
[
[
"#4. Overmanning\nOvermanning = manpower.addConstrs((excess.sum(year, '*') <= max_overmanning for year in years), \"Overmanning\")",
"_____no_output_____"
]
],
[
[
"The demand constraints ensure that the number of workers of each level and year equals the required number of workers plus the Overmanned workers and the number of workers who are working part-time.",
"_____no_output_____"
]
],
[
[
"#5. Demand\nDemand = manpower.addConstrs((workforce[year, level] ==\n demand[year,level] + excess[year, level] + parttime_cap * part_time[year, level]\n for year in years for level in skills), \"Requirements\")",
"_____no_output_____"
]
],
[
[
"The first objective is to minimize the total number of laid off workers. This can be stated as:",
"_____no_output_____"
]
],
[
[
"#0.1 Objective Function: Minimize layoffs\nobj1 = layoff.sum()\nmanpower.setObjective(obj1, GRB.MINIMIZE)",
"_____no_output_____"
]
],
[
[
"The second alternative objective is to minimize the total cost of all employed workers and costs for retraining:\n\n```\nobj2 = quicksum((training_cost[level]*train[year, level, skills[skills.index(level)+1]] if level < 's3' else 0)\n + layoff_cost[level]*layoff[year, level]\n + parttime_cost[level]*part_time[year, level]\n + overmanning_cost[level] * excess[year, level] for year in years for level in skills)\n```\n\nNext we start the optimization with the objective function of minimizing layoffs, and Gurobi finds the optimal solution.",
"_____no_output_____"
]
],
[
[
"manpower.optimize()",
"Gurobi Optimizer version 9.0.0 build v9.0.0rc2 (win64)\nOptimize a model with 30 rows, 72 columns and 117 nonzeros\nModel fingerprint: 0x06ec5b66\nCoefficient statistics:\n Matrix range [3e-01, 1e+00]\n Objective range [1e+00, 1e+00]\n Bounds range [5e+01, 8e+02]\n RHS range [2e+02, 3e+03]\nPresolve removed 18 rows and 44 columns\nPresolve time: 0.01s\nPresolved: 12 rows, 28 columns, 56 nonzeros\n\nIteration Objective Primal Inf. Dual Inf. Time\n 0 8.4000000e+02 6.484375e+01 0.000000e+00 0s\n 8 8.4179688e+02 0.000000e+00 0.000000e+00 0s\n\nSolved in 8 iterations and 0.01 seconds\nOptimal objective 8.417968750e+02\n"
]
],
[
[
"## Analysis\n\nThe minimum number of layoffs is 841.80. The optimal policies to achieve this minimum number of layoffs are given below.\n\n\n### Hiring Plan\nThis plan determines the number of new workers to hire at each year of the planning horizon (rows) and each skill level (columns). For example, at year 2 we are going to hire 649.3 Semi-skilled workers.",
"_____no_output_____"
]
],
[
[
"rows = years.copy()\ncolumns = skills.copy()\nhire_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)\n\nfor year, level in hire.keys():\n if (abs(hire[year, level].x) > 1e-6):\n hire_plan.loc[year, level] = np.round(hire[year, level].x, 1)\nhire_plan",
"_____no_output_____"
]
],
[
[
"### Training and Demotions Plan\nThis plan defines the number of workers to promote by training (or demote) at each year of the planning horizon. For example, in year 1 we are going to demote 168.4 skilled (s3) workers to the level of semi-skilled (s2).",
"_____no_output_____"
]
],
[
[
"rows = years.copy()\ncolumns = ['{0} to {1}'.format(level1, level2) for level1 in skills for level2 in skills if level1 != level2]\ntrain_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)\n\nfor year, level1, level2 in train.keys():\n col = '{0} to {1}'.format(level1, level2)\n if (abs(train[year, level1, level2].x) > 1e-6):\n train_plan.loc[year, col] = np.round(train[year, level1, level2].x, 1)\ntrain_plan",
"_____no_output_____"
]
],
[
[
"### Layoffs Plan\n\nThis plan determines the number of workers to layoff of each skill level at each year of the planning horizon. For example, we are going to layoff 232.5 Unskilled workers in year 3.",
"_____no_output_____"
]
],
[
[
"rows = years.copy()\ncolumns = skills.copy()\nlayoff_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)\n\nfor year, level in layoff.keys():\n if (abs(layoff[year, level].x) > 1e-6):\n layoff_plan.loc[year, level] = np.round(layoff[year, level].x, 1)\nlayoff_plan",
"_____no_output_____"
]
],
[
[
"### Part-time Plan\n\nThis plan defines the number of part-time workers of each skill level working at each year of the planning horizon. For example, in year 1, we have 50 part-time skilled workers.",
"_____no_output_____"
]
],
[
[
"rows = years.copy()\ncolumns = skills.copy()\nparttime_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)\n\nfor year, level in part_time.keys():\n if (abs(part_time[year, level].x) > 1e-6):\n parttime_plan.loc[year, level] = np.round(part_time[year, level].x, 1)\nparttime_plan",
"_____no_output_____"
]
],
[
[
"### Overmanning Plan\n\nThis plan determines the number of excess workers of each skill level working at each year of the planning horizon. For example, we have 150 Unskilled excess workers in year 3.",
"_____no_output_____"
]
],
[
[
"rows = years.copy()\ncolumns = skills.copy()\nexcess_plan = pd.DataFrame(columns=columns, index=rows, data=0.0)\n\nfor year, level in excess.keys():\n if (abs(excess[year, level].x) > 1e-6):\n excess_plan.loc[year, level] = np.round(excess[year, level].x, 1)\nexcess_plan",
"_____no_output_____"
]
],
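  [
   [
    "# A sketch of how the alternative cost objective described earlier could be activated and\n# re-optimized (not part of the run above; reuses the model, variables and cost data defined\n# in this notebook). The layoff count it prints is what the discussion below refers to.\nobj2 = gp.quicksum(\n    (training_cost[level]*train[year, level, skills[skills.index(level)+1]] if level < 's3' else 0)\n    + layoff_cost[level]*layoff[year, level]\n    + parttime_cost[level]*part_time[year, level]\n    + overmanning_cost[level]*excess[year, level]\n    for year in years for level in skills)\nmanpower.setObjective(obj2, GRB.MINIMIZE)\nmanpower.optimize()\nprint('Total layoffs when minimizing cost:', round(layoff.sum().getValue(), 1))",
    "_____no_output_____"
   ]
  ],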
[
[
"By minimizing the cost instead, we could implement policies that would cost $\\$498,677.29$ over the three-year period and result in 1,423.7 layoffs. Alternative optimal solutions could be considered to reduce layoffs without increasing cost. If we minimize costs instead of layoffs, we can save $\\$942,712.51$ at the expense of 581.9 additional layoffs. Thus, the cost of saving each job, when minimizing layoffs, could be regarded as $\\$1,620.06$.\n\n**Note:** If you want to write your solution to a file, rather than print it to the terminal, you can use the model.write() command. An example implementation is:\n\n`manpower.write(\"manpower-planning-output.sol\")`\n\n---\n## References\n\nH. Paul Williams, Model Building in Mathematical Programming, fifth edition.\n\nCopyright © 2020 Gurobi Optimization, LLC",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d05557f5375b83892ec965fbc3db5b1fbd76d970 | 88,833 | ipynb | Jupyter Notebook | 02.2.LinearRegression-sklearn.ipynb | LossJ/Statistical-Machine-Learning | c70fd82ee287f4902d8607ec459e52b0a301d6a2 | [
"MIT"
] | null | null | null | 02.2.LinearRegression-sklearn.ipynb | LossJ/Statistical-Machine-Learning | c70fd82ee287f4902d8607ec459e52b0a301d6a2 | [
"MIT"
] | 1 | 2020-09-26T07:57:23.000Z | 2020-09-26T07:57:23.000Z | 02.2.LinearRegression-sklearn.ipynb | LossJ/Statistical-Machine-Learning | c70fd82ee287f4902d8607ec459e52b0a301d6a2 | [
"MIT"
] | null | null | null | 55.520625 | 19,180 | 0.756183 | [
[
[
"import os, sys\n\nfrom LossJLearn.utils.plot import show_prediction_face_comparison, show_linear_point, show_regressor_linear\nfrom LossJLearn.datasets import load_linear_data\n\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport sklearn\nfrom sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, SGDRegressor\nfrom sklearn.preprocessing import PolynomialFeatures, StandardScaler, Normalizer\nfrom sklearn.datasets import fetch_olivetti_faces, fetch_california_housing, load_diabetes\nfrom sklearn.model_selection import train_test_split\n\nprint(\"python version: \", sys.version_info)\nprint(sklearn.__name__, sklearn.__version__)",
"python version: sys.version_info(major=3, minor=7, micro=2, releaselevel='final', serial=0)\nsklearn 0.23.2\n"
]
],
[
[
"## 1. 基础回归",
"_____no_output_____"
],
[
"### 1.1 线性回归",
"_____no_output_____"
],
[
"#### 1.1.1 sklearn.linear_model.LinearRegression",
"_____no_output_____"
],
[
"https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression",
"_____no_output_____"
]
],
[
[
"X_data, y_data = load_linear_data(point_count=500, max_=10, w=3.2412, b=-5.2941, random_state=10834)\nX_train, X_test, y_train, y_test = train_test_split(X_data, y_data, random_state=19332)\n\nrgs = LinearRegression(fit_intercept=True, normalize=False, copy_X=True, n_jobs=None)\nrgs.fit(X_train, y_train)\nrgs.coef_, rgs.intercept_",
"_____no_output_____"
],
[
"rgs.score(X_test, y_test)",
"_____no_output_____"
],
[
"show_regressor_linear(X_test, y_test, rgs.coef_, rgs.intercept_)",
"_____no_output_____"
]
],
[
[
"##### 正规化Normalizer",
"_____no_output_____"
],
[
"每个样本求范数,再用每个特征除以范数",
"_____no_output_____"
]
],
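  [
   [
    "# A small illustrative check of the statement above (a sketch; reuses X_train and the\n# imports from the first cell): Normalizer(norm=\"l2\") divides each sample (row) by its own l2 norm.\nnorm_demo = Normalizer(norm=\"l2\").fit_transform(X_train[:3])\nmanual_demo = X_train[:3] / np.linalg.norm(X_train[:3], ord=2, axis=1, keepdims=True)\nnp.allclose(norm_demo, manual_demo)  # expected: True",
    "_____no_output_____"
   ]
  ],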
[
[
"norm = Normalizer(norm=\"l2\", copy=True)\nX_train_norm = norm.fit_transform(X_train)\nX_test_norm = norm.transform(X_test)",
"_____no_output_____"
],
[
"rgs = LinearRegression()\nrgs.fit(X_train_norm, y_train)\nrgs.coef_, rgs.intercept_",
"_____no_output_____"
],
[
"rgs.score(X_test_norm, y_test)",
"_____no_output_____"
],
[
"X_train_norm[:10], X_test_norm[:10]",
"_____no_output_____"
],
[
"X_train[:5]",
"_____no_output_____"
],
[
"rgs = LinearRegression(fit_intercept=True, \n normalize=True, # bool. fit_intercept为True才生效。 如果为True,则将在回归之前通过减去均值并除以12范数来对回归变量X进行归一化。\n copy_X=False, \n n_jobs=None)\nrgs.fit(X_train, y_train)\nX_train[:5]",
"_____no_output_____"
],
[
"X_test[:5]",
"_____no_output_____"
],
[
"rgs.score(X_test, y_test)",
"_____no_output_____"
],
[
"X_test[:5]",
"_____no_output_____"
],
[
"rgs.coef_, rgs.intercept_",
"_____no_output_____"
],
[
"%%timeit\nrgs = LinearRegression(n_jobs=2)\nrgs.fit(X_train, y_train)",
"327 µs ± 20 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
],
[
"%%timeit\nrgs = LinearRegression(n_jobs=-1)\nrgs.fit(X_train, y_train)",
"354 µs ± 66.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
],
[
"%%timeit\nrgs = LinearRegression(n_jobs=None)\nrgs.fit(X_train, y_train)",
"376 µs ± 35.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
]
],
[
[
"#### 1.1.2 sklearn.linear_model.SGDRegressor",
"_____no_output_____"
],
[
"https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDRegressor.html#sklearn.linear_model.SGDRegressor",
"_____no_output_____"
]
],
[
[
"X_data, y_data = load_linear_data(point_count=500, max_=10, w=3.2412, b=-5.2941, random_state=10834)\nX_train, X_test, y_train, y_test = train_test_split(X_data, y_data, random_state=19332)",
"_____no_output_____"
],
[
"rgs = SGDRegressor(random_state=10190)\nrgs.fit(X_train, y_train)\nrgs.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"##### 标准化StandardScaler",
"_____no_output_____"
],
[
"https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler",
"_____no_output_____"
],
[
"z = (x - u) / s, u是均值, s是标准差",
"_____no_output_____"
]
],
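  [
   [
    "# A small illustrative check of the formula above (a sketch; reuses X_train and the imports\n# from the first cell): u and s are computed per feature column, with the population std (ddof=0).\nscaler_demo = StandardScaler().fit(X_train)\nz_manual = (X_train - X_train.mean(axis=0)) / X_train.std(axis=0)\nnp.allclose(scaler_demo.transform(X_train), z_manual)  # expected: True",
    "_____no_output_____"
   ]
  ],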
[
[
"scaler = StandardScaler(copy=True, with_mean=True, with_std=True)\nX_train_scaler = scaler.fit_transform(X_train)\nX_test_scaler = scaler.transform(X_test)",
"_____no_output_____"
],
[
"scaler.mean_, scaler.scale_",
"_____no_output_____"
],
[
"rgs = SGDRegressor(\n loss='squared_loss', # ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’\n penalty='l2', # 惩罚项(正则项)\n alpha=0.0001, # 正则系数\n fit_intercept=True, \n max_iter=100,\n tol=0.001,\n shuffle=True,\n verbose=0,\n epsilon=0.1,\n random_state=10190, \n learning_rate='invscaling', \n eta0=0.01,\n power_t=0.25,\n early_stopping=True,\n validation_fraction=0.1,\n n_iter_no_change=5,\n warm_start=False,\n average=False\n)\nrgs.fit(X_train_scaler, y_train)\nrgs.coef_, rgs.intercept_",
"_____no_output_____"
],
[
"rgs.score(X_test_scaler, y_test)",
"_____no_output_____"
],
[
"show_regressor_linear(X_test_scaler, y_test, pred_coef=rgs.coef_, pred_intercept=rgs.intercept_)",
"_____no_output_____"
]
],
[
[
"### 1.2 多项式回归",
"_____no_output_____"
]
],
[
[
"def load_data_from_func(func=lambda X_data: 0.1383 * np.square(X_data) - 1.2193 * X_data + 2.4096,\n x_min=0, x_max=10, n_samples=500, loc=0, scale=1, random_state=None):\n if random_state is not None and isinstance(random_state, int):\n np.random.seed(random_state)\n x = np.random.uniform(x_min, x_max, n_samples)\n y = func(x)\n noise = np.random.normal(loc=loc, scale=scale, size=n_samples)\n y += noise\n return x.reshape([-1, 1]), y\n\nX_data, y_data = load_data_from_func(n_samples=500, random_state=10392)",
"_____no_output_____"
]
],
[
[
"https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html#sklearn.preprocessing.PolynomialFeatures/",
"_____no_output_____"
]
],
[
[
"X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, random_state=10319)\npoly = PolynomialFeatures() # [1, a, b, a^2, ab, b^2]\nX_train_poly = poly.fit_transform(X_train)\nX_test_poly = poly.transform(X_test)\n\nX_train_poly.shape",
"_____no_output_____"
],
[
"rgs = LinearRegression()\nrgs.fit(X_train_poly, y_train)\nrgs.score(X_test_poly, y_test)",
"_____no_output_____"
],
[
"y_pred = rgs.predict(X_test_poly)\n\ndef show_regression_line(X_data, y_data, y_pred):\n plt.figure(figsize=[10, 5])\n plt.xlabel(\"x\")\n plt.ylabel(\"y\")\n if X_data.ndim == 2:\n X_data = X_data.reshape(-1)\n plt.scatter(X_data, y_data)\n idx = np.argsort(X_data)\n X_data = X_data[idx]\n y_pred = y_pred[idx]\n plt.plot(X_data, y_pred, color=\"darkorange\")\n plt.show()\n \nshow_regression_line(X_test, y_test, y_pred)",
"_____no_output_____"
]
],
[
[
"## 2. 加利福尼亚房价数据集",
"_____no_output_____"
]
],
[
[
"df = fetch_california_housing(data_home=\"./data\", as_frame=True)",
"_____no_output_____"
],
[
"X_data = df['data']",
"_____no_output_____"
],
[
"X_data.describe()",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X_data, df.target, random_state=1, shuffle=True)",
"_____no_output_____"
]
],
[
[
"### 2.1 线性回归",
"_____no_output_____"
]
],
[
[
"rgs = LinearRegression()\nrgs.fit(X_train, y_train)",
"_____no_output_____"
],
[
"rgs.score(X_test, y_test)",
"_____no_output_____"
],
[
"scaler = StandardScaler()\nX_train_scaler = scaler.fit_transform(X_train)\nX_test_scaler = scaler.transform(X_test)\nrgs = LinearRegression()\nrgs.fit(X_train_scaler, y_train)\nrgs.score(X_test_scaler, y_test)",
"_____no_output_____"
]
],
[
[
"### 2.2 岭回归",
"_____no_output_____"
],
[
"https://scikit-learn.org/stable/modules/linear_model.html#ridge-regression-and-classification \nhttps://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html#sklearn.linear_model.Ridge ",
"_____no_output_____"
]
],
[
[
"rgs = Ridge(alpha=1.0, solver=\"auto\")\nrgs.fit(X_train, y_train)\nrgs.score(X_test, y_test)",
"_____no_output_____"
],
[
"rgs.coef_",
"_____no_output_____"
]
],
[
[
"https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html#sklearn.linear_model.RidgeCV",
"_____no_output_____"
],
[
"#### 2.2.1 交叉验证",
"_____no_output_____"
]
],
[
[
"rgs = RidgeCV(\n alphas=(0.001, 0.01, 0.1, 1.0, 10.0),\n fit_intercept=True,\n normalize= False,\n scoring=None, # 如果为None,则当cv为'auto'或为None时为负均方误差,否则为r2得分。scorer(estimator, X, y)\n cv=None, # int, cross-validation generator or an iterable, default=None\n gcv_mode='auto', # {‘auto’, ‘svd’, eigen’}, default=’auto’\n store_cv_values=None, # bool, 是否将与每个alpha对应的交叉验证值存储在cv_values_属性中, 仅cv=None有效\n)\nrgs.fit(X_train, y_train)",
"_____no_output_____"
],
[
"rgs.best_score_",
"_____no_output_____"
],
[
"rgs.score(X_test, y_test)",
"_____no_output_____"
],
[
"rgs = RidgeCV(\n alphas=(0.001, 0.01, 0.1, 1.0, 10.0),\n fit_intercept=True,\n normalize= False,\n scoring=None, # 如果为None,则当cv为'auto'或为None时为负均方误差,否则为r2得分。scorer(estimator, X, y)\n cv=10, # int, cross-validation generator or an iterable, default=None\n gcv_mode='auto', # {‘auto’, ‘svd’, eigen’}, default=’auto’\n store_cv_values=None, # bool, 是否将与每个alpha对应的交叉验证值存储在cv_values_属性中, 仅cv=None有效\n)\nrgs.fit(X_train, y_train)\nrgs.best_score_, rgs.score(X_test, y_test)",
"_____no_output_____"
]
],
[
[
"### 2.3 索套回归",
"_____no_output_____"
],
[
"https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html#sklearn.linear_model.Lasso \nhttps://scikit-learn.org/stable/modules/linear_model.html#lasso ",
"_____no_output_____"
]
],
[
[
"rgs = Lasso()\nrgs.fit(X_train, y_train)\nrgs.score(X_test, y_test)",
"_____no_output_____"
],
[
"rgs.coef_",
"_____no_output_____"
]
],
[
[
"### 2.4 多项式回归",
"_____no_output_____"
],
[
"https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html?highlight=polynomialfeatures#sklearn.preprocessing.PolynomialFeatures",
"_____no_output_____"
]
],
[
[
"poly = PolynomialFeatures(degree=2, interaction_only=False, include_bias=True)\nX_train_poly = poly.fit_transform(X_train) # [1, a, b, a^2, ab, b^2]\nX_train_poly.shape ",
"_____no_output_____"
],
[
"poly.get_feature_names()",
"_____no_output_____"
],
[
"X_test_poly = poly.transform(X_test)",
"_____no_output_____"
],
[
"rgs = LinearRegression()\nrgs.fit(X_train_poly, y_train)\nrgs.score(X_test_poly, y_test)",
"_____no_output_____"
],
[
"poly = PolynomialFeatures(degree=2, \n interaction_only=True, # 是否只保留插乘特征,除去指数项\n include_bias=True, \n order=\"C\") # Order of output array in the dense case. ‘F’ order is faster to compute, but may slow down subsequent estimators.\nX_train_poly = poly.fit_transform(X_train) \nX_test_poly = poly.transform(X_test)\nX_train_poly.shape",
"_____no_output_____"
],
[
"poly.get_feature_names()",
"_____no_output_____"
],
[
"rgs = LinearRegression()\nrgs.fit(X_train_poly, y_train)\nrgs.score(X_test_poly, y_test)",
"_____no_output_____"
]
],
[
[
"## 总结",
"_____no_output_____"
],
[
"1. sklearn的线性回归相关的模型放在sklearn.linear_model下 \n > from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, SGDRegressor \n\n2. 调参数 \n > LinearRegression(fit_intercept=True, normalize=False, copy_X=True, n_jobs=None) \n > \n > SGDRegressor(loss='squared_loss',\n penalty='l2',\n alpha=0.0001,\n fit_intercept=True,\n max_iter=1000,\n tol=0.001,\n shuffle=True,\n epsilon=0.1,\n random_state=None,\n learning_rate='invscaling',\n eta0=0.01,\n early_stopping=False,\n validation_fraction=0.1,\n n_iter_no_change=5) \n > \n > Ridge(alpha=1.0, fit_intercept=True,\n normalize=False,\n copy_X=True,\n max_iter=None,\n tol=0.001,\n solver='auto',\n random_state=None) \n > \n > Lasso(alpha=1.0,\n fit_intercept=True,\n normalize=False,\n precompute=False,\n copy_X=True,\n max_iter=1000,\n tol=0.0001,\n random_state=None) \n > \n > RidgeCV(alphas=(0.1, 1.0, 10.0),\n *,\n fit_intercept=True,\n normalize=False,\n scoring=None,\n cv=None,\n gcv_mode=None,\n store_cv_values=False,) \n \n3. 多项式回归使用PolynomialFeatures做特征工程实现 \n > from sklearn.preprocessing import PolynomialFeatures \n > poly = PolynomialFeatures(degree=2, interaction_only=False, include_bias=True, order='C') \n > X_train_poly = poly.fit_transform(X_train) \n > X_test_poly = poly.transform(X_test) \n\n4. 正规化Normalizer和标准化StandardScaler\n > from sklearn.preprocessing import StandardScaler, Normalizer \n > \n > scaler = StandardScaler(copy=True, with_mean=True, with_std=True) \n > X_train_scaler = scaler.fit_transform(X_train) \n > X_test_scaler = scaler.transform(X_test) \n > \n > norm = Normalizer(norm=\"l2\", copy=True) \n > X_train_norm = norm.fit_transform(X_train) \n > X_test_norm = norm.transform(X_test) \n > ",
"_____no_output_____"
],
[
"## 作业 \n\n1. 熟悉每个模型的各个参数 \n2. 三种归一化有什么区别?什么时候用Normalizer,什么时候用StandardScaler,什么时候用MinMaxScaler? \n3. 试着用numpy实现PolynomialFeatures",
"_____no_output_____"
],
[
"## 相关链接 \n \n<a href=\"./02.1.LinearRegression.ipynb\" style=\"\"> 2.1 线性回归、岭回归、Lasso、SGD、局部加权线性回归原理</a> \n \n<a href=\"./02.3.LinearRegression-numpy.ipynb\" style=\"\"> 2.3 numpy实现线性回归、岭回归、SGD回归</a> \n<a href=\"./02.4.LinearRegression-tf2.ipynb\"> 2.4 TensorFlow2实现线性回归、岭回归、SGD回归 </a> \n<a href=\"./02.5.LinearRegression-torch1.ipynb\"> 2.5 PyTorch1实现线性回归、岭回归、SGD回归 </a> ",
"_____no_output_____"
],
[
"## 项目源码 \n\nhttps://github.com/LossJ \n进入后点击Statistic-Machine-Learning",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d055730816bca4573a45830e73671d67c7ef2435 | 38,351 | ipynb | Jupyter Notebook | preprocess-dainis.ipynb | arjunnlp/NLP-papers-tools-discussion | 2c8f9b60b81152db7043d41bc1f8d25562fd10bd | [
"MIT"
] | 1 | 2019-03-08T14:44:32.000Z | 2019-03-08T14:44:32.000Z | preprocess-dainis.ipynb | arjunnlp/NLP-papers-tools-discussion | 2c8f9b60b81152db7043d41bc1f8d25562fd10bd | [
"MIT"
] | null | null | null | preprocess-dainis.ipynb | arjunnlp/NLP-papers-tools-discussion | 2c8f9b60b81152db7043d41bc1f8d25562fd10bd | [
"MIT"
] | null | null | null | 42.423673 | 742 | 0.425126 | [
[
[
"%matplotlib inline\n%reload_ext autoreload\n%autoreload 2\nfrom ipyexperiments import *\nfrom lib.fastai.imports import * \nfrom lib.fastai.structured import *\nimport pandas as pd\nimport numpy as np\nimport lightgbm as lgb\nfrom scipy.sparse import vstack, csr_matrix, save_npz, load_npz\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\nfrom sklearn.model_selection import StratifiedKFold\nfrom datetime import datetime\nfrom path import Path\nimport re2 as re\nimport joblib",
"_____no_output_____"
],
[
"## Dainis's work\n\ndef display_n(df, n=250):\n with pd.option_context(\"display.max_rows\", n):\n with pd.option_context(\"display.max_columns\", n):\n display(df)\n \ndef add_datepart(df, fldname, drop=False, time=False):\n \"Helper function that adds columns relevant to a date.\"\n fld = df[fldname]\n fld_dtype = fld.dtype\n if isinstance(fld_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):\n fld_dtype = np.datetime64\n\n if not np.issubdtype(fld_dtype, np.datetime64):\n df[fldname] = fld = pd.to_datetime(fld, infer_datetime_format=True)\n targ_pre = re.sub('[Dd]ate$', '', fldname)\n attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',\n 'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']\n if time: attr = attr + ['Hour', 'Minute', 'Second']\n for n in attr: df[targ_pre + n] = getattr(fld.dt, n.lower())\n df[targ_pre + 'Elapsed'] = fld.astype(np.int64) // 10 ** 9\n if drop: df.drop(fldname, axis=1, inplace=True)\n\n## Pietro and Wojtek work\ndef add_timestamps(df):\n \"Funection that loads time values from numpy files\"\n datedictAS = np.load('dates/AvSigVersionTimestamps.npy')[()]\n df['DateAS'] = df['AvSigVersion'].map(datedictAS) \n\n datedictOS = np.load('dates/OSVersionTimestamps.npy')[()]\n df['DateOS'] = df['Census_OSVersion'].map(datedictOS) \n # BL timestamp\n def convert(x):\n try:\n d = datetime.strptime(x.split('.')[4],'%y%m%d-%H%M')\n except:\n d = np.nan\n return d\n df['DateBL'] = df['OsBuildLab'].map(convert)",
"_____no_output_____"
],
[
"dtypes = {\n 'MachineIdentifier': 'category',\n 'ProductName': 'category',\n 'EngineVersion': 'category',\n 'AppVersion': 'category',\n 'AvSigVersion': 'category',\n 'IsBeta': 'int8',\n 'RtpStateBitfield': 'float16',\n 'IsSxsPassiveMode': 'int8',\n 'DefaultBrowsersIdentifier': 'float16',\n 'AVProductStatesIdentifier': 'float32',\n 'AVProductsInstalled': 'float16',\n 'AVProductsEnabled': 'float16',\n 'HasTpm': 'int8',\n 'CountryIdentifier': 'int16',\n 'CityIdentifier': 'float32',\n 'OrganizationIdentifier': 'float16',\n 'GeoNameIdentifier': 'float16',\n 'LocaleEnglishNameIdentifier': 'int8',\n 'Platform': 'category',\n 'Processor': 'category',\n 'OsVer': 'category',\n 'OsBuild': 'int16',\n 'OsSuite': 'int16',\n 'OsPlatformSubRelease': 'category',\n 'OsBuildLab': 'category',\n 'SkuEdition': 'category',\n 'IsProtected': 'float16',\n 'AutoSampleOptIn': 'int8',\n 'PuaMode': 'category',\n 'SMode': 'float16',\n 'IeVerIdentifier': 'float16',\n 'SmartScreen': 'category',\n 'Firewall': 'float16',\n 'UacLuaenable': 'float32',\n 'Census_MDC2FormFactor': 'category',\n 'Census_DeviceFamily': 'category',\n 'Census_OEMNameIdentifier': 'float16',\n 'Census_OEMModelIdentifier': 'float32',\n 'Census_ProcessorCoreCount': 'float16',\n 'Census_ProcessorManufacturerIdentifier': 'float16',\n 'Census_ProcessorModelIdentifier': 'float16',\n 'Census_ProcessorClass': 'category',\n 'Census_PrimaryDiskTotalCapacity': 'float32',\n 'Census_PrimaryDiskTypeName': 'category',\n 'Census_SystemVolumeTotalCapacity': 'float32',\n 'Census_HasOpticalDiskDrive': 'int8',\n 'Census_TotalPhysicalRAM': 'float32',\n 'Census_ChassisTypeName': 'category',\n 'Census_InternalPrimaryDiagonalDisplaySizeInInches': 'float16',\n 'Census_InternalPrimaryDisplayResolutionHorizontal': 'float16',\n 'Census_InternalPrimaryDisplayResolutionVertical': 'float16',\n 'Census_PowerPlatformRoleName': 'category',\n 'Census_InternalBatteryType': 'category',\n 'Census_InternalBatteryNumberOfCharges': 'float32',\n 'Census_OSVersion': 'category',\n 'Census_OSArchitecture': 'category',\n 'Census_OSBranch': 'category',\n 'Census_OSBuildNumber': 'int16',\n 'Census_OSBuildRevision': 'int32',\n 'Census_OSEdition': 'category',\n 'Census_OSSkuName': 'category',\n 'Census_OSInstallTypeName': 'category',\n 'Census_OSInstallLanguageIdentifier': 'float16',\n 'Census_OSUILocaleIdentifier': 'int16',\n 'Census_OSWUAutoUpdateOptionsName': 'category',\n 'Census_IsPortableOperatingSystem': 'int8',\n 'Census_GenuineStateName': 'category',\n 'Census_ActivationChannel': 'category',\n 'Census_IsFlightingInternal': 'float16',\n 'Census_IsFlightsDisabled': 'float16',\n 'Census_FlightRing': 'category',\n 'Census_ThresholdOptIn': 'float16',\n 'Census_FirmwareManufacturerIdentifier': 'float16',\n 'Census_FirmwareVersionIdentifier': 'float32',\n 'Census_IsSecureBootEnabled': 'int8',\n 'Census_IsWIMBootEnabled': 'float16',\n 'Census_IsVirtualDevice': 'float16',\n 'Census_IsTouchEnabled': 'int8',\n 'Census_IsPenCapable': 'int8',\n 'Census_IsAlwaysOnAlwaysConnectedCapable': 'float16',\n 'Wdft_IsGamer': 'float16',\n 'Wdft_RegionIdentifier': 'float16',\n 'HasDetections': 'int8'\n }\n\n# Uncomment the followng block on the first run\n'''\n\nwith IPyExperimentsCPU(): \n print('Download Train and Test Data.\\n')\n\n # Pietro, uncomment the following line and comment out the next one\n # INPUT_DIR = Path('E:/malware_microsoft' )\n INPUT_DIR = Path('./input' )\n\n train = pd.read_csv(Path(INPUT_DIR / 'train.csv'), dtype=dtypes, low_memory=True)\n train['MachineIdentifier'] = 
train.index.astype('uint32')\n\n test = pd.read_csv(Path(INPUT_DIR /'test.csv'), dtype=dtypes, low_memory=True)\n test['MachineIdentifier'] = test.index.astype('uint32')\n\n add_timestamps(train)\n add_timestamps(test)\n\n joblib.dump(train, 'data/train_w_time_origin.pkl')\n joblib.dump(test, 'data/test_w_time_origin.pkl')\n'''",
"_____no_output_____"
],
[
"def versioning(df, fldname, drop=False):\n \"Helper function that adds columns relevant to a date.\"\n versions = df[fldname].str.split('.', expand=True)\n for i, v in enumerate(versions):\n df[fldname+'V'+str(i)] = versions[v]\n if drop: df.drop(fldname, axis=1, inplace=True)\n\ndef versioning(df, fldname, categorical_vars, drop=False):\n \"Helper function that adds columns relevant to a date.\"\n versions = df[fldname].str.split(',', expand=True)\n for i, v in enumerate(versions):\n newfld = fldname+'V'+i\n df[newfld] = versions[v]\n categorical_vars.append(newfld)\n if drop: df.drop(fldname, axis=1, inplace=True)\n\nwith IPyExperimentsCPU() as preprocess:\n categorical_vars = [\n 'MachineIdentifier', \n 'ProductName', \n 'EngineVersion', \n 'AppVersion', \n 'AvSigVersion', \n 'Platform', \n 'Processor', \n 'OsVer', \n 'OsPlatformSubRelease', \n 'OsBuildLab', \n 'SkuEdition', \n 'PuaMode', \n 'SmartScreen', \n 'Census_MDC2FormFactor', \n 'Census_DeviceFamily', \n 'Census_ProcessorClass', \n 'Census_PrimaryDiskTypeName', \n 'Census_ChassisTypeName', \n 'Census_PowerPlatformRoleName', \n 'Census_InternalBatteryType', \n 'Census_OSVersion', \n 'Census_OSArchitecture', \n 'Census_OSBranch', \n 'Census_OSEdition', \n 'Census_OSSkuName', \n 'Census_OSInstallTypeName', \n 'Census_OSWUAutoUpdateOptionsName', \n 'Census_GenuineStateName', \n 'Census_ActivationChannel', \n 'Census_FlightRing',\n ]\n train=joblib.load('data/train_w_time_origin.pkl')\n test=joblib.load('data/test_w_time_origin.pkl')\n test['HasDetections'] = -1\n\n add_datepart(train, 'DateAS', drop=False, time=True)\n add_datepart(train, 'DateOS', drop=False, time=True)\n add_datepart(train, 'DateBL', drop=False, time=True)\n add_datepart(test, 'DateAS', drop=False, time=True)\n add_datepart(test, 'DateOS', drop=False, time=True)\n add_datepart(test, 'DateBL', drop=False, time=True)\n \n preprocess.keep_var_names('train', 'test', 'categorical_vars')\n ",
"\n*** Experiment started with the CPU-only backend\n\n\n*** Current state:\nRAM: Used Free Total Util\nCPU: 2,099 57,811 64,352 MB 3.26% \n\n\n・ RAM: △Consumed △Peaked Used Total | Exec time 0:00:39.245\n・ CPU: 7,045 127 9,195 MB |\n\nIPyExperimentsCPU: Finishing\n\n*** Experiment finished in 00:00:39 (elapsed wallclock time)\n\n*** Newly defined local variables:\nKept: test, train\n\n*** Experiment memory:\nRAM: Consumed Reclaimed\nCPU: 7,096 0 MB ( 0.00%)\n\n*** Current state:\nRAM: Used Free Total Util\nCPU: 9,195 50,712 64,352 MB 14.29% \n\n\n"
],
[
"\n\njoblib.dump(categorical_vars, 'val/categorical.pkl')",
"_____no_output_____"
],
[
"with pd.option_context(\"display.max_rows\", 100):\n with pd.option_context(\"display.max_columns\", 100):\n display(train[categorical_vars].head())\n \n",
"_____no_output_____"
],
[
"\n\nversioned = ['EngineVersion','AppVersion','AvSigVersion','OsVer','Census_OSVersion','OsBuildLab']\n\nwith IPyExperimentsCPU() as vsplits:\n for ver in versioned:\n versioning(train, ver)\n versioning(test, ver)",
"\n*** Experiment started with the CPU-only backend\n\n\n*** Current state:\nRAM: Used Free Total Util\nCPU: 11,004 47,670 64,352 MB 17.10% \n\n\n・ RAM: △Consumed △Peaked Used Total | Exec time 0:02:56.148\n・ CPU: 3,645 428 11,009 MB |\n\nIPyExperimentsCPU: Finishing\n\n*** Experiment finished in 00:02:56 (elapsed wallclock time)\n\n*** Newly defined local variables:\nDeleted: ver\n\n*** Experiment memory:\nRAM: Consumed Reclaimed\nCPU: 4 0 MB ( 0.00%)\n\n*** Current state:\nRAM: Used Free Total Util\nCPU: 11,009 47,630 64,352 MB 17.11% \n\n\n"
],
[
"\ndf_raw = pd.concat([train, test], sort=False)\ntrain_cats(df_raw)\ndf, y, nas = proc_df(df_raw)\ntrain = df.head(len(train)).reset_index(drop=True)\ntest = df.tail(len(test)).reset_index(drop=True)\njoblib.dump(train,'data/train_dainis.pkl')\njoblib.dump(test,'data/test_dainis.pkl')",
"_____no_output_____"
],
[
"with IPyExperimentsCPU() as transform:\n '''\n print('Transform all features to category.\\n')\n \n for i, usecol in enumerate(categorical_vars):\n print(str(i) + \" / \" + str(len(categorical_vars)))\n train[usecol] = train[usecol].astype('str')\n test[usecol] = test[usecol].astype('str')\n\n train[usecol] = train[usecol].astype('str')\n test[usecol] = test[usecol].astype('str')\n\n #Fit LabelEncoder\n le = LabelEncoder().fit(\n np.unique(train[usecol].unique().tolist()+\n test[usecol].unique().tolist()))\n\n #At the end 0 will be used for dropped values\n train[usecol] = le.transform(train[usecol])+1\n test[usecol] = le.transform(test[usecol])+1\n\n agg_tr = (train\n .groupby([usecol])\n .aggregate({'MachineIdentifier':'count'})\n .reset_index()\n .rename({'MachineIdentifier':'Train'}, axis=1))\n agg_te = (test\n .groupby([usecol])\n .aggregate({'MachineIdentifier':'count'})\n .reset_index()\n .rename({'MachineIdentifier':'Test'}, axis=1))\n\n agg = pd.merge(agg_tr, agg_te, on=usecol, how='outer').replace(np.nan, 0)\n #Select values with more than 1000 observations\n agg = agg[(agg['Train'] > 1000)].reset_index(drop=True)\n agg['Total'] = agg['Train'] + agg['Test']\n #Drop unbalanced values\n agg = agg[(agg['Train'] / agg['Total'] > 0.2) & (agg['Train'] / agg['Total'] < 0.8)]\n agg[usecol+'Copy'] = agg[usecol]\n\n train[usecol+'bis'] = (pd.merge(train[[usecol]], \n agg[[usecol, usecol+'Copy']], \n on=usecol, how='left')[usecol+'Copy']\n .replace(np.nan, 0).astype('int').astype('category'))\n\n test[usecol+'bis'] = (pd.merge(test[[usecol]], \n agg[[usecol, usecol+'Copy']], \n on=usecol, how='left')[usecol+'Copy']\n .replace(np.nan, 0).astype('int').astype('category'))\n\n del le, agg_tr, agg_te, agg, usecol\n '''\n \n EXP_TAG=Path('dainis0')\n train_ids = train.index\n test_ids = test.index\n y_train = np.array(train['HasDetections'])\n \n # Fulfill contract with evaluator notebook\n joblib.dump(categorical_vars, Path('val' / EXP_TAG / 'categorical.pkl'))\n joblib.dump(train, Path('val' / EXP_TAG / 'train-original.pkl'))\n joblib.dump(test,Path( 'val' / EXP_TAG / ' test-original.pkl'))\n joblib.dump(y_train, Path('val' / EXP_TAG / 'y_train-original.pkl'))\n joblib.dump(train_ids,Path( 'val' / EXP_TAG / 'train_ids-original.pkl'))\n joblib.dump(test_ids, Path('val' / EXP_TAG / 'test_ids-original.pkl'))\n \n ",
"\n*** Experiment started with the CPU-only backend\n\n\n*** Current state:\nRAM: Used Free Total Util\nCPU: 32,254 27,282 64,352 MB 50.12% \n\n\n・ RAM: △Consumed △Peaked Used Total | Exec time 0:02:09.890\n・ CPU: 68 16 32,254 MB |\n\nIPyExperimentsCPU: Finishing\n\n*** Experiment finished in 00:02:10 (elapsed wallclock time)\n\n*** Newly defined local variables:\nDeleted: EXP_TAG, test_ids, train_ids, y_train\n\n*** Experiment memory:\nRAM: Consumed Reclaimed\nCPU: 0 0 MB (100.00%)\n\n*** Current state:\nRAM: Used Free Total Util\nCPU: 32,254 27,292 64,352 MB 50.12% \n\n\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d055775d3855c85d1855fdba28d8ee38b91da26e | 147,921 | ipynb | Jupyter Notebook | development/Obtain Universal Time UT using astropy.ipynb | waltersmartinsf/iraf_task | 66aade8736fcd26f02b3a3645d0e8b64d9ad35e8 | [
"CC-BY-4.0"
] | 1 | 2019-02-26T11:43:31.000Z | 2019-02-26T11:43:31.000Z | development/Obtain Universal Time UT using astropy.ipynb | waltersmartinsf/iraf_task | 66aade8736fcd26f02b3a3645d0e8b64d9ad35e8 | [
"CC-BY-4.0"
] | null | null | null | development/Obtain Universal Time UT using astropy.ipynb | waltersmartinsf/iraf_task | 66aade8736fcd26f02b3a3645d0e8b64d9ad35e8 | [
"CC-BY-4.0"
] | 1 | 2018-10-04T18:18:32.000Z | 2018-10-04T18:18:32.000Z | 54.563261 | 27,508 | 0.52192 | [
[
[
"#Goal: obtain a universal time, in Julian Date from a local time in the header of the fits images\n\nfrom astropy.io import fits #work with fits images\nfrom astropy.time import Time #work with time in header\nimport glob #work with files in the directory\nimport yaml #work with yaml files\nimport numpy as np\nimport sys\nimport os",
"_____no_output_____"
],
[
"%matplotlib inline\n\nimport matplotlib.pyplot as plt #plot library\n\ndef init_plotting():\n plt.rcParams['figure.figsize'] = (14.0,8.0)\n plt.rcParams['font.size'] = 10\n #plt.rcParams['font.family'] = 'Times New Roman'\n plt.rcParams['axes.labelsize'] = plt.rcParams['font.size']\n plt.rcParams['axes.titlesize'] = 2*plt.rcParams['font.size']\n plt.rcParams['legend.fontsize'] = 0.65*plt.rcParams['font.size']\n plt.rcParams['xtick.labelsize'] = plt.rcParams['font.size']\n plt.rcParams['ytick.labelsize'] = plt.rcParams['font.size']\n plt.rcParams['xtick.major.size'] = 3\n plt.rcParams['xtick.minor.size'] = 3\n plt.rcParams['xtick.major.width'] = 1\n plt.rcParams['xtick.minor.width'] = 1\n plt.rcParams['ytick.major.size'] = 3\n plt.rcParams['ytick.minor.size'] = 3\n plt.rcParams['ytick.major.width'] = 1\n plt.rcParams['ytick.minor.width'] = 1\n plt.rcParams['legend.frameon'] = True\n plt.rcParams['legend.loc'] = 'best'\n plt.rcParams['axes.linewidth'] = 1\n\ninit_plotting()",
"_____no_output_____"
],
[
"#BAR Progress function to visualize the progress status:\ndef update_progress(progress):\n \"\"\"\n Progress Bar to visualize the status of a procedure\n ___\n INPUT:\n progress: percent of the data\n\n ___\n Example:\n print \"\"\n print \"progress : 0->1\"\n for i in range(100):\n time.sleep(0.1)\n update_progress(i/100.0)\n \"\"\"\n barLength = 10 # Modify this to change the length of the progress bar\n status = \"\"\n if isinstance(progress, int):\n progress = float(progress)\n if not isinstance(progress, float):\n progress = 0\n status = \"error: progress var must be float\\r\\n\"\n if progress < 0:\n progress = 0\n status = \"Halt...\\r\\n\"\n if progress >= 1:\n progress = 1\n status = \"Done...\\r\\n\"\n block = int(round(barLength*progress))\n text = \"\\rPercent: [{0}] {1}% {2}\".format( \"#\"*block + \"-\"*(barLength-block), progress*100, status)\n sys.stdout.write(text)\n sys.stdout.flush()",
"_____no_output_____"
],
[
"save_path = u'C:\\\\Users\\\\walte\\\\Desktop\\\\exoplanet\\\\data\\\\xo2b\\\\xo2b.b\\\\teste_pyraf'\ndata_path = u'C:\\\\Users\\\\walte\\\\Desktop\\\\exoplanet\\\\data\\\\xo2b\\\\xo2b.b'",
"_____no_output_____"
],
[
"pwd",
"_____no_output_____"
],
[
"cd C:/Users/walte/Desktop/exoplanet/data/xo2b/xo2b.b/teste_pyraf/",
"C:\\Users\\walte\\Desktop\\exoplanet\\data\\xo2b\\xo2b.b\\teste_pyraf\n"
],
[
"images = glob.glob('ABxo2b*.fits')",
"_____no_output_____"
],
[
"print images",
"['ABxo2b.0002.fits', 'ABxo2b.0004.fits', 'ABxo2b.0006.fits', 'ABxo2b.0008.fits', 'ABxo2b.0010.fits', 'ABxo2b.0012.fits', 'ABxo2b.0014.fits', 'ABxo2b.0016.fits', 'ABxo2b.0018.fits', 'ABxo2b.0020.fits', 'ABxo2b.0022.fits', 'ABxo2b.0024.fits', 'ABxo2b.0026.fits', 'ABxo2b.0028.fits', 'ABxo2b.0030.fits', 'ABxo2b.0032.fits', 'ABxo2b.0034.fits', 'ABxo2b.0036.fits', 'ABxo2b.0038.fits', 'ABxo2b.0040.fits', 'ABxo2b.0042.fits', 'ABxo2b.0044.fits', 'ABxo2b.0046.fits', 'ABxo2b.0048.fits', 'ABxo2b.0050.fits', 'ABxo2b.0052.fits', 'ABxo2b.0054.fits', 'ABxo2b.0056.fits', 'ABxo2b.0058.fits', 'ABxo2b.0060.fits', 'ABxo2b.0062.fits', 'ABxo2b.0064.fits', 'ABxo2b.0066.fits', 'ABxo2b.0068.fits', 'ABxo2b.0070.fits', 'ABxo2b.0072.fits', 'ABxo2b.0074.fits', 'ABxo2b.0076.fits', 'ABxo2b.0078.fits', 'ABxo2b.0080.fits', 'ABxo2b.0082.fits', 'ABxo2b.0084.fits', 'ABxo2b.0086.fits', 'ABxo2b.0088.fits', 'ABxo2b.0090.fits', 'ABxo2b.0092.fits', 'ABxo2b.0094.fits', 'ABxo2b.0096.fits', 'ABxo2b.0098.fits', 'ABxo2b.0100.fits', 'ABxo2b.0102.fits', 'ABxo2b.0104.fits', 'ABxo2b.0106.fits', 'ABxo2b.0108.fits', 'ABxo2b.0110.fits', 'ABxo2b.0112.fits', 'ABxo2b.0114.fits', 'ABxo2b.0116.fits', 'ABxo2b.0118.fits', 'ABxo2b.0120.fits', 'ABxo2b.0122.fits', 'ABxo2b.0124.fits', 'ABxo2b.0126.fits', 'ABxo2b.0128.fits', 'ABxo2b.0130.fits', 'ABxo2b.0132.fits', 'ABxo2b.0134.fits', 'ABxo2b.0136.fits', 'ABxo2b.0138.fits', 'ABxo2b.0140.fits', 'ABxo2b.0142.fits', 'ABxo2b.0144.fits', 'ABxo2b.0146.fits', 'ABxo2b.0148.fits', 'ABxo2b.0150.fits', 'ABxo2b.0152.fits', 'ABxo2b.0154.fits', 'ABxo2b.0156.fits', 'ABxo2b.0158.fits', 'ABxo2b.0160.fits', 'ABxo2b.0162.fits', 'ABxo2b.0164.fits', 'ABxo2b.0166.fits', 'ABxo2b.0168.fits', 'ABxo2b.0170.fits', 'ABxo2b.0172.fits', 'ABxo2b.0174.fits', 'ABxo2b.0176.fits', 'ABxo2b.0178.fits', 'ABxo2b.0180.fits', 'ABxo2b.0182.fits', 'ABxo2b.0184.fits', 'ABxo2b.0186.fits', 'ABxo2b.0188.fits', 'ABxo2b.0190.fits', 'ABxo2b.0192.fits', 'ABxo2b.0194.fits', 'ABxo2b.0196.fits', 'ABxo2b.0198.fits', 'ABxo2b.0200.fits', 'ABxo2b.0202.fits', 'ABxo2b.0204.fits', 'ABxo2b.0206.fits', 'ABxo2b.0208.fits', 'ABxo2b.0210.fits', 'ABxo2b.0212.fits', 'ABxo2b.0214.fits', 'ABxo2b.0216.fits', 'ABxo2b.0218.fits', 'ABxo2b.0220.fits', 'ABxo2b.0222.fits', 'ABxo2b.0224.fits', 'ABxo2b.0226.fits', 'ABxo2b.0228.fits', 'ABxo2b.0230.fits', 'ABxo2b.0232.fits', 'ABxo2b.0234.fits', 'ABxo2b.0236.fits', 'ABxo2b.0238.fits', 'ABxo2b.0240.fits', 'ABxo2b.0242.fits', 'ABxo2b.0244.fits', 'ABxo2b.0246.fits', 'ABxo2b.0248.fits', 'ABxo2b.0250.fits', 'ABxo2b.0252.fits', 'ABxo2b.0254.fits', 'ABxo2b.0256.fits', 'ABxo2b.0258.fits', 'ABxo2b.0260.fits', 'ABxo2b.0262.fits', 'ABxo2b.0264.fits', 'ABxo2b.0266.fits']\n"
],
[
"print len(images)",
"133\n"
],
[
"im,hdr = fits.getdata(images[0],header=True) #reading the fits image (data + header)",
"_____no_output_____"
],
[
"hdr",
"_____no_output_____"
]
],
[
[
"# Local Time",
"_____no_output_____"
]
],
[
[
"hdr['LOCTIME'] #local time at start of exposure in header",
"_____no_output_____"
],
[
"images_time = []\nfor i in range(len(images)):\n im,hdr = fits.getdata(images[i],header=True) #reading the fits image (data + header)\n images_time.append(hdr['LOCTIME'])\n update_progress((i+1.)/len(images))",
"Percent: [##########] 100% Done...\n"
],
[
"print images_time #our local time series",
"['22:29:57', '22:32:14', '22:34:31', '22:36:48', '22:39:05', '22:41:22', '22:43:39', '22:45:57', '22:48:14', '22:50:31', '22:52:48', '22:55:06', '22:57:23', '22:59:40', '23:01:57', '23:04:14', '23:06:31', '23:08:48', '23:11:05', '23:13:23', '23:15:40', '23:17:57', '23:20:14', '23:22:31', '23:24:48', '23:27:05', '23:29:22', '23:31:40', '23:33:57', '23:36:14', '23:38:31', '23:40:48', '23:43:05', '23:45:22', '23:47:40', '23:49:57', '23:52:14', '23:54:31', '23:56:48', '23:59:05', '00:01:22', '00:03:39', '00:05:57', '00:08:14', '00:10:31', '00:12:48', '00:15:05', '00:17:22', '00:19:39', '00:21:56', '00:24:13', '00:26:31', '00:28:48', '00:31:05', '00:33:22', '00:35:39', '00:37:56', '00:40:13', '00:42:30', '00:44:48', '00:47:05', '00:49:22', '00:51:39', '00:53:56', '00:56:13', '00:58:30', '01:00:47', '01:03:04', '01:05:22', '01:07:39', '01:09:56', '01:12:13', '01:14:30', '01:16:47', '01:19:04', '01:21:22', '01:23:39', '01:25:56', '01:28:13', '01:30:30', '01:32:47', '01:35:04', '01:37:21', '01:39:38', '01:41:56', '01:44:13', '01:46:30', '01:48:47', '01:51:04', '01:58:51', '02:01:08', '02:03:25', '02:05:42', '02:07:59', '02:10:16', '02:12:33', '02:14:50', '02:17:08', '02:19:25', '02:21:42', '02:23:59', '02:26:16', '02:28:33', '02:30:50', '02:33:07', '02:35:24', '02:37:42', '02:39:59', '02:42:16', '02:44:33', '02:46:50', '02:49:07', '02:51:24', '02:53:41', '02:55:59', '02:58:16', '03:00:33', '03:02:50', '03:05:07', '03:07:24', '03:09:41', '03:11:59', '03:14:16', '03:16:33', '03:18:50', '03:21:07', '03:23:24', '03:25:41', '03:27:58', '03:30:16', '03:32:33', '03:34:50', '03:37:07']\n"
]
],
[
[
"# FITS Time",
"_____no_output_____"
]
],
[
[
"fits_time = []\nfor i in range(len(images)):\n im,hdr = fits.getdata(images[i],header=True) #reading the fits image (data + header)\n fits_time.append(hdr['DATE'])\n update_progress((i+1.)/len(images))",
"Percent: [##########] 100% Done...\n"
],
[
"print fits_time",
"['2016-02-08T17:01:06', '2016-02-08T17:01:07', '2016-02-08T17:01:07', '2016-02-08T17:01:07', '2016-02-08T17:01:08', '2016-02-08T17:01:09', '2016-02-08T17:01:10', '2016-02-08T17:01:10', '2016-02-08T17:01:10', '2016-02-08T17:01:11', '2016-02-08T17:01:11', '2016-02-08T17:01:12', '2016-02-08T17:01:12', '2016-02-08T17:01:14', '2016-02-08T17:01:15', '2016-02-08T17:01:16', '2016-02-08T17:01:16', '2016-02-08T17:01:16', '2016-02-08T17:01:17', '2016-02-08T17:01:18', '2016-02-08T17:01:18', '2016-02-08T17:01:18', '2016-02-08T17:01:19', '2016-02-08T17:01:19', '2016-02-08T17:01:20', '2016-02-08T17:01:20', '2016-02-08T17:01:21', '2016-02-08T17:01:21', '2016-02-08T17:01:21', '2016-02-08T17:01:22', '2016-02-08T17:01:22', '2016-02-08T17:01:22', '2016-02-08T17:01:23', '2016-02-08T17:01:23', '2016-02-08T17:01:24', '2016-02-08T17:01:24', '2016-02-08T17:01:25', '2016-02-08T17:01:25', '2016-02-08T17:01:25', '2016-02-08T17:01:26', '2016-02-08T17:01:26', '2016-02-08T17:01:27', '2016-02-08T17:01:27', '2016-02-08T17:01:28', '2016-02-08T17:01:30', '2016-02-08T17:01:30', '2016-02-08T17:01:31', '2016-02-08T17:01:31', '2016-02-08T17:01:31', '2016-02-08T17:01:32', '2016-02-08T17:01:32', '2016-02-08T17:01:33', '2016-02-08T17:01:33', '2016-02-08T17:01:35', '2016-02-08T17:01:36', '2016-02-08T17:01:38', '2016-02-08T17:01:39', '2016-02-08T17:01:41', '2016-02-08T17:01:42', '2016-02-08T17:01:43', '2016-02-08T17:01:44', '2016-02-08T17:01:44', '2016-02-08T17:01:46', '2016-02-08T17:01:47', '2016-02-08T17:01:49', '2016-02-08T17:01:50', '2016-02-08T17:01:50', '2016-02-08T17:01:51', '2016-02-08T17:01:52', '2016-02-08T17:01:53', '2016-02-08T17:01:54', '2016-02-08T17:01:55', '2016-02-08T17:01:56', '2016-02-08T17:01:58', '2016-02-08T17:01:58', '2016-02-08T17:01:59', '2016-02-08T17:01:59', '2016-02-08T17:02:00', '2016-02-08T17:02:00', '2016-02-08T17:02:00', '2016-02-08T17:02:01', '2016-02-08T17:02:01', '2016-02-08T17:02:02', '2016-02-08T17:02:02', '2016-02-08T17:02:02', '2016-02-08T17:02:03', '2016-02-08T17:02:03', '2016-02-08T17:02:04', '2016-02-08T17:02:04', '2016-02-08T17:02:05', '2016-02-08T17:02:05', '2016-02-08T17:02:06', '2016-02-08T17:02:06', '2016-02-08T17:02:06', '2016-02-08T17:02:07', '2016-02-08T17:02:07', '2016-02-08T17:02:08', '2016-02-08T17:02:08', '2016-02-08T17:02:09', '2016-02-08T17:02:09', '2016-02-08T17:02:09', '2016-02-08T17:02:10', '2016-02-08T17:02:10', '2016-02-08T17:02:11', '2016-02-08T17:02:11', '2016-02-08T17:02:12', '2016-02-08T17:02:12', '2016-02-08T17:02:13', '2016-02-08T17:02:13', '2016-02-08T17:02:13', '2016-02-08T17:02:14', '2016-02-08T17:02:14', '2016-02-08T17:00:55', '2016-02-08T17:00:55', '2016-02-08T17:00:56', '2016-02-08T17:00:56', '2016-02-08T17:00:56', '2016-02-08T17:00:57', '2016-02-08T17:00:57', '2016-02-08T17:00:57', '2016-02-08T17:00:58', '2016-02-08T17:00:58', '2016-02-08T17:00:58', '2016-02-08T17:00:59', '2016-02-08T17:01:00', '2016-02-08T17:01:01', '2016-02-08T17:01:03', '2016-02-08T17:01:04', '2016-02-08T17:01:04', '2016-02-08T17:01:05', '2016-02-08T17:01:05', '2016-02-08T17:01:06', '2016-02-08T17:01:06']\n"
]
],
[
[
"# Observatory (location)",
"_____no_output_____"
]
],
[
[
"#geting the observatory\nim,hdr = fits.getdata(images[0],header=True) #reading the fits image (data + header)",
"_____no_output_____"
],
[
"observatory_loc = hdr['OBSERVAT']\nprint observatory_loc",
"mtbigelow\n"
]
],
[
[
"# Obtain UT using local time and observatory",
"_____no_output_____"
]
],
[
[
"#time formats\nprint list(Time.FORMATS)",
"[u'jd', u'mjd', u'decimalyear', u'unix', u'cxcsec', u'gps', u'plot_date', u'datetime', u'iso', u'isot', u'yday', u'fits', u'byear', u'jyear', u'byear_str', u'jyear_str']\n"
],
[
"#Let's using fits time\nteste = Time(fits_time[0],format=u'fits')",
"_____no_output_____"
],
[
"teste",
"_____no_output_____"
],
[
"teste.jd #convert my object test in fits date to julian date",
"_____no_output_____"
],
[
"#Let's make to all time series\nserie = np.zeros(len(fits_time))\nfor i in range(len(fits_time)):\n serie[i] = Time(fits_time[i],format=u'fits').jd",
"_____no_output_____"
],
[
"serie",
"_____no_output_____"
],
[
"#Let's confirm our serie\n\nhjd = np.loadtxt('../Results/hjd') #original data",
"_____no_output_____"
],
[
"hjd",
"_____no_output_____"
]
],
[
[
"# Error 404: Date don't found!\n\nYes, and I know why! THe date in abxo2b*.fits images are the date from when it were created. Because of that, we need to extract the date from original images!",
"_____no_output_____"
]
],
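[
[
"A minimal sketch of the idea: read the observation date and time keywords from an original image header and convert them to a UTC Julian Date with `astropy.time.Time`. The file name below is illustrative; `DATE-OBS` and `TIME-OBS` are the keywords inspected later in this notebook.\n\n```python\n# Minimal sketch (file name is illustrative; keywords come from the original headers)\nfrom astropy.io import fits\nfrom astropy.time import Time\n\n_, hdr0 = fits.getdata('../xo2b.0002.fits', header=True)  # one of the original images\nobs_start = Time(hdr0['DATE-OBS'] + 'T' + hdr0['TIME-OBS'], format='isot', scale='utc')\nprint(obs_start.jd)\n```",
"_____no_output_____"
]
],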
[
[
"os.chdir('../')\nimages = glob.glob('xo2b*.fits')\nos.chdir(save_path)",
"_____no_output_____"
],
[
"print images",
"['xo2b.0002.fits', 'xo2b.0004.fits', 'xo2b.0006.fits', 'xo2b.0008.fits', 'xo2b.0010.fits', 'xo2b.0012.fits', 'xo2b.0014.fits', 'xo2b.0016.fits', 'xo2b.0018.fits', 'xo2b.0020.fits', 'xo2b.0022.fits', 'xo2b.0024.fits', 'xo2b.0026.fits', 'xo2b.0028.fits', 'xo2b.0030.fits', 'xo2b.0032.fits', 'xo2b.0034.fits', 'xo2b.0036.fits', 'xo2b.0038.fits', 'xo2b.0040.fits', 'xo2b.0042.fits', 'xo2b.0044.fits', 'xo2b.0046.fits', 'xo2b.0048.fits', 'xo2b.0050.fits', 'xo2b.0052.fits', 'xo2b.0054.fits', 'xo2b.0056.fits', 'xo2b.0058.fits', 'xo2b.0060.fits', 'xo2b.0062.fits', 'xo2b.0064.fits', 'xo2b.0066.fits', 'xo2b.0068.fits', 'xo2b.0070.fits', 'xo2b.0072.fits', 'xo2b.0074.fits', 'xo2b.0076.fits', 'xo2b.0078.fits', 'xo2b.0080.fits', 'xo2b.0082.fits', 'xo2b.0084.fits', 'xo2b.0086.fits', 'xo2b.0088.fits', 'xo2b.0090.fits', 'xo2b.0092.fits', 'xo2b.0094.fits', 'xo2b.0096.fits', 'xo2b.0098.fits', 'xo2b.0100.fits', 'xo2b.0102.fits', 'xo2b.0104.fits', 'xo2b.0106.fits', 'xo2b.0108.fits', 'xo2b.0110.fits', 'xo2b.0112.fits', 'xo2b.0114.fits', 'xo2b.0116.fits', 'xo2b.0118.fits', 'xo2b.0120.fits', 'xo2b.0122.fits', 'xo2b.0124.fits', 'xo2b.0126.fits', 'xo2b.0128.fits', 'xo2b.0130.fits', 'xo2b.0132.fits', 'xo2b.0134.fits', 'xo2b.0136.fits', 'xo2b.0138.fits', 'xo2b.0140.fits', 'xo2b.0142.fits', 'xo2b.0144.fits', 'xo2b.0146.fits', 'xo2b.0148.fits', 'xo2b.0150.fits', 'xo2b.0152.fits', 'xo2b.0154.fits', 'xo2b.0156.fits', 'xo2b.0158.fits', 'xo2b.0160.fits', 'xo2b.0162.fits', 'xo2b.0164.fits', 'xo2b.0166.fits', 'xo2b.0168.fits', 'xo2b.0170.fits', 'xo2b.0172.fits', 'xo2b.0174.fits', 'xo2b.0176.fits', 'xo2b.0178.fits', 'xo2b.0180.fits', 'xo2b.0182.fits', 'xo2b.0184.fits', 'xo2b.0186.fits', 'xo2b.0188.fits', 'xo2b.0190.fits', 'xo2b.0192.fits', 'xo2b.0194.fits', 'xo2b.0196.fits', 'xo2b.0198.fits', 'xo2b.0200.fits', 'xo2b.0202.fits', 'xo2b.0204.fits', 'xo2b.0206.fits', 'xo2b.0208.fits', 'xo2b.0210.fits', 'xo2b.0212.fits', 'xo2b.0214.fits', 'xo2b.0216.fits', 'xo2b.0218.fits', 'xo2b.0220.fits', 'xo2b.0222.fits', 'xo2b.0224.fits', 'xo2b.0226.fits', 'xo2b.0228.fits', 'xo2b.0230.fits', 'xo2b.0232.fits', 'xo2b.0234.fits', 'xo2b.0236.fits', 'xo2b.0238.fits', 'xo2b.0240.fits', 'xo2b.0242.fits', 'xo2b.0244.fits', 'xo2b.0246.fits', 'xo2b.0248.fits', 'xo2b.0250.fits', 'xo2b.0252.fits', 'xo2b.0254.fits', 'xo2b.0256.fits', 'xo2b.0258.fits', 'xo2b.0260.fits', 'xo2b.0262.fits', 'xo2b.0264.fits', 'xo2b.0266.fits']\n"
],
[
"fits_time = []\nos.chdir(data_path)\nfor i in range(len(images)):\n im,hdr = fits.getdata(images[i],header=True) #reading the fits image (data + header)\n fits_time.append(hdr['DATE'])\n update_progress((i+1.)/len(images))\nos.chdir(save_path)",
"Percent: [##########] 100% Done...\n"
],
[
"print fits_time",
"['2012-12-10T05:30:40', '2012-12-10T05:32:57', '2012-12-10T05:35:14', '2012-12-10T05:37:31', '2012-12-10T05:39:48', '2012-12-10T05:42:05', '2012-12-10T05:44:22', '2012-12-10T05:46:39', '2012-12-10T05:48:56', '2012-12-10T05:51:13', '2012-12-10T05:53:30', '2012-12-10T05:55:49', '2012-12-10T05:58:06', '2012-12-10T06:00:23', '2012-12-10T06:02:40', '2012-12-10T06:04:57', '2012-12-10T06:07:14', '2012-12-10T06:09:31', '2012-12-10T06:11:48', '2012-12-10T06:14:05', '2012-12-10T06:16:22', '2012-12-10T06:18:39', '2012-12-10T06:20:56', '2012-12-10T06:23:14', '2012-12-10T06:25:31', '2012-12-10T06:27:48', '2012-12-10T06:30:05', '2012-12-10T06:32:22', '2012-12-10T06:34:39', '2012-12-10T06:36:56', '2012-12-10T06:39:15', '2012-12-10T06:41:31', '2012-12-10T06:43:48', '2012-12-10T06:46:06', '2012-12-10T06:48:23', '2012-12-10T06:50:38', '2012-12-10T06:52:57', '2012-12-10T06:55:14', '2012-12-10T06:57:29', '2012-12-10T06:59:47', '2012-12-10T07:02:04', '2012-12-10T07:04:21', '2012-12-10T07:06:38', '2012-12-10T07:08:55', '2012-12-10T07:11:12', '2012-12-10T07:13:31', '2012-12-10T07:15:47', '2012-12-10T07:18:04', '2012-12-10T07:20:21', '2012-12-10T07:22:38', '2012-12-10T07:24:55', '2012-12-10T07:27:12', '2012-12-10T07:29:29', '2012-12-10T07:31:46', '2012-12-10T07:34:05', '2012-12-10T07:36:22', '2012-12-10T07:38:39', '2012-12-10T07:40:56', '2012-12-10T07:43:13', '2012-12-10T07:45:30', '2012-12-10T07:47:48', '2012-12-10T07:50:05', '2012-12-10T07:52:22', '2012-12-10T07:54:39', '2012-12-10T07:56:56', '2012-12-10T07:59:12', '2012-12-10T08:01:29', '2012-12-10T08:03:46', '2012-12-10T08:06:04', '2012-12-10T08:08:21', '2012-12-10T08:10:38', '2012-12-10T08:12:55', '2012-12-10T08:15:13', '2012-12-10T08:17:29', '2012-12-10T08:19:46', '2012-12-10T08:22:03', '2012-12-10T08:24:22', '2012-12-10T08:26:38', '2012-12-10T08:28:55', '2012-12-10T08:31:12', '2012-12-10T08:33:30', '2012-12-10T08:35:46', '2012-12-10T08:38:03', '2012-12-10T08:40:21', '2012-12-10T08:42:38', '2012-12-10T08:44:54', '2012-12-10T08:47:11', '2012-12-10T08:49:29', '2012-12-10T08:51:45', '2012-12-10T08:59:33', '2012-12-10T09:01:50', '2012-12-10T09:04:07', '2012-12-10T09:06:24', '2012-12-10T09:08:41', '2012-12-10T09:10:58', '2012-12-10T09:13:15', '2012-12-10T09:15:32', '2012-12-10T09:17:51', '2012-12-10T09:20:06', '2012-12-10T09:22:25', '2012-12-10T09:24:42', '2012-12-10T09:26:59', '2012-12-10T09:29:17', '2012-12-10T09:31:32', '2012-12-10T09:33:49', '2012-12-10T09:36:06', '2012-12-10T09:38:23', '2012-12-10T09:40:42', '2012-12-10T09:42:57', '2012-12-10T09:45:14', '2012-12-10T09:47:34', '2012-12-10T09:49:51', '2012-12-10T09:52:06', '2012-12-10T09:54:23', '2012-12-10T09:56:42', '2012-12-10T09:58:59', '2012-12-10T10:01:16', '2012-12-10T10:03:33', '2012-12-10T10:05:50', '2012-12-10T10:08:07', '2012-12-10T10:10:24', '2012-12-10T10:12:41', '2012-12-10T10:14:58', '2012-12-10T10:17:15', '2012-12-10T10:19:32', '2012-12-10T10:21:49', '2012-12-10T10:24:06', '2012-12-10T10:26:24', '2012-12-10T10:28:41', '2012-12-10T10:30:58', '2012-12-10T10:33:15', '2012-12-10T10:35:32', '2012-12-10T10:37:49']\n"
],
[
"#Let's make to all time series\nserie = np.zeros(len(fits_time))\nfor i in range(len(fits_time)):\n serie[i] = Time(fits_time[i],format=u'fits').jd",
"_____no_output_____"
],
[
"serie",
"_____no_output_____"
],
[
"hjd",
"_____no_output_____"
],
[
"diff = serie-hjd",
"_____no_output_____"
],
[
"plt.figure()\nplt.grid()\nplt.scatter(hjd,diff)\nplt.ylim(min(diff),max(diff))",
"_____no_output_____"
],
[
"im,hdr = fits.getdata('../'+images[0],header=True)",
"_____no_output_____"
],
[
"hdr",
"_____no_output_____"
],
[
"hdr['LOCTIME'],hdr['DATE-OBS']",
"_____no_output_____"
],
[
"tempo_imagem = hdr['DATE-OBS']+' '+hdr['LOCTIME']",
"_____no_output_____"
],
[
"print tempo_imagem",
"2012-12-10 22:29:57\n"
],
[
"teste = Time(tempo_imagem,format=u'iso')",
"_____no_output_____"
],
[
"teste.jd #Nope",
"_____no_output_____"
],
[
"hjd[0]",
"_____no_output_____"
],
[
"#****** change time\nhdr['UT']",
"_____no_output_____"
],
[
"location = '+32:24:59.3 110:44:04.3'",
"_____no_output_____"
],
[
"teste = Time(hdr['DATE-OBS']+'T'+hdr['UT'],format='isot',scale='utc')",
"_____no_output_____"
],
[
"teste",
"_____no_output_____"
],
[
"teste.jd",
"_____no_output_____"
],
[
"hjd[0]",
"_____no_output_____"
],
[
"hdr.['']",
"_____no_output_____"
]
],
[
[
"# Working with date in header following Kyle's subroutine stcoox.cl in ExoDRPL",
"_____no_output_____"
]
],
[
[
"import yaml",
"_____no_output_____"
],
[
"file = yaml.load(open('C:/Users/walte/MEGA/work/codes/iraf_task/input_path.yaml'))",
"_____no_output_____"
],
[
"RA,DEC, epoch = file['RA'],file['DEC'],file['epoch']\n\nprint RA,DEC,epoch",
" 07:48:06.46 +50:13:32.9 2000.0\n"
],
[
"hdr['DATE-OBS'], hdr['UT']",
"_____no_output_____"
],
[
"local_time = Time(hdr['DATE-OBS']+'T'+hdr['ut'],format='isot')\nprint local_time.jd\n\nteste_loc_time = Time('2012-12-09'+'T'+hdr['ut'],format='isot')\nprint teste_loc_time.jd",
"2456271.72914\n2456270.72914\n"
],
[
"hdr['DATE']",
"_____no_output_____"
],
[
"Time(hdr['DATE'],format='fits',scale='tai')",
"_____no_output_____"
],
[
"hjd[0]",
"_____no_output_____"
],
[
"Time(hdr['DATE'],format='fits',scale='tai').jd2000",
"_____no_output_____"
],
[
"hdr",
"_____no_output_____"
],
[
"import datetime",
"_____no_output_____"
],
[
"hdr['DATE-OBS'],hdr['DATE'],hdr['LOCTIME'],hdr['TIME-OBS'],hdr['TIMESYS']",
"_____no_output_____"
],
[
"Time(hdr['DATE'],format='fits',scale='utc')",
"_____no_output_____"
],
[
"print Time(hdr['DATE'],scale='utc',format='isot').jd\nprint Time(hdr['DATE-OBS']+'T'+hdr['TIME-OBS'],scale='utc',format='isot').jd",
"2456271.72963\n2456271.72914\n"
],
[
"hjd[0], len(hjd)",
"_____no_output_____"
],
[
"hdr['UTC-OBS']",
"_____no_output_____"
],
[
"Time(hdr['IRAF-TLM'],scale='utc',format='isot').jd",
"_____no_output_____"
],
[
"diff = (Time(hdr['IRAF-TLM'],scale='utc',format='isot').jd - Time(hdr['DATE'],scale='utc',format='isot').jd)/2",
"_____no_output_____"
],
[
"print diff\nprint Time(hdr['IRAF-TLM'],scale='utc',format='isot').jd - diff",
"0.000370370224118\n2456271.73\n"
]
],
[
[
"# Local Time to sideral time",
"_____no_output_____"
]
],
[
[
"local_time = Time(hdr['DATE-OBS']+'T'+hdr['Time-obs'],format='isot',scale='utc')",
"_____no_output_____"
],
[
"time_sd = local_time.sidereal_time('apparent',longitude=file['lon-obs'])#with precession and nutation\nprint time_sd",
"3h24m25.7991s\n"
],
[
"time_sd.T.hms[0],time_sd.T.hms[1],time_sd.T.hms[2]",
"_____no_output_____"
],
[
"local_time.sidereal_time('mean',longitude=file['lon-obs']) #with precession",
"_____no_output_____"
],
[
"file['observatory'],file['lon-obs']",
"_____no_output_____"
],
[
"time_sd.deg, time_sd.hour",
"_____no_output_____"
]
],
[
[
"# Change degrees to hours...",
"_____no_output_____"
]
],
[
[
"from astropy.coordinates import SkyCoord\nfrom astropy import units as unit\nfrom astropy.coordinates import Angle",
"_____no_output_____"
],
[
"RA = Angle(file['RA']+file['u.RA'])\nDEC = Angle(file['DEC']+file['u.DEC'])",
"_____no_output_____"
],
[
"coordenadas = SkyCoord(RA,DEC,frame='fk5')",
"_____no_output_____"
],
[
"coordenadas",
"_____no_output_____"
],
[
"coordenadas.ra.hour, coordenadas.dec.deg,coordenadas.equinox,coordenadas.equinox.value",
"_____no_output_____"
],
[
"local_time",
"_____no_output_____"
],
[
"local_time.hjd",
"_____no_output_____"
],
[
"#airmass\nairmass = np.loadtxt('../Results/XYpos+Airmass.txt',unpack=True)",
"_____no_output_____"
],
[
"airmass[2]",
"_____no_output_____"
],
[
"hdr['DATE-OBS'],hdr['UTC-OBS']",
"_____no_output_____"
],
[
"file['time-zone'] = 7",
"_____no_output_____"
],
[
"file['time-zone']",
"_____no_output_____"
],
[
"local_time",
"_____no_output_____"
],
[
"import string",
"_____no_output_____"
],
[
"hdr['DATE-OBS'].split('-')",
"_____no_output_____"
],
[
"float(hdr['DATE-OBS'].split('-')[2])",
"_____no_output_____"
],
[
"hdr['UTC-OBS'].split(':'),hdr['UTC-OBS'].split(':')[0]",
"_____no_output_____"
],
[
"if float(hdr['UTC-OBS'].split(':')[0]) < file['time-zone']:\n new_date = float(hdr['DATE-OBS'].split('-')[2]) - 1\n hdr['DATE-OBS'] = hdr['DATE-OBS'].split('-')[0]+'-'+hdr['DATE-OBS'].split('-')[1]+'-'+str(int(new_date))",
"_____no_output_____"
],
[
"new_date",
"_____no_output_____"
],
[
"hdr['DATE-OBS']",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d05578ed92937e74a1363e044db321ebcb3279d1 | 17,193 | ipynb | Jupyter Notebook | course/Geospatial Analysis/exercise-interactive-maps.ipynb | furyhawk/kaggle_practice | 04bf045ae179db6a849fd2c2e833acc2e869f0f8 | [
"MIT"
] | 2 | 2021-11-22T09:21:25.000Z | 2021-12-18T13:12:06.000Z | course/Geospatial Analysis/exercise-interactive-maps.ipynb | furyhawk/kaggle_practice | 04bf045ae179db6a849fd2c2e833acc2e869f0f8 | [
"MIT"
] | null | null | null | course/Geospatial Analysis/exercise-interactive-maps.ipynb | furyhawk/kaggle_practice | 04bf045ae179db6a849fd2c2e833acc2e869f0f8 | [
"MIT"
] | null | null | null | 17,193 | 17,193 | 0.738498 | [
[
[
"**This notebook is an exercise in the [Geospatial Analysis](https://www.kaggle.com/learn/geospatial-analysis) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/interactive-maps).**\n\n---\n",
"_____no_output_____"
],
[
"# Introduction\n\nYou are an urban safety planner in Japan, and you are analyzing which areas of Japan need extra earthquake reinforcement. Which areas are both high in population density and prone to earthquakes?\n\n<center>\n<img src=\"https://i.imgur.com/Kuh9gPj.png\" width=\"450\"><br/>\n</center>\n\nBefore you get started, run the code cell below to set everything up.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport geopandas as gpd\n\nimport folium\nfrom folium import Choropleth\nfrom folium.plugins import HeatMap\n\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.geospatial.ex3 import *",
"_____no_output_____"
]
],
[
[
"We define a function `embed_map()` for displaying interactive maps. It accepts two arguments: the variable containing the map, and the name of the HTML file where the map will be saved.\n\nThis function ensures that the maps are visible [in all web browsers](https://github.com/python-visualization/folium/issues/812).",
"_____no_output_____"
]
],
[
[
"def embed_map(m, file_name):\n from IPython.display import IFrame\n m.save(file_name)\n return IFrame(file_name, width='100%', height='500px')",
"_____no_output_____"
]
],
[
[
"# Exercises\n\n### 1) Do earthquakes coincide with plate boundaries?\n\nRun the code cell below to create a DataFrame `plate_boundaries` that shows global plate boundaries. The \"coordinates\" column is a list of (latitude, longitude) locations along the boundaries.",
"_____no_output_____"
]
],
[
[
"plate_boundaries = gpd.read_file(\"../input/geospatial-learn-course-data/Plate_Boundaries/Plate_Boundaries/Plate_Boundaries.shp\")\nplate_boundaries['coordinates'] = plate_boundaries.apply(lambda x: [(b,a) for (a,b) in list(x.geometry.coords)], axis='columns')\nplate_boundaries.drop('geometry', axis=1, inplace=True)\n\nplate_boundaries.head()",
"_____no_output_____"
]
],
[
[
"Next, run the code cell below without changes to load the historical earthquake data into a DataFrame `earthquakes`.",
"_____no_output_____"
]
],
[
[
"# Load the data and print the first 5 rows\nearthquakes = pd.read_csv(\"../input/geospatial-learn-course-data/earthquakes1970-2014.csv\", parse_dates=[\"DateTime\"])\nearthquakes.head()",
"_____no_output_____"
]
],
[
[
"The code cell below visualizes the plate boundaries on a map. Use all of the earthquake data to add a heatmap to the same map, to determine whether earthquakes coincide with plate boundaries. ",
"_____no_output_____"
]
],
[
[
"# Create a base map with plate boundaries\nm_1 = folium.Map(location=[35,136], tiles='cartodbpositron', zoom_start=5)\nfor i in range(len(plate_boundaries)):\n folium.PolyLine(locations=plate_boundaries.coordinates.iloc[i], weight=2, color='black').add_to(m_1)\n\n# Your code here: Add a heatmap to the map\nHeatMap(data=earthquakes[['Latitude', 'Longitude']], radius=10).add_to(m_1)\n\n# Uncomment to see a hint\n#q_1.a.hint()\n\n# Show the map\nembed_map(m_1, 'q_1.html')",
"_____no_output_____"
],
[
"# Get credit for your work after you have created a map\nq_1.a.check()\n\n# Uncomment to see our solution (your code may look different!)\nq_1.a.solution()",
"_____no_output_____"
]
],
[
[
"So, given the map above, do earthquakes coincide with plate boundaries?",
"_____no_output_____"
]
],
[
[
"# View the solution (Run this code cell to receive credit!)\nq_1.b.solution()",
"_____no_output_____"
]
],
[
[
"### 2) Is there a relationship between earthquake depth and proximity to a plate boundary in Japan?\n\nYou recently read that the depth of earthquakes tells us [important information](https://www.usgs.gov/faqs/what-depth-do-earthquakes-occur-what-significance-depth?qt-news_science_products=0#qt-news_science_products) about the structure of the earth. You're interested to see if there are any intereresting global patterns, and you'd also like to understand how depth varies in Japan.\n\n",
"_____no_output_____"
]
],
[
[
"# Create a base map with plate boundaries\nm_2 = folium.Map(location=[35,136], tiles='cartodbpositron', zoom_start=5)\nfor i in range(len(plate_boundaries)):\n folium.PolyLine(locations=plate_boundaries.coordinates.iloc[i], weight=2, color='black').add_to(m_2)\n \n# Your code here: Add a map to visualize earthquake depth\n# Custom function to assign a color to each circle\ndef color_producer(val):\n if val < 50:\n return 'forestgreen'\n elif val < 100:\n return 'darkorange'\n else:\n return 'darkred'\n# Add a map to visualize earthquake depth\nfor i in range(0,len(earthquakes)):\n folium.Circle(\n location=[earthquakes.iloc[i]['Latitude'], earthquakes.iloc[i]['Longitude']],\n radius=2000,\n color=color_producer(earthquakes.iloc[i]['Depth'])).add_to(m_2)\n# Uncomment to see a hint\n#q_2.a.hint()\n\n# View the map\nembed_map(m_2, 'q_2.html')",
"_____no_output_____"
],
[
"# Get credit for your work after you have created a map\nq_2.a.check()\n\n# Uncomment to see our solution (your code may look different!)\nq_2.a.solution()",
"_____no_output_____"
]
],
[
[
"Can you detect a relationship between proximity to a plate boundary and earthquake depth? Does this pattern hold globally? In Japan?",
"_____no_output_____"
]
],
[
[
"# View the solution (Run this code cell to receive credit!)\nq_2.b.solution()",
"_____no_output_____"
]
],
[
[
"### 3) Which prefectures have high population density?\n\nRun the next code cell (without changes) to create a GeoDataFrame `prefectures` that contains the geographical boundaries of Japanese prefectures.",
"_____no_output_____"
]
],
[
[
"# GeoDataFrame with prefecture boundaries\nprefectures = gpd.read_file(\"../input/geospatial-learn-course-data/japan-prefecture-boundaries/japan-prefecture-boundaries/japan-prefecture-boundaries.shp\")\nprefectures.set_index('prefecture', inplace=True)\nprefectures.head()",
"_____no_output_____"
]
],
[
[
"The next code cell creates a DataFrame `stats` containing the population, area (in square kilometers), and population density (per square kilometer) for each Japanese prefecture. Run the code cell without changes.",
"_____no_output_____"
]
],
[
[
"# DataFrame containing population of each prefecture\npopulation = pd.read_csv(\"../input/geospatial-learn-course-data/japan-prefecture-population.csv\")\npopulation.set_index('prefecture', inplace=True)\n\n# Calculate area (in square kilometers) of each prefecture\narea_sqkm = pd.Series(prefectures.geometry.to_crs(epsg=32654).area / 10**6, name='area_sqkm')\nstats = population.join(area_sqkm)\n\n# Add density (per square kilometer) of each prefecture\nstats['density'] = stats[\"population\"] / stats[\"area_sqkm\"]\nstats.head()",
"_____no_output_____"
]
],
[
[
"Use the next code cell to create a choropleth map to visualize population density.",
"_____no_output_____"
]
],
[
[
"# Create a base map\nm_3 = folium.Map(location=[35,136], tiles='cartodbpositron', zoom_start=5)\n\n# Your code here: create a choropleth map to visualize population density\nChoropleth(geo_data=prefectures['geometry'].__geo_interface__, \n data=stats['density'], \n key_on=\"feature.id\", \n fill_color='YlGnBu', \n legend_name='Population density (per square kilometer)'\n ).add_to(m_3)\n\n# Uncomment to see a hint\n# q_3.a.hint()\n\n# View the map\nembed_map(m_3, 'q_3.html')",
"_____no_output_____"
],
[
"# Get credit for your work after you have created a map\nq_3.a.check()\n\n# Uncomment to see our solution (your code may look different!)\nq_3.a.solution()",
"_____no_output_____"
]
],
[
[
"Which three prefectures have relatively higher density than the others? Are they spread throughout the country, or all located in roughly the same geographical region? (*If you're unfamiliar with Japanese geography, you might find [this map](https://en.wikipedia.org/wiki/Prefectures_of_Japan) useful to answer the questions.)*",
"_____no_output_____"
]
],
[
[
"# View the solution (Run this code cell to receive credit!)\nq_3.b.solution()",
"_____no_output_____"
]
],
[
[
"### 4) Which high-density prefecture is prone to high-magnitude earthquakes?\n\nCreate a map to suggest one prefecture that might benefit from earthquake reinforcement. Your map should visualize both density and earthquake magnitude.",
"_____no_output_____"
]
],
[
[
"# Create a base map\nm_4 = folium.Map(location=[35,136], tiles='cartodbpositron', zoom_start=5)\n\n# Your code here: create a map\ndef color_producer(magnitude):\n if magnitude > 6.5:\n return 'red'\n else:\n return 'green'\n\nChoropleth(\n geo_data=prefectures['geometry'].__geo_interface__,\n data=stats['density'],\n key_on=\"feature.id\",\n fill_color='BuPu',\n legend_name='Population density (per square kilometer)').add_to(m_4)\n\nfor i in range(0,len(earthquakes)):\n folium.Circle(\n location=[earthquakes.iloc[i]['Latitude'], earthquakes.iloc[i]['Longitude']],\n popup=(\"{} ({})\").format(\n earthquakes.iloc[i]['Magnitude'],\n earthquakes.iloc[i]['DateTime'].year),\n radius=earthquakes.iloc[i]['Magnitude']**5.5,\n color=color_producer(earthquakes.iloc[i]['Magnitude'])).add_to(m_4)\n\n# Uncomment to see a hint\nq_4.a.hint()\n\n# View the map\nembed_map(m_4, 'q_4.html')",
"_____no_output_____"
],
[
"# Get credit for your work after you have created a map\nq_4.a.check()\n\n# Uncomment to see our solution (your code may look different!)\nq_4.a.solution()",
"_____no_output_____"
]
],
[
[
"Which prefecture do you recommend for extra earthquake reinforcement?",
"_____no_output_____"
]
],
[
[
"# View the solution (Run this code cell to receive credit!)\nq_4.b.solution()",
"_____no_output_____"
]
],
[
[
"# Keep going\n\nLearn how to convert names of places to geographic coordinates with **[geocoding](https://www.kaggle.com/alexisbcook/manipulating-geospatial-data)**. You'll also explore special ways to join information from multiple GeoDataFrames.",
"_____no_output_____"
],
[
"---\n\n\n\n\n*Have questions or comments? Visit the [course discussion forum](https://www.kaggle.com/learn/geospatial-analysis/discussion) to chat with other learners.*",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d055a1010f70f0602fae7c0420217cc404829063 | 77,592 | ipynb | Jupyter Notebook | _Project_Analysis/Neural_Network_model_training.ipynb | sijal001/Yoga_Pose_Detection | 6e2da2265f015d5fbda997e49cf150206e030c98 | [
"Unlicense"
] | 1 | 2021-05-25T12:06:14.000Z | 2021-05-25T12:06:14.000Z | _Project_Analysis/Neural_Network_model_training.ipynb | sijal001/Yoga_Pose_Detection | 6e2da2265f015d5fbda997e49cf150206e030c98 | [
"Unlicense"
] | null | null | null | _Project_Analysis/Neural_Network_model_training.ipynb | sijal001/Yoga_Pose_Detection | 6e2da2265f015d5fbda997e49cf150206e030c98 | [
"Unlicense"
] | 1 | 2021-06-07T19:29:22.000Z | 2021-06-07T19:29:22.000Z | 103.871486 | 22,176 | 0.766832 | [
[
[
"# Classification with Neural Network for Yoga poses detection",
"_____no_output_____"
],
[
"## Import Dependencies ",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport os\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras.preprocessing.image import load_img, img_to_array\nfrom tensorflow.python.keras.preprocessing.image import ImageDataGenerator\n\nfrom sklearn.metrics import classification_report, log_loss, accuracy_score\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
]
],
[
[
"## Getting the data (images) and labels",
"_____no_output_____"
]
],
[
[
"# Data path\n\ntrain_dir = 'pose_recognition_data/dataset'",
"_____no_output_____"
],
[
"# Getting the folders name to be able to labelize the data\n\nName=[]\nfor file in os.listdir(train_dir):\n Name+=[file]\nprint(Name)\nprint(len(Name))",
"['adho mukha svanasana', 'adho mukha vriksasana', 'agnistambhasana', 'ananda balasana', 'anantasana', 'anjaneyasana', 'ardha bhekasana', 'ardha chandrasana', 'ardha matsyendrasana', 'ardha pincha mayurasana', 'ardha uttanasana', 'ashtanga namaskara', 'astavakrasana', 'baddha konasana', 'bakasana', 'balasana', 'bhairavasana', 'bharadvajasana i', 'bhekasana', 'bhujangasana', 'bhujapidasana', 'bitilasana', 'camatkarasana', 'chakravakasana', 'chaturanga dandasana', 'dandasana', 'dhanurasana', 'durvasasana', 'dwi pada viparita dandasana', 'eka pada koundinyanasana i', 'eka pada koundinyanasana ii', 'eka pada rajakapotasana', 'eka pada rajakapotasana ii', 'ganda bherundasana', 'garbha pindasana', 'garudasana', 'gomukhasana', 'halasana', 'hanumanasana', 'janu sirsasana', 'kapotasana', 'krounchasana', 'kurmasana', 'lolasana', 'makara adho mukha svanasana', 'makarasana', 'malasana', 'marichyasana i', 'marichyasana iii', 'marjaryasana', 'matsyasana', 'mayurasana', 'natarajasana', 'padangusthasana', 'padmasana', 'parighasana', 'paripurna navasana', 'parivrtta janu sirsasana', 'parivrtta parsvakonasana', 'parivrtta trikonasana', 'parsva bakasana', 'parsvottanasana', 'pasasana', 'paschimottanasana', 'phalakasana', 'pincha mayurasana', 'prasarita padottanasana', 'purvottanasana', 'salabhasana', 'salamba bhujangasana', 'salamba sarvangasana', 'salamba sirsasana', 'savasana', 'setu bandha sarvangasana', 'simhasana', 'sukhasana', 'supta baddha konasana', 'supta matsyendrasana', 'supta padangusthasana', 'supta virasana', 'tadasana', 'tittibhasana', 'tolasana', 'tulasana', 'upavistha konasana', 'urdhva dhanurasana', 'urdhva hastasana', 'urdhva mukha svanasana', 'urdhva prasarita eka padasana', 'ustrasana', 'utkatasana', 'uttana shishosana', 'uttanasana', 'utthita ashwa sanchalanasana', 'utthita hasta padangustasana', 'utthita parsvakonasana', 'utthita trikonasana', 'vajrasana', 'vasisthasana', 'viparita karani', 'virabhadrasana i', 'virabhadrasana ii', 'virabhadrasana iii', 'virasana', 'vriksasana', 'vrischikasana', 'yoganidrasana']\n107\n"
],
[
"N=[]\nfor i in range(len(Name)):\n N+=[i]\n \nnormal_mapping=dict(zip(Name,N)) \nreverse_mapping=dict(zip(N,Name)) \n\ndef mapper(value):\n return reverse_mapping[value]",
"_____no_output_____"
],
[
"dataset=[]\ntestset=[]\ncount=0\nfor file in os.listdir(train_dir):\n t=0\n path=os.path.join(train_dir,file)\n for im in os.listdir(path):\n image=load_img(os.path.join(path,im), grayscale=False, color_mode='rgb', target_size=(40,40))\n image=img_to_array(image)\n image=image/255.0\n if t<60:\n dataset+=[[image,count]]\n else:\n testset+=[[image,count]]\n t+=1\n count=count+1",
"C:\\Users\\rolin\\anaconda3\\envs\\vision\\lib\\site-packages\\PIL\\Image.py:962: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images\n warnings.warn(\n"
],
[
"data,labels0=zip(*dataset)\ntest,testlabels0=zip(*testset)",
"_____no_output_____"
],
[
"labels1=to_categorical(labels0)\nlabels=np.array(labels1)",
"_____no_output_____"
],
[
"# Transforming the into Numerical Data\ndata=np.array(data)\ntest=np.array(test)",
"_____no_output_____"
],
[
"trainx,testx,trainy,testy=train_test_split(data,labels,test_size=0.2,random_state=44)",
"_____no_output_____"
],
[
"print(trainx.shape)\nprint(testx.shape)\nprint(trainy.shape)\nprint(testy.shape)",
"(4495, 40, 40, 3)\n(1124, 40, 40, 3)\n(4495, 107)\n(1124, 107)\n"
],
[
"# Data augmentation\n\ndatagen = ImageDataGenerator(horizontal_flip=True,vertical_flip=True,rotation_range=20,zoom_range=0.2,\n width_shift_range=0.2,height_shift_range=0.2,shear_range=0.1,fill_mode=\"nearest\")",
"_____no_output_____"
],
[
"# Loading the pretrained model , here DenseNet201\npretrained_model3 = tf.keras.applications.DenseNet201(input_shape=(40,40,3),include_top=False,weights='imagenet',pooling='avg')\npretrained_model3.trainable = False",
"_____no_output_____"
],
[
"inputs3 = pretrained_model3.input\nx3 = tf.keras.layers.Dense(128, activation='relu')(pretrained_model3.output)\noutputs3 = tf.keras.layers.Dense(107, activation='softmax')(x3)\nmodel = tf.keras.Model(inputs=inputs3, outputs=outputs3)",
"_____no_output_____"
],
[
"model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])",
"_____no_output_____"
],
[
"his=model.fit(datagen.flow(trainx,trainy,batch_size=32),validation_data=(testx,testy),epochs=50)",
"Epoch 1/50\n141/141 [==============================] - 16s 112ms/step - loss: 4.5914 - accuracy: 0.0289 - val_loss: 4.3583 - val_accuracy: 0.0427\nEpoch 2/50\n141/141 [==============================] - 14s 101ms/step - loss: 4.2505 - accuracy: 0.0685 - val_loss: 3.9496 - val_accuracy: 0.0952\nEpoch 3/50\n141/141 [==============================] - 15s 103ms/step - loss: 3.9702 - accuracy: 0.0977 - val_loss: 3.7706 - val_accuracy: 0.1210\nEpoch 4/50\n141/141 [==============================] - 14s 101ms/step - loss: 3.8243 - accuracy: 0.1179 - val_loss: 3.6197 - val_accuracy: 0.1317\nEpoch 5/50\n141/141 [==============================] - 16s 110ms/step - loss: 3.7160 - accuracy: 0.1433 - val_loss: 3.5676 - val_accuracy: 0.1584\nEpoch 6/50\n141/141 [==============================] - 16s 115ms/step - loss: 3.6107 - accuracy: 0.1588 - val_loss: 3.4713 - val_accuracy: 0.1815\nEpoch 7/50\n141/141 [==============================] - 16s 117ms/step - loss: 3.5385 - accuracy: 0.1755 - val_loss: 3.4932 - val_accuracy: 0.1744\nEpoch 8/50\n141/141 [==============================] - 16s 115ms/step - loss: 3.4554 - accuracy: 0.1820 - val_loss: 3.4227 - val_accuracy: 0.1904\nEpoch 9/50\n141/141 [==============================] - 16s 110ms/step - loss: 3.3996 - accuracy: 0.1960 - val_loss: 3.3644 - val_accuracy: 0.2002\nEpoch 10/50\n141/141 [==============================] - 15s 106ms/step - loss: 3.3661 - accuracy: 0.2018 - val_loss: 3.3423 - val_accuracy: 0.1948\nEpoch 11/50\n141/141 [==============================] - 14s 101ms/step - loss: 3.3229 - accuracy: 0.2058 - val_loss: 3.3392 - val_accuracy: 0.2073\nEpoch 12/50\n141/141 [==============================] - 14s 99ms/step - loss: 3.2997 - accuracy: 0.2056 - val_loss: 3.2917 - val_accuracy: 0.2091\nEpoch 13/50\n141/141 [==============================] - 15s 108ms/step - loss: 3.2496 - accuracy: 0.2171 - val_loss: 3.2836 - val_accuracy: 0.2189\nEpoch 14/50\n141/141 [==============================] - 16s 112ms/step - loss: 3.1839 - accuracy: 0.2291 - val_loss: 3.3423 - val_accuracy: 0.2180\nEpoch 15/50\n141/141 [==============================] - 16s 113ms/step - loss: 3.1578 - accuracy: 0.2440 - val_loss: 3.2960 - val_accuracy: 0.2144\nEpoch 16/50\n141/141 [==============================] - 16s 117ms/step - loss: 3.1533 - accuracy: 0.2314 - val_loss: 3.2559 - val_accuracy: 0.2251\nEpoch 17/50\n141/141 [==============================] - 16s 114ms/step - loss: 3.1300 - accuracy: 0.2311 - val_loss: 3.2709 - val_accuracy: 0.2224\nEpoch 18/50\n141/141 [==============================] - 16s 114ms/step - loss: 3.1040 - accuracy: 0.2458 - val_loss: 3.2919 - val_accuracy: 0.2242\nEpoch 19/50\n141/141 [==============================] - 15s 110ms/step - loss: 3.0713 - accuracy: 0.2440 - val_loss: 3.1814 - val_accuracy: 0.2313\nEpoch 20/50\n141/141 [==============================] - 16s 111ms/step - loss: 3.0598 - accuracy: 0.2449 - val_loss: 3.2570 - val_accuracy: 0.2198\nEpoch 21/50\n141/141 [==============================] - 16s 111ms/step - loss: 3.0384 - accuracy: 0.2581 - val_loss: 3.2443 - val_accuracy: 0.2233\nEpoch 22/50\n141/141 [==============================] - 16s 111ms/step - loss: 2.9978 - accuracy: 0.2590 - val_loss: 3.2301 - val_accuracy: 0.2242\nEpoch 23/50\n141/141 [==============================] - 16s 111ms/step - loss: 3.0114 - accuracy: 0.2578 - val_loss: 3.2433 - val_accuracy: 0.2144\nEpoch 24/50\n141/141 [==============================] - 16s 111ms/step - loss: 2.9668 - accuracy: 0.2636 - val_loss: 3.2163 - val_accuracy: 0.2224\nEpoch 
25/50\n141/141 [==============================] - 16s 117ms/step - loss: 2.9405 - accuracy: 0.2661 - val_loss: 3.2361 - val_accuracy: 0.2420\nEpoch 26/50\n141/141 [==============================] - 16s 114ms/step - loss: 2.9531 - accuracy: 0.2687 - val_loss: 3.1938 - val_accuracy: 0.2411\nEpoch 27/50\n141/141 [==============================] - 16s 112ms/step - loss: 2.8845 - accuracy: 0.2861 - val_loss: 3.1931 - val_accuracy: 0.2491\nEpoch 28/50\n141/141 [==============================] - 16s 116ms/step - loss: 2.8873 - accuracy: 0.2881 - val_loss: 3.2396 - val_accuracy: 0.2286\nEpoch 29/50\n141/141 [==============================] - 16s 113ms/step - loss: 2.8612 - accuracy: 0.2854 - val_loss: 3.2430 - val_accuracy: 0.2375\nEpoch 30/50\n141/141 [==============================] - 16s 114ms/step - loss: 2.8370 - accuracy: 0.2892 - val_loss: 3.2405 - val_accuracy: 0.2286\nEpoch 31/50\n141/141 [==============================] - 17s 118ms/step - loss: 2.8601 - accuracy: 0.2903 - val_loss: 3.1923 - val_accuracy: 0.2331\nEpoch 32/50\n141/141 [==============================] - 18s 131ms/step - loss: 2.8430 - accuracy: 0.2843 - val_loss: 3.2553 - val_accuracy: 0.2393\nEpoch 33/50\n141/141 [==============================] - 17s 118ms/step - loss: 2.8488 - accuracy: 0.3019 - val_loss: 3.2067 - val_accuracy: 0.2295\nEpoch 34/50\n141/141 [==============================] - 16s 110ms/step - loss: 2.8156 - accuracy: 0.2999 - val_loss: 3.2195 - val_accuracy: 0.2402\nEpoch 35/50\n141/141 [==============================] - 16s 110ms/step - loss: 2.8122 - accuracy: 0.3006 - val_loss: 3.2143 - val_accuracy: 0.2464\nEpoch 36/50\n141/141 [==============================] - 15s 110ms/step - loss: 2.8381 - accuracy: 0.2970 - val_loss: 3.2615 - val_accuracy: 0.2331\nEpoch 37/50\n141/141 [==============================] - 16s 110ms/step - loss: 2.8183 - accuracy: 0.3052 - val_loss: 3.2228 - val_accuracy: 0.2429\nEpoch 38/50\n141/141 [==============================] - 17s 123ms/step - loss: 2.7808 - accuracy: 0.3099 - val_loss: 3.2358 - val_accuracy: 0.2260\nEpoch 39/50\n141/141 [==============================] - 16s 115ms/step - loss: 2.7821 - accuracy: 0.2943 - val_loss: 3.1560 - val_accuracy: 0.2429\nEpoch 40/50\n141/141 [==============================] - 16s 111ms/step - loss: 2.7912 - accuracy: 0.3046 - val_loss: 3.2276 - val_accuracy: 0.2367\nEpoch 41/50\n141/141 [==============================] - 16s 113ms/step - loss: 2.7407 - accuracy: 0.3130 - val_loss: 3.2287 - val_accuracy: 0.2482\nEpoch 42/50\n141/141 [==============================] - 19s 134ms/step - loss: 2.7629 - accuracy: 0.3106 - val_loss: 3.2435 - val_accuracy: 0.2358\nEpoch 43/50\n141/141 [==============================] - 15s 103ms/step - loss: 2.7458 - accuracy: 0.3201 - val_loss: 3.3039 - val_accuracy: 0.2429\nEpoch 44/50\n141/141 [==============================] - 15s 107ms/step - loss: 2.7286 - accuracy: 0.3219 - val_loss: 3.2536 - val_accuracy: 0.2527\nEpoch 45/50\n141/141 [==============================] - 16s 111ms/step - loss: 2.7291 - accuracy: 0.3121 - val_loss: 3.2292 - val_accuracy: 0.2553\nEpoch 46/50\n141/141 [==============================] - 16s 110ms/step - loss: 2.7470 - accuracy: 0.3108 - val_loss: 3.2958 - val_accuracy: 0.2482\nEpoch 47/50\n141/141 [==============================] - 16s 113ms/step - loss: 2.7486 - accuracy: 0.3103 - val_loss: 3.2967 - val_accuracy: 0.2384\nEpoch 48/50\n141/141 [==============================] - 16s 113ms/step - loss: 2.6745 - accuracy: 0.3293 - val_loss: 3.2729 - val_accuracy: 0.2393\nEpoch 
49/50\n141/141 [==============================] - 16s 114ms/step - loss: 2.6832 - accuracy: 0.3190 - val_loss: 3.2732 - val_accuracy: 0.2411\nEpoch 50/50\n141/141 [==============================] - 16s 112ms/step - loss: 2.7199 - accuracy: 0.3219 - val_loss: 3.2757 - val_accuracy: 0.2411\n"
],
[
"y_pred=model.predict(testx)\npred=np.argmax(y_pred,axis=1)\nground = np.argmax(testy,axis=1)\nprint(classification_report(ground,pred))",
" precision recall f1-score support\n\n 0 0.25 0.80 0.38 5\n 1 0.00 0.00 0.00 9\n 2 0.33 0.14 0.20 7\n 3 0.27 0.27 0.27 11\n 4 0.38 0.62 0.48 8\n 5 0.14 0.15 0.15 13\n 6 0.00 0.00 0.00 5\n 7 0.60 0.38 0.46 8\n 8 0.38 0.50 0.43 10\n 9 0.42 0.50 0.45 10\n 10 0.00 0.00 0.00 10\n 11 0.43 0.33 0.38 9\n 12 0.23 0.70 0.35 10\n 13 0.17 0.12 0.14 17\n 14 0.38 0.38 0.38 8\n 15 0.40 0.20 0.27 10\n 16 0.25 0.25 0.25 12\n 17 0.00 0.00 0.00 14\n 18 0.50 0.11 0.18 9\n 19 0.15 0.22 0.18 9\n 20 0.11 0.08 0.09 13\n 21 0.17 0.06 0.08 18\n 22 0.11 0.22 0.15 9\n 23 0.26 0.50 0.34 10\n 24 0.67 0.27 0.38 15\n 25 0.50 0.50 0.50 12\n 26 0.20 0.18 0.19 11\n 27 0.14 0.29 0.19 7\n 28 0.38 0.25 0.30 12\n 29 0.00 0.00 0.00 10\n 30 0.25 0.18 0.21 11\n 31 0.08 0.09 0.09 11\n 32 0.17 0.07 0.10 14\n 33 0.00 0.00 0.00 10\n 34 0.33 0.45 0.38 11\n 35 0.10 0.25 0.14 4\n 36 0.43 0.33 0.38 9\n 37 0.18 0.44 0.26 9\n 38 0.19 0.38 0.25 8\n 39 0.22 0.18 0.20 11\n 40 0.21 0.29 0.24 14\n 41 0.00 0.00 0.00 16\n 42 0.11 0.20 0.14 5\n 43 1.00 0.11 0.20 9\n 44 0.50 0.33 0.40 6\n 45 0.14 0.20 0.17 10\n 46 0.27 0.20 0.23 15\n 47 0.00 0.00 0.00 7\n 48 0.08 0.33 0.12 3\n 49 0.33 0.25 0.29 8\n 50 0.17 0.09 0.12 11\n 51 0.00 0.00 0.00 10\n 52 0.25 0.10 0.14 10\n 53 0.17 0.50 0.25 2\n 54 0.18 0.33 0.23 15\n 55 0.67 0.12 0.20 17\n 56 0.62 0.31 0.42 16\n 57 0.23 0.30 0.26 10\n 58 0.00 0.00 0.00 7\n 59 0.00 0.00 0.00 12\n 60 0.33 0.25 0.29 16\n 61 0.00 0.00 0.00 9\n 62 0.28 0.56 0.37 9\n 63 0.30 0.30 0.30 10\n 64 0.00 0.00 0.00 12\n 65 0.33 0.20 0.25 5\n 66 0.33 0.75 0.46 12\n 67 0.15 0.57 0.24 7\n 68 0.15 0.18 0.17 11\n 69 0.17 0.08 0.11 13\n 70 0.36 0.50 0.42 8\n 71 0.33 0.14 0.20 14\n 72 0.00 0.00 0.00 8\n 73 0.20 0.27 0.23 11\n 74 0.50 0.27 0.35 11\n 75 0.14 0.43 0.21 7\n 76 0.50 0.08 0.14 12\n 77 0.33 0.08 0.13 12\n 78 0.00 0.00 0.00 10\n 79 0.36 0.44 0.40 9\n 80 0.15 0.42 0.22 12\n 81 0.14 0.09 0.11 11\n 82 0.33 0.27 0.30 15\n 83 0.08 0.17 0.11 6\n 84 0.50 0.40 0.44 10\n 85 0.17 0.45 0.25 11\n 86 0.17 0.14 0.15 7\n 87 0.20 0.38 0.26 8\n 88 0.12 0.07 0.09 14\n 89 0.38 0.20 0.26 15\n 90 0.22 0.13 0.17 15\n 91 0.50 0.19 0.27 16\n 92 0.22 0.27 0.24 15\n 93 0.00 0.00 0.00 3\n 94 0.00 0.00 0.00 12\n 95 0.42 0.57 0.48 14\n 96 0.38 0.57 0.46 14\n 97 0.38 0.45 0.42 11\n 98 0.00 0.00 0.00 8\n 99 0.25 0.08 0.12 12\n 100 0.83 0.36 0.50 14\n 101 0.35 0.89 0.50 9\n 102 0.50 0.36 0.42 14\n 103 0.10 0.18 0.13 11\n 104 0.37 0.58 0.45 12\n 105 0.00 0.00 0.00 9\n 106 0.09 0.25 0.13 8\n\n accuracy 0.24 1124\n macro avg 0.25 0.25 0.22 1124\nweighted avg 0.26 0.24 0.22 1124\n\n"
],
[
"#Checking accuracy of our model\n\nget_acc = his.history['accuracy']\nvalue_acc = his.history['val_accuracy']\nget_loss = his.history['loss']\nvalidation_loss = his.history['val_loss']\n\nepochs = range(len(get_acc))\nplt.plot(epochs, get_acc, 'r', label='Accuracy of Training data')\nplt.plot(epochs, value_acc, 'b', label='Accuracy of Validation data')\nplt.title('Training vs validation accuracy')\nplt.legend(loc=0)\nplt.figure()\nplt.show()",
"_____no_output_____"
],
[
"# Checking the loss of data\n\nepochs = range(len(get_loss))\nplt.plot(epochs, get_loss, 'r', label='Loss of Training data')\nplt.plot(epochs, validation_loss, 'b', label='Loss of Validation data')\nplt.title('Training vs validation loss')\nplt.legend(loc=0)\nplt.figure()\nplt.show()",
"_____no_output_____"
],
[
"load_img(\"pose_recognition_data/dataset/adho mukha svanasana/95. downward-facing-dog-pose.png\",target_size=(40,40))",
"_____no_output_____"
],
[
"image = load_img(\"pose_recognition_data/dataset/adho mukha svanasana/95. downward-facing-dog-pose.png\",target_size=(40,40))\n\nimage=img_to_array(image) \nimage=image/255.0\nprediction_image=np.array(image)\nprediction_image= np.expand_dims(image, axis=0)",
"_____no_output_____"
],
[
"prediction=model.predict(prediction_image)\nvalue=np.argmax(prediction)\nmove_name=mapper(value)\nprint(\"Prediction is {}.\".format(move_name))",
"Prediction is adho mukha svanasana.\n"
],
[
"print(test.shape)\npred2=model.predict(test)\nprint(pred2.shape)\n\nPRED=[]\nfor item in pred2:\n value2=np.argmax(item) \n PRED+=[value2]",
"(375, 40, 40, 3)\n(375, 107)\n"
],
[
"ANS=testlabels0",
"_____no_output_____"
],
[
"accuracy=accuracy_score(ANS,PRED)\nprint(accuracy)",
"0.27466666666666667\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d055a2125a889d9d77d56a4ba3f6d09c7cdeb3ae | 102,617 | ipynb | Jupyter Notebook | tutorials/single_table_data/04_TVAE_Model.ipynb | HDI-Project/SDV | 04cfd6557d3676fa487e49e1cbd56eecd69a9bc6 | [
"MIT"
] | 39 | 2018-07-07T01:02:42.000Z | 2019-12-17T13:53:47.000Z | tutorials/single_table_data/04_TVAE_Model.ipynb | HDI-Project/SDV | 04cfd6557d3676fa487e49e1cbd56eecd69a9bc6 | [
"MIT"
] | 75 | 2018-06-29T00:35:02.000Z | 2019-12-23T16:59:55.000Z | tutorials/single_table_data/04_TVAE_Model.ipynb | HDI-Project/SDV | 04cfd6557d3676fa487e49e1cbd56eecd69a9bc6 | [
"MIT"
] | 31 | 2018-10-29T13:16:38.000Z | 2020-01-02T13:10:42.000Z | 35.25146 | 426 | 0.404777 | [
[
[
"TVAE Model\n===========\n\nIn this guide we will go through a series of steps that will let you\ndiscover functionalities of the `TVAE` model, including how to:\n\n- Create an instance of `TVAE`.\n- Fit the instance to your data.\n- Generate synthetic versions of your data.\n- Use `TVAE` to anonymize PII information.\n- Specify hyperparameters to improve the output quality.\n\nWhat is TVAE?\n--------------\n\nThe `sdv.tabular.TVAE` model is based on the VAE-based Deep Learning\ndata synthesizer which was presented at the NeurIPS 2020 conference by\nthe paper titled [Modeling Tabular data using Conditional\nGAN](https://arxiv.org/abs/1907.00503).\n\nLet\\'s now discover how to learn a dataset and later on generate\nsynthetic data with the same format and statistical properties by using\nthe `TVAE` class from SDV.\n\nQuick Usage\n-----------\n\nWe will start by loading one of our demo datasets, the\n`student_placements`, which contains information about MBA students that\napplied for placements during the year 2020.\n\n<div class=\"alert alert-warning\">\n\n**Warning**\n\nIn order to follow this guide you need to have `tvae` installed on your\nsystem. If you have not done it yet, please install `tvae` now by\nexecuting the command `pip install sdv` in a terminal.\n\n</div>",
"_____no_output_____"
]
],
[
[
"from sdv.demo import load_tabular_demo\n\ndata = load_tabular_demo('student_placements')\ndata.head()",
"_____no_output_____"
]
],
[
[
"As you can see, this table contains information about students which\nincludes, among other things:\n\n- Their id and gender\n- Their grades and specializations\n- Their work experience\n- The salary that they were offered\n- The duration and dates of their placement\n\nYou will notice that there is data with the following characteristics:\n\n- There are float, integer, boolean, categorical and datetime values.\n- There are some variables that have missing data. In particular, all\n the data related to the placement details is missing in the rows\n where the student was not placed.\n\nT There are float, integer, boolean, categorical and datetime values.\n- There are some variables that have missing data. In particular, all\n the data related to the placement details is missing in the rows\n where the student was not placed.\n\nLet us use `TVAE` to learn this data and then sample synthetic data\nabout new students to see how well the model captures the characteristics\nindicated above. In order to do this you will need to:\n\n- Import the `sdv.tabular.TVAE` class and create an instance of it.\n- Call its `fit` method passing our table.\n- Call its `sample` method indicating the number of synthetic rows\n that you want to generate.",
"_____no_output_____"
]
],
[
[
"from sdv.tabular import TVAE\n\nmodel = TVAE()\nmodel.fit(data)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-info\">\n\n**Note**\n\nNotice that the model `fitting` process took care of transforming the\ndifferent fields using the appropriate [Reversible Data\nTransforms](http://github.com/sdv-dev/RDT) to ensure that the data has a\nformat that the underlying TVAESynthesizer class can handle.\n\n</div>\n\n### Generate synthetic data from the model\n\nOnce the modeling has finished you are ready to generate new synthetic\ndata by calling the `sample` method from your model passing the number\nof rows that we want to generate.",
"_____no_output_____"
]
],
[
[
"new_data = model.sample(num_rows=200)",
"_____no_output_____"
]
],
[
[
"This will return a table identical to the one which the model was fitted\non, but filled with new data which resembles the original one.",
"_____no_output_____"
]
],
[
[
"new_data.head()",
"_____no_output_____"
]
],
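[
[
"# Added sketch (not part of the original guide): the optional sampling parameters\n# described in the note below could be combined like this. `batch_size` and\n# `output_file_path` are the names given in that note; exact behaviour depends on\n# the installed SDV version.\nnew_data = model.sample(num_rows=200, batch_size=50, output_file_path='synthetic_students.csv')",
"_____no_output_____"
]
],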
[
[
"<div class=\"alert alert-info\">\n\n**Note**\n\nThere are a number of other parameters in this method that you can use to\noptimize the process of generating synthetic data. Use ``output_file_path``\nto directly write results to a CSV file, ``batch_size`` to break up sampling\ninto smaller pieces & track their progress and ``randomize_samples`` to\ndetermine whether to generate the same synthetic data every time.\nSee the <a href=https://sdv.dev/SDV/api_reference/tabular/api/sdv.tabular.ctgan.TVAE.sample>API Section</a> \nfor more details.\n\n</div>\n\n### Save and Load the model\n\nIn many scenarios it will be convenient to generate synthetic versions\nof your data directly in systems that do not have access to the original\ndata source. For example, if you may want to generate testing data on\nthe fly inside a testing environment that does not have access to your\nproduction database. In these scenarios, fitting the model with real\ndata every time that you need to generate new data is feasible, so you\nwill need to fit a model in your production environment, save the fitted\nmodel into a file, send this file to the testing environment and then\nload it there to be able to `sample` from it.\n\nLet\\'s see how this process works.\n\n#### Save and share the model\n\nOnce you have fitted the model, all you need to do is call its `save`\nmethod passing the name of the file in which you want to save the model.\nNote that the extension of the filename is not relevant, but we will be\nusing the `.pkl` extension to highlight that the serialization protocol\nused is [pickle](https://docs.python.org/3/library/pickle.html).",
"_____no_output_____"
]
],
[
[
"model.save('my_model.pkl')",
"_____no_output_____"
]
],
[
[
"This will have created a file called `my_model.pkl` in the same\ndirectory in which you are running SDV.\n\n<div class=\"alert alert-info\">\n\n**Important**\n\nIf you inspect the generated file you will notice that its size is much\nsmaller than the size of the data that you used to generate it. This is\nbecause the serialized model contains **no information about the\noriginal data**, other than the parameters it needs to generate\nsynthetic versions of it. This means that you can safely share this\n`my_model.pkl` file without the risc of disclosing any of your real\ndata!\n\n</div>\n\n#### Load the model and generate new data\n\nThe file you just generated can be sent over to the system where the\nsynthetic data will be generated. Once it is there, you can load it\nusing the `TVAE.load` method, and then you are ready to sample new data\nfrom the loaded instance:",
"_____no_output_____"
]
],
[
[
"loaded = TVAE.load('my_model.pkl')\nnew_data = loaded.sample(num_rows=200)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-warning\">\n\n**Warning**\n\nNotice that the system where the model is loaded needs to also have\n`sdv` and `tvae` installed, otherwise it will not be able to load the\nmodel and use it.\n\n</div>\n\n### Specifying the Primary Key of the table\n\nOne of the first things that you may have noticed when looking at the demo\ndata is that there is a `student_id` column which acts as the primary\nkey of the table, and which is supposed to have unique values. Indeed,\nif we look at the number of times that each value appears, we see that\nall of them appear at most once:",
"_____no_output_____"
]
],
[
[
"data.student_id.value_counts().max()",
"_____no_output_____"
]
],
[
[
"However, if we look at the synthetic data that we generated, we observe\nthat there are some values that appear more than once:",
"_____no_output_____"
]
],
[
[
"new_data[new_data.student_id == new_data.student_id.value_counts().index[0]]",
"_____no_output_____"
]
],
[
[
"This happens because the model was not notified at any point about the\nfact that the `student_id` had to be unique, so when it generates new\ndata it will provoke collisions sooner or later. In order to solve this,\nwe can pass the argument `primary_key` to our model when we create it,\nindicating the name of the column that is the index of the table.",
"_____no_output_____"
]
],
[
[
"model = TVAE(\n primary_key='student_id'\n)\nmodel.fit(data)\nnew_data = model.sample(200)\nnew_data.head()",
"_____no_output_____"
]
],
[
[
"As a result, the model will learn that this column must be unique and\ngenerate a unique sequence of values for the column:",
"_____no_output_____"
]
],
[
[
"new_data.student_id.value_counts().max()",
"_____no_output_____"
]
],
[
[
"### Anonymizing Personally Identifiable Information (PII)\n\nThere will be many cases where the data will contain Personally\nIdentifiable Information which we cannot disclose. In these cases, we\nwill want our Tabular Models to replace the information within these\nfields with fake, simulated data that looks similar to the real one but\ndoes not contain any of the original values.\n\nLet\\'s load a new dataset that contains a PII field, the\n`student_placements_pii` demo, and try to generate synthetic versions of\nit that do not contain any of the PII fields.\n\n<div class=\"alert alert-info\">\n\n**Note**\n\nThe `student_placements_pii` dataset is a modified version of the\n`student_placements` dataset with one new field, `address`, which\ncontains PII information about the students. Notice that this additional\n`address` field has been simulated and does not correspond to data from\nthe real users.\n\n</div>",
"_____no_output_____"
]
],
[
[
"data_pii = load_tabular_demo('student_placements_pii')\ndata_pii.head()",
"_____no_output_____"
]
],
[
[
"If we use our tabular model on this new data we will see how the\nsynthetic data that it generates discloses the addresses from the real\nstudents:",
"_____no_output_____"
]
],
[
[
"model = TVAE(\n primary_key='student_id',\n)\nmodel.fit(data_pii)\nnew_data_pii = model.sample(200)\nnew_data_pii.head()",
"_____no_output_____"
]
],
[
[
"More specifically, we can see how all the addresses that have been\ngenerated actually come from the original dataset:",
"_____no_output_____"
]
],
[
[
"new_data_pii.address.isin(data_pii.address).sum()",
"_____no_output_____"
]
],
[
[
"In order to solve this, we can pass an additional argument\n`anonymize_fields` to our model when we create the instance. This\n`anonymize_fields` argument will need to be a dictionary that contains:\n\n- The name of the field that we want to anonymize.\n- The category of the field that we want to use when we generate fake\n values for it.\n\nThe list complete list of possible categories can be seen in the [Faker\nProviders](https://faker.readthedocs.io/en/master/providers.html) page,\nand it contains a huge list of concepts such as:\n\n- name\n- address\n- country\n- city\n- ssn\n- credit_card_number\n- credit_card_expire\n- credit_card_security_code\n- email\n- telephone\n- \\...\n\nIn this case, since the field is an address, we will pass a\ndictionary indicating the category `address`",
"_____no_output_____"
]
],
[
[
"model = TVAE(\n primary_key='student_id',\n anonymize_fields={\n 'address': 'address'\n }\n)\nmodel.fit(data_pii)",
"_____no_output_____"
]
],
[
[
"As a result, we can see how the real `address` values have been replaced\nby other fake addresses:",
"_____no_output_____"
]
],
[
[
"new_data_pii = model.sample(200)\nnew_data_pii.head()",
"_____no_output_____"
]
],
[
[
"Which means that none of the original addresses can be found in the\nsampled data:",
"_____no_output_____"
]
],
[
[
"data_pii.address.isin(new_data_pii.address).sum()",
"_____no_output_____"
]
],
[
[
"As we can see, in this case these modifications changed the obtained\nresults slightly, but they did neither introduce dramatic changes in the\nperformance.",
"_____no_output_____"
],
[
"### Conditional Sampling\n\nAs the name implies, conditional sampling allows us to sample from a conditional distribution using the `TVAE` model, which means we can generate only values that satisfy certain conditions. These conditional values can be passed to the `sample_conditions` method as a list of `sdv.sampling.Condition` objects or to the `sample_remaining_columns` method as a dataframe. \n\nWhen specifying a `sdv.sampling.Condition` object, we can pass in the desired conditions as a dictionary, as well as specify the number of desired rows for that condition.",
"_____no_output_____"
]
],
[
[
"from sdv.sampling import Condition\n\ncondition = Condition({\n 'gender': 'M'\n}, num_rows=5)\n\nmodel.sample_conditions(conditions=[condition])",
"_____no_output_____"
]
],
[
[
"It's also possible to condition on multiple columns, such as `gender = M, 'experience_years': 0`.",
"_____no_output_____"
]
],
[
[
"condition = Condition({\n 'gender': 'M',\n 'experience_years': 0\n}, num_rows=5)\n\nmodel.sample_conditions(conditions=[condition])",
"_____no_output_____"
]
],
[
[
"In the `sample_remaining_columns` method, `conditions` is passed as a dataframe. In that case, the model will generate one sample for each row of the dataframe, sorted in the same order. Since the model already knows how many samples to generate, passing it as a parameter is unnecessary. For example, if we want to generate three samples where `gender = M` and three samples with `gender = F`, we can do the following: ",
"_____no_output_____"
]
],
[
[
"import pandas as pd \n\nconditions = pd.DataFrame({\n 'gender': ['M', 'M', 'M', 'F', 'F', 'F'],\n})\nmodel.sample_remaining_columns(conditions)",
"_____no_output_____"
]
],
[
[
"`TVAE` also supports conditioning on continuous values, as long as the values are within the range of seen numbers. For example, if all the values of the dataset are within 0 and 1, `TVAE` will not be able to set this value to 1000.",
"_____no_output_____"
]
],
[
[
"condition = Condition({\n 'degree_perc': 70.0\n}, num_rows=5)\n\nmodel.sample_conditions(conditions=[condition])",
"_____no_output_____"
]
],
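[
[
"# Added sketch (not part of the original guide): if valid rows are hard to find,\n# the rejection-sampling parameters named in the note below, `max_tries` and\n# `batch_size_per_try`, can be increased. The values here are illustrative only.\ncondition = Condition({'degree_perc': 70.0}, num_rows=5)\nmodel.sample_conditions(conditions=[condition], max_tries=500, batch_size_per_try=1000)",
"_____no_output_____"
]
],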
[
[
"<div class=\"alert alert-info\">\n\n**Note**\n \nCurrently, conditional sampling works through a rejection sampling process, where\nrows are sampled repeatedly until one that satisfies the conditions is found.\nIn case you are not able to sample enough valid rows, update the related parameters:\nincreasing ``max_tries`` or increasing ``batch_size_per_try``.\nMore information about these paramters can be found in the\n<a href=https://sdv.dev/SDV/api_reference/tabular/api/sdv.tabular.ctgan.TVAE.sample_conditions.html> API section</a>.\n\nIf you have many conditions that cannot easily be satisified, consider switching\nto the <a href=https://sdv.dev/SDV/user_guides/single_table/gaussian_copula.html>GaussianCopula model</a>, which is able to handle conditional\nsampling more efficiently.\n\n\n</div>",
"_____no_output_____"
],
[
"### How do I specify constraints?\n\nIf you look closely at the data you may notice that some properties were\nnot completely captured by the model. For example, you may have seen\nthat sometimes the model produces an `experience_years` number greater\nthan `0` while also indicating that `work_experience` is `False`. These\ntypes of properties are what we call `Constraints` and can also be\nhandled using `SDV`. For further details about them please visit the\n[Handling Constraints](04_Handling_Constraints.ipynb) guide.\n\n### Can I evaluate the Synthetic Data?\n\nA very common question when someone starts using **SDV** to generate\nsynthetic data is: *\\\"How good is the data that I just generated?\\\"*\n\nIn order to answer this question, **SDV** has a collection of metrics\nand tools that allow you to compare the *real* that you provided and the\n*synthetic* data that you generated using **SDV** or any other tool.\n\nYou can read more about this in the [Evaluating Synthetic Data Generators](\n05_Evaluating_Synthetic_Data_Generators.ipynb) guide.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d055a8aa3e2004627b93a5051da14cfd8707b344 | 535,600 | ipynb | Jupyter Notebook | Python4Scientists_Lesson1.ipynb | cordmaur/PythonForScientists | 5d1ea17e81a7828317d1773c803f1d8c02bd2e8d | [
"MIT"
] | 3 | 2021-11-05T01:59:26.000Z | 2022-03-11T17:51:05.000Z | Python4Scientists_Lesson1.ipynb | cordmaur/PythonForScientists | 5d1ea17e81a7828317d1773c803f1d8c02bd2e8d | [
"MIT"
] | null | null | null | Python4Scientists_Lesson1.ipynb | cordmaur/PythonForScientists | 5d1ea17e81a7828317d1773c803f1d8c02bd2e8d | [
"MIT"
] | null | null | null | 184.371773 | 397,198 | 0.897196 | [
[
[
"<a href=\"https://colab.research.google.com/github/cordmaur/PythonForScientists/blob/main/Python4Scientists_Lesson1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Lesson 1 - Introduction",
"_____no_output_____"
],
[
"## Getting to know the Notebook\n",
"_____no_output_____"
],
[
"Two types of cells:\n\n\n* Code cells\n* Text cells\n\n",
"_____no_output_____"
],
[
"Hi hello!\n",
"_____no_output_____"
],
[
"Hi hello. (shift + enter = executes)",
"_____no_output_____"
],
[
"This is a **text** cell. It can be formatted with **images**, **HTML**, **LaTeX**. <br> \nFor example **LaTeX**:<br>\n$Y_t - Y_{t-1} = \\rho Y_{t-1} - Y_{t-1} + \\epsilon $\n\n$\\Delta Y_t = (\\rho - 1) Y_{t-1} + \\epsilon$\n\n**Image**: <br>\n",
"_____no_output_____"
]
],
[
[
"# Header 1\n## Section 1.1\n### Sub-section 1.1.1\n#### And we can continue ",
"_____no_output_____"
],
[
"# this is number\n# comment\n5",
"_____no_output_____"
],
[
"6+2",
"_____no_output_____"
],
[
"2 + 2",
"_____no_output_____"
],
[
"5 + 2",
"_____no_output_____"
],
[
"# Pay attention to the execution order!\n5 / 2",
"_____no_output_____"
]
],
[
[
"<b>Note</b> It has some differences to the standard implementation of Jupyter Notebook (ex. shortcuts)\n\n---\n",
"_____no_output_____"
],
[
"## Basic Types (part 1)",
"_____no_output_____"
],
[
"### Numbers",
"_____no_output_____"
]
],
[
[
"# integers\n265",
"_____no_output_____"
],
[
"# Real (called float)\n235.45",
"_____no_output_____"
],
[
"# Binary (called Boolean)\nTrue, False",
"_____no_output_____"
],
[
"# complex\n2 + 4j",
"_____no_output_____"
],
[
"# function(123123)\ntype(2 + 4j)",
"_____no_output_____"
],
[
"type(2), type(2.)",
"_____no_output_____"
],
[
"type(3/2)",
"_____no_output_____"
],
[
"3/2",
"_____no_output_____"
]
],
[
[
"#### Operations\nAll arithmetic operators: \n* +, -, *, /\n* %, **, //\n\n\n",
"_____no_output_____"
]
],
[
[
"# % -> Modulus operator\nprint(13/5)\nprint(13%5)",
"2.6\n3\n"
],
[
"# // -> Floor division\n13//2",
"_____no_output_____"
],
[
"# ** -> expoent\n3**3",
"_____no_output_____"
],
[
"# operators precedence\nprint( 2*2**2 )\nprint( (2*2)**2 )",
"8\n16\n"
]
],
[
[
"**Note:** Don't use square brackets [ ].or curly brackets to write expressions ",
"_____no_output_____"
],
[
"#### Comparison Operators\n==, !=, >, <, >=, <=",
"_____no_output_____"
]
],
[
[
"# the result is always a boolean\n2 == 3",
"_____no_output_____"
],
[
"1>2",
"_____no_output_____"
],
[
"int(True)",
"_____no_output_____"
],
[
"float(2)",
"_____no_output_____"
],
[
"123 >= 122.99",
"_____no_output_____"
],
[
"# comparing two objects\n123 == \"123\", 123 != \"123\"",
"_____no_output_____"
],
[
"int(\"234\")",
"_____no_output_____"
],
[
"# Remove int to raise error\n123 <= int(\"234\")",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"#### Logical Operators\nAlways compare booleans <br>\nand, or, not",
"_____no_output_____"
]
],
[
[
"not True",
"_____no_output_____"
],
[
"2 < 5 and (3 < 4)",
"_____no_output_____"
],
[
"not (2 > 5) or (3 < 4)",
"_____no_output_____"
]
],
[
[
"https://www.programiz.com/python-programming/precedence-associativity",
"_____no_output_____"
]
],
[
[
"# it doesn't matter the precedence\n# True and True or False and not False",
"_____no_output_____"
]
],
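[
[
"# Added example: evaluating the expression from the cell above.\n# not binds tightest, then and, then or, so the explicit grouping below is equivalent.\nprint(True and True or False and not False)\nprint((True and True) or (False and (not False)))",
"_____no_output_____"
]
],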
[
[
"#### Bitwise operators\n\n* & - AND\n* | - OR\n* ^ - XOR\n* ~ - NOT\n* << Left shift\n* '>>' Right shift\n\nIf you thought you could skip this class....<br>\n\nhttps://medium.com/analytics-vidhya/python-for-geosciences-raster-bit-masks-explained-step-by-step-8620ed27141e\n\n",
"_____no_output_____"
],
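[
"# Added example: the bitwise operators listed above, applied to small integers.\n# The 0b prefix writes a number in binary and bin() displays the binary result.\na = 0b1100\nb = 0b1010\nprint(bin(a & b))   # AND  -> 0b1000\nprint(bin(a | b))   # OR   -> 0b1110\nprint(bin(a ^ b))   # XOR  -> 0b110\nprint(bin(a << 1))  # left shift  -> 0b11000\nprint(bin(a >> 1))  # right shift -> 0b110\nprint(~a)           # NOT  -> -13",
"_____no_output_____"
],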
[
"### Strings",
"_____no_output_____"
]
],
[
[
"\"Hello World!\"",
"_____no_output_____"
],
[
"\"5 + 2\"",
"_____no_output_____"
],
[
"\"Hello\" + \" world!\"",
"_____no_output_____"
],
[
"\"Hello\" == \"Hello!\"",
"_____no_output_____"
],
[
"# check alphabetical order\n\"Jean\" > \"Albin\"",
"_____no_output_____"
],
[
"3 == \"3\"",
"_____no_output_____"
],
[
"# Some operations are note defined\n# \"Hello\" - \"H\"\n\"Hello\" < str(3)",
"_____no_output_____"
],
[
"12/ 33333",
"_____no_output_____"
]
],
[
[
"### Variables\n",
"_____no_output_____"
]
],
[
[
"a = 23 \nb = 7.89\ntype(a), type(b)",
"_____no_output_____"
],
[
"a + b",
"_____no_output_____"
],
[
"s = \"Hello world!\"\nprint(s)\ntype(s)",
"Hello world!\n"
],
[
"a < b",
"_____no_output_____"
]
],
[
[
"### Lists\nUp to now, everything could be done with a good calculator... now things will get better.<br>\n\nOrdered, accepts duplicates (diff from set) and can contain different data types.",
"_____no_output_____"
]
],
[
[
"lst = [1, \"Hello\", 3.5, 4, [\"innerList_item1\", \"innerList_item2\"], 6]\nlst",
"_____no_output_____"
],
[
"len(lst)",
"_____no_output_____"
]
],
[
[
"### Indexing/Slicing\nIt's a way to refer to individual/subset of items within a list.<br>\nPython indexing is Zero-Based",
"_____no_output_____"
]
],
[
[
"# Examples of indexing\n# Get the first item and the last item\nlst[0], lst[-1]",
"_____no_output_____"
],
[
"lst[5]",
"_____no_output_____"
],
[
"# Get second and penultimate itens\nlst[1], lst[-2]",
"_____no_output_____"
],
[
"# Examples of slicing\n# OBS: The slicing don't include the last item. So, 0:3 will return the 3 first \n# elements\n\n# [1, 10) - > 1.....9\n\n# Syntax is: list[first index:last_index (excludent)]\nlst[0:3]",
"_____no_output_____"
],
[
"lst[3:6]",
"_____no_output_____"
],
[
"list2 = lst[-2]",
"_____no_output_____"
],
[
"lst[-2][0]",
"_____no_output_____"
],
[
"# It can work with strings, as well\nlst[-2][0][-5:]",
"_____no_output_____"
],
[
"lst[-2][0][:5]",
"_____no_output_____"
]
],
[
[
"### Acessing object members",
"_____no_output_____"
]
],
[
[
"type(lst)",
"_____no_output_____"
],
[
"# crtl+space\nlst.index?",
"_____no_output_____"
],
[
"lst.index(4)",
"_____no_output_____"
],
[
"help(lst.append)",
"Help on built-in function append:\n\nappend(object, /) method of builtins.list instance\n Append object to the end of the list.\n\n"
],
[
"lst.append?",
"_____no_output_____"
],
[
"lst.append('last element')\nlst",
"_____no_output_____"
],
[
"len(lst)",
"_____no_output_____"
],
[
"lst.index('Hello')",
"_____no_output_____"
],
[
"lst[-1] = 'last'",
"_____no_output_____"
],
[
"lst",
"_____no_output_____"
]
],
[
[
"### String Members",
"_____no_output_____"
]
],
[
[
"s.replace('Hello', 'Hi')",
"_____no_output_____"
],
[
"s.lower()",
"_____no_output_____"
],
[
"s.swapcase()",
"_____no_output_____"
],
[
"'234'.isnumeric()",
"_____no_output_____"
],
[
"s.isnumeric?",
"_____no_output_____"
]
],
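[
[
"# Added example: a few more commonly used string members (all built-in str methods).\nprint(s.split(' '))                  # break the string into a list of words\nprint(s.upper())                     # all caps\nprint(' - '.join(['a', 'b', 'c']))   # join a list of strings into one string",
"_____no_output_____"
]
],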
[
[
"We will now see how to control the flow execution of a program. <br>There are important structures missing like **tuples**, **dictionaries**, **sets**, etc... <br>We will come back to them afterwards.",
"_____no_output_____"
],
[
"## Flow control",
"_____no_output_____"
],
[
"### If-statement (if-then-else)\n\n",
"_____no_output_____"
],
[
"**basic usage is:** <br>\nif condition:<br>\n> flow if condition is satisfied<br>\n\nelse:<br>\n> flow if condition is not satisfied<br>\n<br>\n\n**Extended version:**<br>\nif condition:<br>\n> flow if condition is satisfied<br>\n\nelif condition2:<br>\n> flow if condition2 is satisfied<br>\n\nelif condition3:<br>\n> flow if condition3 is satisfied<br>\n\nelse:<br>\n> flow if now condition is satisfied<br>\n\n<br>\n<br>\nCondition is always a <b>boolean</b>",
"_____no_output_____"
]
],
[
[
"# indent\n\nx = 18276748451\n\nif x % 2 == 0:\n print(x)\n print('This number is even')\n\nelse:\n print(x)\n print('This number is odd')\n\n",
"18276748451\nThis number is odd\n"
],
[
"x = input(\"Please, enter an integer:\")\n\n# The result of the input function is always a string.\n# We have to convert it to an integer before proceeding.\nx = int(x)\n\nif x < 0:\n print('Negative')\n\nelif x > 0:\n print('Positive')\n\nelse:\n print('Zero')\n\nprint('finished')",
"Please, enter an integer:2\nPositive\nfinished\n"
]
],
[
[
"### While statement\n",
"_____no_output_____"
],
[
"\nwhile condition (is met):\n> do something",
"_____no_output_____"
]
],
[
[
"# good to count\nstart = 1\nend = 1000\n\nwhile start <= end:\n print(start)\n start = start + 1",
"_____no_output_____"
],
[
"# combine flow control and loops (printing just numbers divisable by 3)\ni = 0\nwhile i <= 100:\n if i % 3 == 0:\n print(i)\n\n i = i + 1",
"_____no_output_____"
],
[
"# Create a list with number divisible by 3 from 0 to 100\ncurrent_number = 0\nlst = []\n\nwhile current_number < 100:\n if current_number%3 == 0:\n lst.append(current_number)\n\n current_number += 1\n\nstr(lst)",
"_____no_output_____"
],
[
"# Create a list with the 10 first odd numbers?\ncurrent_number = 0\nlst = []\n\nwhile len(lst) < 10:\n if current_number%2 != 0:\n lst.append(current_number)\n\n current_number += 1\n\nlst",
"_____no_output_____"
],
[
"# New we can iterate through a list (old-style)\n# Calculate the square\ni = 0\nwhile i < len(lst):\n print(lst[i]**2)\n i += 1",
"0\n9\n36\n81\n144\n225\n324\n441\n576\n729\n900\n1089\n1296\n1521\n1764\n2025\n2304\n2601\n2916\n3249\n3600\n3969\n4356\n4761\n5184\n5625\n6084\n6561\n7056\n7569\n8100\n8649\n9216\n9801\n"
]
],
[
[
"### For statement",
"_____no_output_____"
],
[
"**Basic usage:**<br>\nfor variable in \"list\" (Iterable):\n> do something",
"_____no_output_____"
]
],
[
[
"# to calculate the square of these...\nfor anything in lst:\n print(anything/2)",
"0.0\n1.5\n3.0\n4.5\n6.0\n7.5\n9.0\n10.5\n12.0\n13.5\n15.0\n16.5\n18.0\n19.5\n21.0\n22.5\n24.0\n25.5\n27.0\n28.5\n30.0\n31.5\n33.0\n34.5\n36.0\n37.5\n39.0\n40.5\n42.0\n43.5\n45.0\n46.5\n48.0\n49.5\n"
]
],
[
[
"That's something different from older (lower level) languages like C, C++, Pascal, Fortran, etc. <br>\n**Note: There is no condition in Python's `for statement`**\n\n",
"_____no_output_____"
]
],
[
[
"# range(start, end, step)\nfor i in range(10, 0, -2):\n print(i)",
"10\n8\n6\n4\n2\n"
]
],
[
[
"## Exercise",
"_____no_output_____"
],
[
"We have the precipitation for one month and corresponding days.",
"_____no_output_____"
]
],
[
[
"import random\nrandom.randint?",
"_____no_output_____"
],
[
"# create the days and daily rain\nrandom.seed(1)\n\ndaily_rain = []\nday_of_month = []\n\nfor i in range(1, 32, 1):\n day_of_month.append(i)\n daily_rain.append(random.randint(0, 100))\n\nstr(day_of_month), str(daily_rain)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nplt.figure(figsize=(18, 9))\nplt.bar(day_of_month, daily_rain)",
"_____no_output_____"
]
],
[
[
"Answer these questions:\n* number of days with rain\n* day of the maximum rain and day of the minimum rain\n* total rain\n* mean rain\n* <b>Challenge:</b> order the <b>days</b> according to the rain precipitation. Descending order (from highest to lowest). Ex: [12, 7, ...]",
"_____no_output_____"
],
[
"## Extra - n-dimensional matrices as combination of lists",
"_____no_output_____"
]
],
[
[
"# create a checkerboard\nl1 = [0, 1, 0, 1, 0, 1, 0, 1]\nl2 = [1, 0, 1, 0, 1, 0, 1, 0]\nl3 = [0, 1, 0, 1, 0, 1, 0, 1]\nl4 = [1, 0, 1, 0, 1, 0, 1, 0]\nl5 = [0, 1, 0, 1, 0, 1, 0, 1]\nl6 = [1, 0, 1, 0, 1, 0, 1, 0]\nl7 = [0, 1, 0, 1, 0, 1, 0, 1]\nl8 = [1, 0, 1, 0, 1, 0, 1, 0]\n",
"_____no_output_____"
],
[
"m = [l1, l2, l3, l4, l5, l6, l7, l8]",
"_____no_output_____"
],
[
"m",
"_____no_output_____"
],
[
"m[2][2]",
"_____no_output_____"
],
[
"type(m[2])",
"_____no_output_____"
],
[
"plt.imshow(m, cmap='hot')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"size = 12\nm = []\nfor i in range(size): # lines\n line = []\n for j in range(size): # columns\n line.append(i%2 == j%2)\n m.append(line)\n\nplt.imshow(m, cmap='hot')",
"_____no_output_____"
],
[
"linha = []\ni = 0\n\nwhile i < 256:\n linha.append(i)\n i = i + 1\n\nstr(linha)",
"_____no_output_____"
],
[
"m = []\ni = 0\nwhile i < 256:\n m.append(linha)\n i = i + 1",
"_____no_output_____"
],
[
"plt.imshow(m, cmap='hot')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d055ba97f1d6ce9acdb49cb0bba90580df3bf843 | 396,288 | ipynb | Jupyter Notebook | DARecNet-BS/Pytorch_Salinas.ipynb | Anysomeday/DARecNet-BS | b121d60e44bdadc2de98cd0d9b2672840fa96c62 | [
"MIT"
] | 29 | 2020-08-18T18:10:22.000Z | 2022-03-28T07:24:17.000Z | DARecNet-BS/Pytorch_Salinas.ipynb | Anysomeday/DARecNet-BS | b121d60e44bdadc2de98cd0d9b2672840fa96c62 | [
"MIT"
] | 1 | 2021-11-02T13:26:14.000Z | 2021-11-02T13:26:14.000Z | DARecNet-BS/Pytorch_Salinas.ipynb | Anysomeday/DARecNet-BS | b121d60e44bdadc2de98cd0d9b2672840fa96c62 | [
"MIT"
] | 3 | 2021-01-03T14:48:32.000Z | 2021-07-22T11:57:39.000Z | 311.058085 | 167,686 | 0.901304 | [
[
[
"!pip install kornia",
"Collecting kornia\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/37/41/baedba753124f9e2b1f716cb2924424dc91b6dc80a7532ec299c83708910/kornia-0.3.0-py2.py3-none-any.whl (158kB)\n\r\u001b[K |██ | 10kB 23.6MB/s eta 0:00:01\r\u001b[K |████▏ | 20kB 3.0MB/s eta 0:00:01\r\u001b[K |██████▏ | 30kB 4.0MB/s eta 0:00:01\r\u001b[K |████████▎ | 40kB 2.9MB/s eta 0:00:01\r\u001b[K |██████████▍ | 51kB 3.2MB/s eta 0:00:01\r\u001b[K |████████████▍ | 61kB 3.8MB/s eta 0:00:01\r\u001b[K |██████████████▌ | 71kB 4.1MB/s eta 0:00:01\r\u001b[K |████████████████▌ | 81kB 4.2MB/s eta 0:00:01\r\u001b[K |██████████████████▋ | 92kB 4.7MB/s eta 0:00:01\r\u001b[K |████████████████████▊ | 102kB 4.7MB/s eta 0:00:01\r\u001b[K |██████████████████████▊ | 112kB 4.7MB/s eta 0:00:01\r\u001b[K |████████████████████████▉ | 122kB 4.7MB/s eta 0:00:01\r\u001b[K |██████████████████████████▉ | 133kB 4.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████████ | 143kB 4.7MB/s eta 0:00:01\r\u001b[K |███████████████████████████████ | 153kB 4.7MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 163kB 4.7MB/s \n\u001b[?25hRequirement already satisfied: torch==1.5.0 in /usr/local/lib/python3.6/dist-packages (from kornia) (1.5.0+cu101)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from kornia) (1.18.3)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch==1.5.0->kornia) (0.16.0)\nInstalling collected packages: kornia\nSuccessfully installed kornia-0.3.0\n"
],
[
"import numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport sys\nimport os\nimport torch.optim as optim\nimport torchvision\nfrom torchvision import datasets, transforms\nfrom scipy import io \nimport torch.utils.data\nimport scipy\nfrom scipy.stats import entropy\nimport matplotlib.pyplot as plt\nfrom torch.utils.data import Dataset, DataLoader\nimport math\nfrom sklearn.metrics import mean_squared_error\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")",
"_____no_output_____"
],
[
"!pip install -U spectral\n!pip install pytorch_ssim\nfrom pytorch_ssim import ssim\n\nif not (os.path.isfile('/content/Salinas_corrected.mat')):\n !wget https://github.com/gokriznastic/HybridSN/raw/master/data/Salinas_corrected.mat\nif not (os.path.isfile('/content/Salinas_gt.mat')):\n !wget https://github.com/gokriznastic/HybridSN/raw/master/data/Salinas_gt.mat\n",
"Collecting spectral\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/f5/ff/f6e238a941ed55079526996fee315fbee5167aaa64de3e64980637ac8f38/spectral-0.21-py3-none-any.whl (187kB)\n\r\u001b[K |█▊ | 10kB 25.6MB/s eta 0:00:01\r\u001b[K |███▌ | 20kB 3.3MB/s eta 0:00:01\r\u001b[K |█████▎ | 30kB 4.2MB/s eta 0:00:01\r\u001b[K |███████ | 40kB 3.0MB/s eta 0:00:01\r\u001b[K |████████▊ | 51kB 3.5MB/s eta 0:00:01\r\u001b[K |██████████▌ | 61kB 4.1MB/s eta 0:00:01\r\u001b[K |████████████▎ | 71kB 4.4MB/s eta 0:00:01\r\u001b[K |██████████████ | 81kB 4.6MB/s eta 0:00:01\r\u001b[K |███████████████▊ | 92kB 5.1MB/s eta 0:00:01\r\u001b[K |█████████████████▌ | 102kB 5.0MB/s eta 0:00:01\r\u001b[K |███████████████████▎ | 112kB 5.0MB/s eta 0:00:01\r\u001b[K |█████████████████████ | 122kB 5.0MB/s eta 0:00:01\r\u001b[K |██████████████████████▊ | 133kB 5.0MB/s eta 0:00:01\r\u001b[K |████████████████████████▌ | 143kB 5.0MB/s eta 0:00:01\r\u001b[K |██████████████████████████▎ | 153kB 5.0MB/s eta 0:00:01\r\u001b[K |████████████████████████████ | 163kB 5.0MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▊ | 174kB 5.0MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▌| 184kB 5.0MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 194kB 5.0MB/s \n\u001b[?25hRequirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from spectral) (1.18.3)\nInstalling collected packages: spectral\nSuccessfully installed spectral-0.21\nCollecting pytorch_ssim\n Downloading https://files.pythonhosted.org/packages/dc/78/f6cfa15ff7c66de5bb0873fb4bd699ff8024a0b00a94babbd216e64202b7/pytorch_ssim-0.1.tar.gz\nBuilding wheels for collected packages: pytorch-ssim\n Building wheel for pytorch-ssim (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pytorch-ssim: filename=pytorch_ssim-0.1-cp36-none-any.whl size=2027 sha256=a0a3d1a8ecf577b3d1acecbf082b4c3c618efd8ee14dfdd35886627036c2354a\n Stored in directory: /root/.cache/pip/wheels/86/60/c8/85a73ea90dcf1d39d5d7f94d83988511f0370229dee641bb79\nSuccessfully built pytorch-ssim\nInstalling collected packages: pytorch-ssim\nSuccessfully installed pytorch-ssim-0.1\n--2020-04-30 19:38:29-- https://github.com/gokriznastic/HybridSN/raw/master/data/Salinas_corrected.mat\nResolving github.com (github.com)... 140.82.112.3\nConnecting to github.com (github.com)|140.82.112.3|:443... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://raw.githubusercontent.com/gokriznastic/HybridSN/master/data/Salinas_corrected.mat [following]\n--2020-04-30 19:38:30-- https://raw.githubusercontent.com/gokriznastic/HybridSN/master/data/Salinas_corrected.mat\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 26552770 (25M) [application/octet-stream]\nSaving to: ‘Salinas_corrected.mat’\n\nSalinas_corrected.m 100%[===================>] 25.32M --.-KB/s in 0.1s \n\n2020-04-30 19:38:30 (174 MB/s) - ‘Salinas_corrected.mat’ saved [26552770/26552770]\n\n--2020-04-30 19:38:33-- https://github.com/gokriznastic/HybridSN/raw/master/data/Salinas_gt.mat\nResolving github.com (github.com)... 140.82.112.3\nConnecting to github.com (github.com)|140.82.112.3|:443... connected.\nHTTP request sent, awaiting response... 
302 Found\nLocation: https://raw.githubusercontent.com/gokriznastic/HybridSN/master/data/Salinas_gt.mat [following]\n--2020-04-30 19:38:33-- https://raw.githubusercontent.com/gokriznastic/HybridSN/master/data/Salinas_gt.mat\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 4277 (4.2K) [application/octet-stream]\nSaving to: ‘Salinas_gt.mat’\n\nSalinas_gt.mat 100%[===================>] 4.18K --.-KB/s in 0s \n\n2020-04-30 19:38:34 (68.1 MB/s) - ‘Salinas_gt.mat’ saved [4277/4277]\n\n"
],
[
"from torch.nn import Module, Sequential, Conv2d, ReLU,AdaptiveMaxPool2d, AdaptiveAvgPool2d, \\\n NLLLoss, BCELoss, CrossEntropyLoss, AvgPool2d, MaxPool2d, Parameter, Linear, Sigmoid, Softmax, Dropout, Embedding\nfrom torch.nn import functional as F",
"_____no_output_____"
],
[
"import scipy.io as sio\ndef loadData():\n \n data = sio.loadmat('Salinas_corrected.mat')['salinas_corrected']\n labels = sio.loadmat('Salinas_gt.mat')['salinas_gt']\n \n return data, labels",
"_____no_output_____"
],
[
"def padWithZeros(X, margin=2):\n\n ## From: https://github.com/gokriznastic/HybridSN/blob/master/Hybrid-Spectral-Net.ipynb\n newX = np.zeros((X.shape[0] + 2 * margin, X.shape[1] + 2* margin, X.shape[2]))\n x_offset = margin\n y_offset = margin\n newX[x_offset:X.shape[0] + x_offset, y_offset:X.shape[1] + y_offset, :] = X\n return newX\n\ndef createImageCubes(X, y, windowSize=5, removeZeroLabels = True):\n\n ## From: https://github.com/gokriznastic/HybridSN/blob/master/Hybrid-Spectral-Net.ipynb\n margin = int((windowSize - 1) / 2)\n zeroPaddedX = padWithZeros(X, margin=margin)\n # split patches\n patchesData = np.zeros((X.shape[0] * X.shape[1], windowSize, windowSize, X.shape[2]), dtype=np.uint8)\n patchesLabels = np.zeros((X.shape[0] * X.shape[1]), dtype=np.uint8)\n patchIndex = 0\n for r in range(margin, zeroPaddedX.shape[0] - margin):\n for c in range(margin, zeroPaddedX.shape[1] - margin):\n patch = zeroPaddedX[r - margin:r + margin + 1, c - margin:c + margin + 1] \n patchesData[patchIndex, :, :, :] = patch\n patchesLabels[patchIndex] = y[r-margin, c-margin]\n patchIndex = patchIndex + 1\n if removeZeroLabels:\n patchesData = patchesData[patchesLabels>0,:,:,:]\n patchesLabels = patchesLabels[patchesLabels>0]\n patchesLabels -= 1\n return patchesData, patchesLabels\n",
"_____no_output_____"
],
[
"class HyperSpectralDataset(Dataset):\n \"\"\"HyperSpectral dataset.\"\"\"\n\n def __init__(self,data_url,label_url):\n \n self.data = np.array(scipy.io.loadmat('/content/'+data_url.split('/')[-1])['salinas_corrected'])\n self.targets = np.array(scipy.io.loadmat('/content/'+label_url.split('/')[-1])['salinas_gt'])\n self.data, self.targets = createImageCubes(self.data,self.targets, windowSize=5)\n \n \n self.data = torch.Tensor(self.data)\n self.data = self.data.permute(0,3,1,2)\n print(self.data.shape)\n \n\n def __len__(self):\n return self.data.shape[0]\n \n def __getitem__(self, idx):\n \n return self.data[idx,:,:,:] , self.targets[idx]\n",
"_____no_output_____"
],
[
"data_train = HyperSpectralDataset('Salinas_corrected.mat','Salinas_gt.mat')\ntrain_loader = DataLoader(data_train, batch_size=16, shuffle=True)",
"torch.Size([54129, 204, 5, 5])\n"
],
[
"print(data_train.__getitem__(0)[0].shape)\nprint(data_train.__len__())",
"torch.Size([204, 5, 5])\n54129\n"
],
[
"class PAM_Module(Module):\n \"\"\" Position attention module https://github.com/junfu1115/DANet/blob/master/encoding/nn/attention.py\"\"\"\n #Ref from SAGAN\n def __init__(self, in_dim):\n super(PAM_Module, self).__init__()\n self.chanel_in = in_dim\n\n self.query_conv = Conv2d(in_channels=in_dim, out_channels=in_dim//8, kernel_size=1)\n self.key_conv = Conv2d(in_channels=in_dim, out_channels=in_dim//8, kernel_size=1)\n self.value_conv = Conv2d(in_channels=in_dim, out_channels=in_dim, kernel_size=1)\n \n self.gamma = Parameter(torch.zeros(1))\n\n self.softmax = Softmax(dim=-1)\n def forward(self, x):\n \"\"\"\n inputs :\n x : input feature maps( B X C X H X W)\n returns :\n out : attention value + input feature\n attention: B X (HxW) X (HxW)\n \"\"\"\n m_batchsize, C, height, width = x.size()\n proj_query = self.query_conv(x).view(m_batchsize, -1, width*height).permute(0, 2, 1)\n proj_key = self.key_conv(x).view(m_batchsize, -1, width*height)\n energy = torch.bmm(proj_query, proj_key)\n attention = self.softmax(energy)\n proj_value = self.value_conv(x).view(m_batchsize, -1, width*height)\n\n out = torch.bmm(proj_value, attention.permute(0, 2, 1))\n out = out.view(m_batchsize, C, height, width)\n\n out = self.gamma*out + x\n #out = F.avg_pool2d(out, out.size()[2:4])\n \n return out\n\n\nclass CAM_Module(Module):\n \"\"\" Channel attention module https://github.com/junfu1115/DANet/blob/master/encoding/nn/attention.py\"\"\"\n def __init__(self):\n super(CAM_Module, self).__init__()\n #self.chanel_in = in_dim\n \n\n\n self.gamma = Parameter(torch.zeros(1))\n self.softmax = Softmax(dim=-1)\n def forward(self,x):\n \"\"\"\n inputs :\n x : input feature maps( B X C X H X W)\n returns :\n out : attention value + input feature\n attention: B X C X C\n \"\"\"\n m_batchsize, C, height, width = x.size()\n proj_query = x.view(m_batchsize, C, -1)\n proj_key = x.view(m_batchsize, C, -1).permute(0, 2, 1)\n energy = torch.bmm(proj_query, proj_key)\n energy_new = torch.max(energy, -1, keepdim=True)[0].expand_as(energy)-energy\n attention = self.softmax(energy_new)\n proj_value = x.view(m_batchsize, C, -1)\n\n out = torch.bmm(attention, proj_value)\n out = out.view(m_batchsize, C, height, width)\n\n out = self.gamma*out + x\n #out = F.avg_pool2d(out, out.size()[2:4])\n \n \n return out\n",
"_____no_output_____"
],
[
"class RecNet(nn.Module):\n def __init__(self):\n super(RecNet, self).__init__()\n self.conv3d_1 = nn.Sequential(nn.Conv3d(1, 128, (1, 3, 3), 1), \n nn.BatchNorm3d(128),\n nn.PReLU())\n \n self.conv3d_2 = nn.Sequential(nn.Conv3d(128, 64, (1, 3, 3), 1),\n nn.BatchNorm3d(64),\n nn.PReLU())\n \n \n self.pool3d = nn.MaxPool3d((1, 1, 1), (1, 1, 1))\n \n self.deconv3d_1 = nn.Sequential(nn.ConvTranspose3d(64, 128, (1, 3, 3), 1),\n nn.BatchNorm3d(128),\n nn.PReLU())\n \n self.deconv3d_2 = nn.Sequential(nn.ConvTranspose3d(128, 1, (1, 3, 3), 1),\n nn.BatchNorm3d(1))\n \n\n def forward(self, x):\n x = self.conv3d_1(x)\n x = self.conv3d_2(x)\n \n x = self.pool3d(x)\n \n x = self.deconv3d_1(x)\n x = self.deconv3d_2(x)\n \n \n return x.squeeze(1)",
"_____no_output_____"
],
[
"class DANet(Module):\n def __init__(self):\n super(DANet,self).__init__()\n self.PAM_Module = PAM_Module(204)\n self.CAM_Module = CAM_Module()\n self.RecNet = RecNet()\n def forward(self,x):\n \n P = self.PAM_Module(x)\n C = self.CAM_Module(x)\n #B,Ch,H,W = P.size()\n J = P + C\n J = J.unsqueeze(1)\n ret = self.RecNet(J)\n \n \n \n return ret\n \n \ndanet_model = DANet().to(device)",
"_____no_output_____"
],
[
"\nfrom torchsummary import summary\nsummary(danet_model,input_size=(204,5,5))\n",
"----------------------------------------------------------------\n Layer (type) Output Shape Param #\n================================================================\n Conv2d-1 [-1, 25, 5, 5] 5,125\n Conv2d-2 [-1, 25, 5, 5] 5,125\n Softmax-3 [-1, 25, 25] 0\n Conv2d-4 [-1, 204, 5, 5] 41,820\n PAM_Module-5 [-1, 204, 5, 5] 0\n Softmax-6 [-1, 204, 204] 0\n CAM_Module-7 [-1, 204, 5, 5] 0\n Conv3d-8 [-1, 128, 204, 3, 3] 1,280\n BatchNorm3d-9 [-1, 128, 204, 3, 3] 256\n PReLU-10 [-1, 128, 204, 3, 3] 1\n Conv3d-11 [-1, 64, 204, 1, 1] 73,792\n BatchNorm3d-12 [-1, 64, 204, 1, 1] 128\n PReLU-13 [-1, 64, 204, 1, 1] 1\n MaxPool3d-14 [-1, 64, 204, 1, 1] 0\n ConvTranspose3d-15 [-1, 128, 204, 3, 3] 73,856\n BatchNorm3d-16 [-1, 128, 204, 3, 3] 256\n PReLU-17 [-1, 128, 204, 3, 3] 1\n ConvTranspose3d-18 [-1, 1, 204, 5, 5] 1,153\n BatchNorm3d-19 [-1, 1, 204, 5, 5] 2\n RecNet-20 [-1, 204, 5, 5] 0\n================================================================\nTotal params: 202,796\nTrainable params: 202,796\nNon-trainable params: 0\n----------------------------------------------------------------\nInput size (MB): 0.02\nForward/backward pass size (MB): 11.72\nParams size (MB): 0.77\nEstimated Total Size (MB): 12.51\n----------------------------------------------------------------\n"
],
[
"!nvidia-smi",
"Tue Apr 21 07:04:08 2020 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 440.64.00 Driver Version: 418.67 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\n| N/A 73C P0 76W / 149W | 650MiB / 11441MiB | 0% Default |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n+-----------------------------------------------------------------------------+\n"
],
[
"#model = BSNET_Conv().to(device) \n\noptimizer = optim.SGD(danet_model.parameters(), lr=0.005, momentum=0.9)optimizer = optim.SGD(danet_model.parameters(), lr=0.005, momentum=0.9)",
"_____no_output_____"
],
[
"top = 20",
"_____no_output_____"
],
[
"import skimage\nimport kornia\nglobal bsnlist\nssim = kornia.losses.SSIM(5, reduction='none')\npsnr = kornia.losses.PSNRLoss(2500)\nfrom skimage import measure\nssim_list = []\npsnr_list = []\nl1_list = []\nchannel_weight_list = []\ndef train(epoch): \n danet_model.train()\n ENTROPY = torch.zeros(204)\n \n for batch_idx, (data, __) in enumerate(train_loader):\n data = data.to(device)\n optimizer.zero_grad()\n output = danet_model(data)\n loss = F.l1_loss(output,data)\n loss.backward()\n optimizer.step()\n D = output.detach().cpu().numpy()\n for i in range(0,204):\n\n ENTROPY[i]+=skimage.measure.shannon_entropy(D[:,i,:,:])\n \n if batch_idx % (0.5*len(train_loader)) == 0:\n\n\n\n L1 = loss.item()\n print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n epoch, batch_idx * len(data), len(train_loader.dataset),\n 100. * batch_idx / len(train_loader),L1))\n l1_list.append(L1)\n ssim_val = torch.mean(ssim(data,output))\n print(\"SSIM: {}\".format(ssim_val))\n ssim_list.append(ssim_val)\n psnr_val = psnr(data,output)\n print(\"PSNR: {}\".format(psnr_val))\n psnr_list.append(psnr_val)\n \n \n ENTROPY = np.array(ENTROPY)\n bsnlist = np.asarray(ENTROPY.argsort()[-top:][::-1])\n print('Top {} bands with Entropy ->'.format(top),list(bsnlist))\n \n \n\n\nfor epoch in range(0, 10):\n train(epoch)\n ",
"Train Epoch: 0 [0/54129 (0%)]\tLoss: 119.193359\nSSIM: 0.4998215436935425\nPSNR: 25.01896095275879\nTrain Epoch: 0 [27072/54129 (50%)]\tLoss: 53.677807\nSSIM: 0.28785812854766846\nPSNR: 30.45496940612793\nTop 20 bands with Entropy -> [35, 50, 107, 148, 119, 118, 200, 12, 30, 53, 201, 198, 149, 46, 33, 95, 63, 27, 86, 51]\nTrain Epoch: 1 [0/54129 (0%)]\tLoss: 6.626010\nSSIM: 0.030591454356908798\nPSNR: 47.226749420166016\nTrain Epoch: 1 [27072/54129 (50%)]\tLoss: 3.235692\nSSIM: 0.02992197312414646\nPSNR: 56.14051818847656\nTop 20 bands with Entropy -> [87, 119, 67, 31, 100, 47, 23, 12, 42, 27, 21, 55, 96, 26, 116, 40, 98, 22, 74, 123]\nTrain Epoch: 2 [0/54129 (0%)]\tLoss: 2.486887\nSSIM: 0.024928806349635124\nPSNR: 58.09257507324219\nTrain Epoch: 2 [27072/54129 (50%)]\tLoss: 2.271365\nSSIM: 0.0237718615680933\nPSNR: 59.152278900146484\nTop 20 bands with Entropy -> [38, 31, 41, 59, 36, 51, 60, 97, 123, 54, 22, 150, 39, 35, 69, 175, 15, 26, 96, 63]\nTrain Epoch: 3 [0/54129 (0%)]\tLoss: 4.208509\nSSIM: 0.02700652740895748\nPSNR: 54.05061721801758\nTrain Epoch: 3 [27072/54129 (50%)]\tLoss: 1.660342\nSSIM: 0.008402373641729355\nPSNR: 61.472469329833984\nTop 20 bands with Entropy -> [70, 35, 24, 63, 53, 122, 77, 73, 94, 25, 52, 47, 16, 9, 72, 87, 132, 37, 90, 39]\nTrain Epoch: 4 [0/54129 (0%)]\tLoss: 3.070846\nSSIM: 0.02174883708357811\nPSNR: 56.685020446777344\nTrain Epoch: 4 [27072/54129 (50%)]\tLoss: 4.509328\nSSIM: 0.042439140379428864\nPSNR: 52.87211608886719\nTop 20 bands with Entropy -> [94, 39, 86, 30, 38, 42, 48, 45, 102, 31, 58, 62, 34, 69, 68, 47, 16, 74, 53, 37]\nTrain Epoch: 5 [0/54129 (0%)]\tLoss: 2.989486\nSSIM: 0.023492004722356796\nPSNR: 56.89580535888672\nTrain Epoch: 5 [27072/54129 (50%)]\tLoss: 3.006964\nSSIM: 0.015914537012577057\nPSNR: 57.0530891418457\nTop 20 bands with Entropy -> [12, 71, 19, 69, 156, 84, 43, 66, 54, 44, 122, 64, 52, 24, 25, 38, 40, 48, 121, 30]\nTrain Epoch: 6 [0/54129 (0%)]\tLoss: 2.316022\nSSIM: 0.014700568281114101\nPSNR: 59.03305435180664\nTrain Epoch: 6 [27072/54129 (50%)]\tLoss: 1.733813\nSSIM: 0.01625089719891548\nPSNR: 61.26710891723633\nTop 20 bands with Entropy -> [25, 49, 11, 67, 76, 72, 12, 88, 46, 22, 63, 23, 43, 40, 137, 77, 37, 18, 34, 33]\nTrain Epoch: 7 [0/54129 (0%)]\tLoss: 4.902276\nSSIM: 0.040914036333560944\nPSNR: 51.99333953857422\nTrain Epoch: 7 [27072/54129 (50%)]\tLoss: 2.161563\nSSIM: 0.013207027688622475\nPSNR: 59.431915283203125\nTop 20 bands with Entropy -> [45, 24, 68, 31, 32, 100, 66, 53, 57, 72, 27, 58, 156, 160, 40, 18, 121, 92, 39, 77]\nTrain Epoch: 8 [0/54129 (0%)]\tLoss: 2.878366\nSSIM: 0.018673257902264595\nPSNR: 57.6588134765625\nTrain Epoch: 8 [27072/54129 (50%)]\tLoss: 3.873805\nSSIM: 0.018643297255039215\nPSNR: 55.06043243408203\nTop 20 bands with Entropy -> [42, 76, 28, 15, 69, 41, 52, 50, 58, 102, 63, 70, 48, 74, 46, 72, 89, 64, 73, 78]\nTrain Epoch: 9 [0/54129 (0%)]\tLoss: 1.735221\nSSIM: 0.017572268843650818\nPSNR: 60.67157745361328\nTrain Epoch: 9 [27072/54129 (50%)]\tLoss: 3.928280\nSSIM: 0.02520720288157463\nPSNR: 55.20056915283203\nTop 20 bands with Entropy -> [64, 75, 11, 62, 87, 59, 44, 61, 53, 66, 42, 33, 71, 50, 12, 95, 138, 63, 78, 29]\n"
],
[
"x,xx,xxx = psnr_list,ssim_list,l1_list\nprint(len(x)),print(len(xx)),print(len(xxx))\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"20\n20\n20\n"
],
[
"np.save('psnr_SV.npy',np.asarray(x))\nnp.save('ssim_SV.npy',np.asarray(xx))\nnp.save('l1_SV.npy',np.asarray(xxx))",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,10))\nplt.xlabel('Epoch',fontsize=50)\nplt.ylabel('PSNR',fontsize=50)\nplt.xticks(fontsize=40)\nplt.yticks(np.arange(0,100 , 10.0),fontsize=40)\nplt.ylim(10,100)\nplt.plot(x,linewidth=5.0)\nplt.savefig('PSNR-SV.pdf')\nplt.show()\n\n",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,10))\nplt.xlabel('Epoch',fontsize=50)\nplt.ylabel('SSIM',fontsize=50)\nplt.xticks(fontsize=40)\nplt.yticks(fontsize=40)\nplt.ylim(0,0.6)\nplt.plot(xx,linewidth=5.0)\nplt.savefig('SSIM-SV.pdf')\n\nplt.show()\n",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,10))\nplt.xlabel('Epoch',fontsize=50)\nplt.ylabel('L1 Reconstruction loss',fontsize=50)\nplt.xticks(fontsize=40)\nplt.yticks(fontsize=40)\nplt.ylim(0,160)\nplt.plot(xxx,linewidth=5.0)\nplt.savefig('L1-SV.pdf')\nplt.show()\n\n",
"_____no_output_____"
],
[
"from google.colab import files\nfiles.download('SSIM-SV.pdf')\nfiles.download('PSNR-SV.pdf')\nfiles.download('L1-SV.pdf')",
"_____no_output_____"
],
[
"!wget https://raw.githubusercontent.com/ucalyptus/Double-Branch-Dual-Attention-Mechanism-Network/master/SV.csv",
"_____no_output_____"
],
[
"dabsrecnet = [24, 42, 63, 77, 57, 49, 35, 68, 64, 69, 50, 44, 43, 15, 90, 37, 48, 72, 54, 79]\nbsnetconv = [116,153,19,189,97,179,171,141,95,144,142,46,104,203,91,18,176,108,150,194]\npca = \t[169,67,168,63,68,78,167,166,165,69,164,163,77,162,70,62,160,161,76,158]\nspabs = [0,79,166,80,203,78,77,76,55,81,97,5,23,75,2,82,56,74,143,85] \nsnmf = [24,1,105,196,203,0,39,116,38,60,89,104,198,147,158,3,146,4,93,88]\nissc = [141,182,106,147,107,146,108,202,203,109,145,148,112,201,110,113,144,149,105,154]\n",
"_____no_output_____"
],
[
"def MeanSpectralDivergence(band_subset):\n\n n_row, n_column, n_band = band_subset.shape\n N = n_row * n_column\n hist = []\n for i in range(n_band):\n hist_, _ = np.histogram(band_subset[:, :, i], 256)\n hist.append(hist_ / N)\n hist = np.asarray(hist)\n hist[np.nonzero(hist <= 0)] = 1e-20\n # entropy_lst = entropy(hist.transpose())\n info_div = 0\n # band_subset[np.nonzero(band_subset <= 0)] = 1e-20\n for b_i in range(n_band):\n for b_j in range(n_band):\n band_i = hist[b_i].reshape(-1)/np.sum(hist[b_i])\n band_j = hist[b_j].reshape(-1)/np.sum(hist[b_j])\n entr_ij = entropy(band_i, band_j)\n entr_ji = entropy(band_j, band_i)\n entr_sum = entr_ij + entr_ji\n info_div += entr_sum\n msd = info_div * 2 / (n_band * (n_band - 1))\n return msd\n",
"_____no_output_____"
],
[
"def MeanSpectralAngle(band_subset):\n \"\"\"\n Spectral Angle (SA) is defined as the angle between two bands.\n We use Mean SA (MSA) to quantify the redundancy among a band set.\n i-th band B_i, and j-th band B_j,\n SA = arccos [B_i^T * B_j / ||B_i|| * ||B_j||]\n MSA = 2/n*(n-1) * sum(SA_ij)\n Ref:\n [1]\tGONG MAOGUO, ZHANG MINGYANG, YUAN YUAN. Unsupervised Band Selection Based on Evolutionary Multiobjective\n Optimization for Hyperspectral Images [J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(1): 544-57.\n :param band_subset: with shape (n_row, n_clm, n_band)\n :return:\n \"\"\"\n n_row, n_column, n_band = band_subset.shape\n spectral_angle = 0\n for i in range(n_band):\n for j in range(n_band):\n band_i = band_subset[i].reshape(-1)\n band_j = band_subset[j].reshape(-1)\n lower = np.sum(band_i ** 2) ** 0.5 * np.sum(band_j ** 2) ** 0.5\n higher = np.dot(band_i, band_j)\n if higher / lower > 1.:\n angle_ij = np.arccos(1. - 1e-16)\n # print('1-higher-lower', higher - lower)\n # elif higher / lower < -1.:\n # angle_ij = np.arccos(1e-8 - 1.)\n # print('2-higher-lower', higher - lower)\n else:\n angle_ij = np.arccos(higher / lower)\n spectral_angle += angle_ij\n msa = spectral_angle * 2 / (n_band * (n_band - 1))\n return msa",
"_____no_output_____"
],
[
"def MSA(bsnlist):\n X, _ = loadData()\n print('[',end=\" \")\n for a in range(2,len(bsnlist)):\n band_subset_list = []\n for i in bsnlist[:a]:\n band_subset_list.append(X[:,:,i]) \n band_subset = np.array(band_subset_list)\n band_subset = np.stack(band_subset,axis =2)\n print(MeanSpectralAngle(band_subset),end=\" \")\n if a!= len(bsnlist)-1:\n print(\",\",end=\" \")\n print(']')\n \n\nMSA(dabsrecnet)\nMSA(bsnetconv)\nMSA(pca)\nMSA(spabs)\nMSA(snmf)\nMSA(issc)\n\n\n",
"_____no_output_____"
],
[
"def MSD(bsnlist):\n X, _ = loadData()\n print('[',end=\" \")\n for a in range(2,len(bsnlist)):\n band_subset_list = []\n for i in bsnlist[:a]:\n band_subset_list.append(X[:,:,i]) \n band_subset = np.array(band_subset_list)\n band_subset = np.stack(band_subset,axis =2)\n print(MeanSpectralDivergence(band_subset),end=\" \")\n if a!= len(bsnlist)-1:\n print(\",\",end=\" \")\n print(']')\n \n\nMSD(dabsrecnet)\nMSD(bsnetconv)\nMSD(pca)\nMSD(spabs)\nMSD(snmf)\nMSD(issc)\n\n\n",
"_____no_output_____"
],
[
"import skimage\nfrom skimage import measure\ndef sumentr(band_subset,X):\n nbands = len(band_subset)\n ENTROPY=np.ones(nbands)\n for i in range(0,len(band_subset)):\n ENTROPY[i]+=skimage.measure.shannon_entropy(X[:,:,band_subset[i]])\n return np.sum(ENTROPY)\n\n ",
"_____no_output_____"
],
[
"def EntropySum(bsnlist):\n X, _ = loadData()\n print('[',end=\" \")\n for a in range(2,len(bsnlist)):\n band_subset_list = []\n for i in bsnlist[:a]:\n band_subset_list.append(X[:,:,i]) \n band_subset = np.array(band_subset_list)\n band_subset = np.stack(band_subset,axis =2)\n print(sumentr(bsnlist[:a],X),end=\" \")\n if a!= len(bsnlist)-1:\n print(\",\",end=\" \")\n print(']')\n \nEntropySum(dabsrecnet)\nEntropySum(bsnetconv)\nEntropySum(pca)\nEntropySum(spabs)\nEntropySum(snmf)\nEntropySum(issc)\n\n",
"_____no_output_____"
],
[
"if not (os.path.isfile('/content/SV.csv')):\n !wget https://raw.githubusercontent.com/ucalyptus/Double-Branch-Dual-Attention-Mechanism-Network/master/SV.csv\nimport pandas as pd\nimport re\nimport warnings\nwarnings.filterwarnings('ignore')\ndf = pd.read_csv(\"/content/SV.csv\")\nimport matplotlib.pyplot as plt\nX, _ = loadData()\nn_row,n_column,n_band= X.shape\nN = n_row * n_column\nhist = []\nEntropy = []\nfor i in range(n_band):\n hist_, _ = np.histogram(X[:, :, i], 256)\n hist.append(hist_ / N)\n band_i = hist[i].reshape(-1)/np.sum(hist[i])\n entr_i = entropy(band_i)\n Entropy.append(entr_i)\n \nfor i in range(0,len(df['Selected Bands'])):\n df['Selected Bands'][i] = re.findall('[0-9]+', df['Selected Bands'][i])\n df['Selected Bands'][i] = [int(k) for k in df['Selected Bands'][i]]\nmeth = [\"BS-Net-Conv\",\"SpaBS\",\"PCA\",\"SNMF\",\"DARecNet-BS\"]\ncols = ['b','y','g','r','m']\nfig1,(ax1,ax2) = plt.subplots(2,sharex='col',figsize=(37,20))\nax1.grid(True)\nax1.yaxis.grid(False)\nax1.set_xticks([0,7,15,30,45,60,75,90,105,120,135,150,165,180,195,205])\nax1.yaxis.set_tick_params(labelsize=55)\nplt.ylabel(meth)\nscatar = []\nfor i in range(0,len(meth)):\n ax1.hlines(y = meth[i],xmin=min(df['Selected Bands'][i]),xmax=max(df['Selected Bands'][i]),colors=cols[i],linewidth=7)\n SCATTER = ax1.scatter(x=df['Selected Bands'][i],y = [i]*20,edgecolors=cols[i-1],linewidths=14)\n scatar.append(SCATTER)\nax2.grid(True)\nax2.yaxis.grid(False)\nax2.set_yticks([1,2,3,4,5])\nax2.set_ylabel(\"Value of Entropy\",fontsize=55)\nax2.set_xlabel(\"Spectral Band\",fontsize=55)\nax2.xaxis.set_tick_params(labelsize=55)\nax2.yaxis.set_tick_params(labelsize=55)\nax2.plot(Entropy,linewidth=7)\nplt.savefig('Entropy_SV.pdf')\n\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d055c03b6eb313c1ff5d687ea3ff474b8ad1f656 | 59,763 | ipynb | Jupyter Notebook | community/aqua/chemistry/h2_iqpe.ipynb | Chibikuri/qiskit-tutorials | 15c121b95249de17e311c869fbc455210b2fcf5e | [
"Apache-2.0"
] | 2 | 2017-11-09T16:33:14.000Z | 2018-02-26T00:42:17.000Z | community/aqua/chemistry/h2_iqpe.ipynb | Chibikuri/qiskit-tutorials | 15c121b95249de17e311c869fbc455210b2fcf5e | [
"Apache-2.0"
] | 1 | 2019-04-12T07:43:25.000Z | 2020-02-07T13:32:18.000Z | community/aqua/chemistry/h2_iqpe.ipynb | Chibikuri/qiskit-tutorials | 15c121b95249de17e311c869fbc455210b2fcf5e | [
"Apache-2.0"
] | 2 | 2019-03-24T21:00:25.000Z | 2019-03-24T21:57:10.000Z | 272.890411 | 30,584 | 0.916487 | [
[
[
"## _*H2 ground state energy computation using Iterative QPE*_\n\nThis notebook demonstrates using Qiskit Chemistry to plot graphs of the ground state energy of the Hydrogen (H2) molecule over a range of inter-atomic distances using IQPE (Iterative Quantum Phase Estimation) algorithm. It is compared to the same energies as computed by the ExactEigensolver\n\nThis notebook populates a dictionary, that is a progammatic representation of an input file, in order to drive the qiskit_chemistry stack. Such a dictionary can be manipulated programmatically and this is indeed the case here where we alter the molecule supplied to the driver in each loop.\n\nThis notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pylab\nfrom qiskit import LegacySimulators\nfrom qiskit_chemistry import QiskitChemistry\nimport time\n\n# Input dictionary to configure Qiskit Chemistry for the chemistry problem.\nqiskit_chemistry_dict = {\n 'driver': {'name': 'PYSCF'},\n 'PYSCF': {'atom': '', 'basis': 'sto3g'},\n 'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'},\n 'algorithm': {'name': ''},\n 'initial_state': {'name': 'HartreeFock'},\n}\nmolecule = 'H .0 .0 -{0}; H .0 .0 {0}'\nalgorithms = [\n {\n 'name': 'IQPE',\n 'num_iterations': 16,\n 'num_time_slices': 3000,\n 'expansion_mode': 'trotter',\n 'expansion_order': 1,\n },\n {\n 'name': 'ExactEigensolver'\n }\n]\n\nbackends = [\n LegacySimulators.get_backend('qasm_simulator'),\n None\n]\n\nstart = 0.5 # Start distance\nby = 0.5 # How much to increase distance by\nsteps = 20 # Number of steps to increase by\nenergies = np.empty([len(algorithms), steps+1])\nhf_energies = np.empty(steps+1)\ndistances = np.empty(steps+1)",
"_____no_output_____"
],
[
"import concurrent.futures\nimport multiprocessing as mp\nimport copy\n\ndef subrountine(i, qiskit_chemistry_dict, d, backend, algorithm):\n solver = QiskitChemistry()\n qiskit_chemistry_dict['PYSCF']['atom'] = molecule.format(d/2) \n qiskit_chemistry_dict['algorithm'] = algorithm\n result = solver.run(qiskit_chemistry_dict, backend=backend)\n return i, d, result['energy'], result['hf_energy']",
"_____no_output_____"
],
[
"start_time = time.time()\nmax_workers = max(4, mp.cpu_count())\nwith concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:\n futures = []\n for j in range(len(algorithms)):\n algorithm = algorithms[j]\n backend = backends[j]\n for i in range(steps+1):\n d = start + i*by/steps\n future = executor.submit(\n subrountine, \n i, \n copy.deepcopy(qiskit_chemistry_dict), \n d, \n backend, \n algorithm\n )\n futures.append(future)\n for future in concurrent.futures.as_completed(futures):\n i, d, energy, hf_energy = future.result()\n energies[j][i] = energy\n hf_energies[i] = hf_energy\n distances[i] = d\n \nprint(' --- complete')\n\nprint('Distances: ', distances)\nprint('Energies:', energies)\nprint('Hartree-Fock energies:', hf_energies)\n\nprint(\"--- %s seconds ---\" % (time.time() - start_time))",
" --- complete\nDistances: [0.5 0.525 0.55 0.575 0.6 0.625 0.65 0.675 0.7 0.725 0.75 0.775\n 0.8 0.825 0.85 0.875 0.9 0.925 0.95 0.975 1. ]\nEnergies: [[-1.05394029 -1.07537168 -1.09193522 -1.10534368 -1.11548918 -1.1232653\n -1.12869848 -1.13338114 -1.13493551 -1.13632972 -1.1364747 -1.13529234\n -1.13323618 -1.13012864 -1.12773585 -1.12335899 -1.11914159 -1.11450112\n -1.10994671 -1.10478822 -1.09957597]\n [-1.05515979 -1.07591366 -1.09262991 -1.10591805 -1.11628601 -1.12416092\n -1.12990478 -1.13382622 -1.13618945 -1.13722138 -1.13711707 -1.13604436\n -1.13414767 -1.13155121 -1.12836188 -1.12467175 -1.12056028 -1.11609624\n -1.11133942 -1.10634211 -1.10115033]]\nHartree-Fock energies: [-1.04299627 -1.06306214 -1.07905074 -1.0915705 -1.10112824 -1.10814999\n -1.11299655 -1.11597526 -1.11734903 -1.11734327 -1.11615145 -1.11393966\n -1.1108504 -1.10700581 -1.10251055 -1.09745432 -1.09191404 -1.08595587\n -1.07963693 -1.07300676 -1.06610865]\n--- 517.6182761192322 seconds ---\n"
],
[
"pylab.plot(distances, hf_energies, label='Hartree-Fock')\nfor j in range(len(algorithms)):\n pylab.plot(distances, energies[j], label=algorithms[j]['name'])\npylab.xlabel('Interatomic distance')\npylab.ylabel('Energy')\npylab.title('H2 Ground State Energy')\npylab.legend(loc='upper right')\npylab.show()",
"_____no_output_____"
],
[
"pylab.plot(distances, np.subtract(hf_energies, energies[1]), label='Hartree-Fock')\npylab.plot(distances, np.subtract(energies[0], energies[1]), label='IQPE')\npylab.xlabel('Interatomic distance')\npylab.ylabel('Energy')\npylab.title('Energy difference from ExactEigensolver')\npylab.legend(loc='upper right')\npylab.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d055c7e9bfdc7877863437b5201b8c8f0c30ac37 | 72,257 | ipynb | Jupyter Notebook | ML Pipeline Preparation.ipynb | Sanmilee/Disaster-Response-Pipeline | 6007a192b835188ae2a261376ce7bd5e323ed5f3 | [
"FTL",
"CNRI-Python"
] | 3 | 2020-04-13T18:05:14.000Z | 2022-02-14T13:31:24.000Z | ML Pipeline Preparation.ipynb | Sanmilee/Disaster-Response-Pipeline | 6007a192b835188ae2a261376ce7bd5e323ed5f3 | [
"FTL",
"CNRI-Python"
] | null | null | null | ML Pipeline Preparation.ipynb | Sanmilee/Disaster-Response-Pipeline | 6007a192b835188ae2a261376ce7bd5e323ed5f3 | [
"FTL",
"CNRI-Python"
] | null | null | null | 36.865816 | 337 | 0.38161 | [
[
[
"# ML Pipeline Preparation\nFollow the instructions below to help you create your ML pipeline.\n### 1. Import libraries and load data from database.\n- Import Python libraries\n- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)\n- Define feature and target variables X and Y",
"_____no_output_____"
]
],
[
[
"# import necessary libraries\nimport pandas as pd\nimport numpy as np\nimport os\nimport pickle\nimport nltk\nimport re\n\nfrom sqlalchemy import create_engine\nimport sqlite3\n\nfrom nltk.tokenize import word_tokenize, RegexpTokenizer\nfrom nltk.stem import WordNetLemmatizer\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\nfrom sklearn.multioutput import MultiOutputClassifier\nfrom sklearn.pipeline import Pipeline, FeatureUnion\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.metrics import classification_report\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.base import BaseEstimator, TransformerMixin\nfrom sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier,AdaBoostClassifier\nfrom sklearn.pipeline import Pipeline, FeatureUnion\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.metrics import make_scorer, accuracy_score, f1_score, fbeta_score, classification_report\nfrom sklearn.metrics import precision_recall_fscore_support\nfrom scipy.stats import hmean\nfrom scipy.stats.mstats import gmean\nfrom nltk.corpus import stopwords\nnltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n",
"[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n[nltk_data] Downloading package wordnet to /root/nltk_data...\n[nltk_data] Package wordnet is already up-to-date!\n[nltk_data] Downloading package averaged_perceptron_tagger to\n[nltk_data] /root/nltk_data...\n[nltk_data] Package averaged_perceptron_tagger is already up-to-\n[nltk_data] date!\n[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n"
],
[
"# load data from database\nengine = create_engine('sqlite:///InsertDatabaseName.db')\ndf = pd.read_sql(\"SELECT * FROM InsertTableName\", engine)\ndf.head()",
"_____no_output_____"
],
[
"# View types of unque 'genre' attribute\ngenre_types = df.genre.value_counts()\ngenre_types",
"_____no_output_____"
],
[
"# check for attributes with missing values/elements\ndf.isnull().mean().head()",
"_____no_output_____"
],
[
"# drops attributes with missing values\ndf.dropna()\ndf.head()",
"_____no_output_____"
],
[
"# load data from database with 'X' as attributes for message column \nX = df[\"message\"]\n# load data from database with 'Y' attributes for the last 36 columns\nY = df.drop(['id', 'message', 'original', 'genre'], axis = 1)",
"_____no_output_____"
]
],
[
[
"### 2. Write a tokenization function to process your text data\n",
"_____no_output_____"
]
],
[
[
"# Proprocess text by removing unwanted properties\n\ndef tokenize(text):\n '''\n input:\n text: input text data containing attributes\n output:\n clean_tokens: cleaned text without unwanted texts\n '''\n \n url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'\n detected_urls = re.findall(url_regex, text)\n for url in detected_urls:\n text = text.replace(url, \"urlplaceholder\")\n \n # take out all punctuation while tokenizing\n tokenizer = RegexpTokenizer(r'\\w+')\n tokens = tokenizer.tokenize(text)\n \n # lemmatize as shown in the lesson\n lemmatizer = WordNetLemmatizer()\n clean_tokens = []\n for tok in tokens:\n clean_tok = lemmatizer.lemmatize(tok).lower().strip()\n clean_tokens.append(clean_tok)\n return clean_tokens",
"_____no_output_____"
]
],
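[
[
"# Added usage sketch: a quick sanity check of the tokenize() helper defined above.\n# The sample message below is made up for illustration; any URL should be replaced by 'urlplaceholder',\n# and the remaining words are lower-cased and lemmatized.\ntokenize('Please send water and medical supplies to the flooded area, more info at http://example.com/report')",
"_____no_output_____"
]
],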
[
[
"### 3. Build a machine learning pipeline\nThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.",
"_____no_output_____"
]
],
[
[
"pipeline = Pipeline([\n ('vect', CountVectorizer(tokenizer=tokenize)),\n ('tfidf', TfidfTransformer()),\n ('clf', MultiOutputClassifier(RandomForestClassifier())),\n ])\n",
"_____no_output_____"
],
[
"# Visualize model parameters\npipeline.get_params()",
"_____no_output_____"
]
],
[
[
"### 4. Train pipeline\n- Split data into train and test sets\n- Train pipeline",
"_____no_output_____"
]
],
[
[
"# use sklearn split function to split dataset into train and 20% test sets\nX_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2)",
"_____no_output_____"
],
[
"# Train pipeline using RandomForest Classifier algorithm\npipeline.fit(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"### 5. Test your model\nReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's classification_report on each.",
"_____no_output_____"
]
],
[
[
"# Output result metrics of trained RandomForest Classifier algorithm\ndef evaluate_model(model, X_test, y_test):\n '''\n Input:\n model: RandomForest Classifier trained model\n X_test: Test training features\n Y_test: Test training response variable\n Output:\n None: \n Display model precision, recall, f1-score, support \n '''\n y_pred = model.predict(X_test)\n for item, col in enumerate(y_test):\n print(col)\n print(classification_report(y_test[col], y_pred[:, item]))",
"_____no_output_____"
],
[
"# classification_report to display model precision, recall, f1-score, support\nevaluate_model(pipeline, X_test, y_test)\n",
"related\n precision recall f1-score support\n\n 0 0.65 0.38 0.48 1193\n 1 0.83 0.94 0.88 4016\n 2 0.50 0.43 0.46 35\n\navg / total 0.79 0.81 0.79 5244\n\nrequest\n precision recall f1-score support\n\n 0 0.89 0.98 0.93 4361\n 1 0.82 0.39 0.53 883\n\navg / total 0.88 0.88 0.87 5244\n\noffer\n precision recall f1-score support\n\n 0 0.99 1.00 1.00 5210\n 1 0.00 0.00 0.00 34\n\navg / total 0.99 0.99 0.99 5244\n\naid_related\n precision recall f1-score support\n\n 0 0.72 0.88 0.79 3049\n 1 0.75 0.53 0.62 2195\n\navg / total 0.74 0.73 0.72 5244\n\nmedical_help\n precision recall f1-score support\n\n 0 0.92 1.00 0.96 4805\n 1 0.71 0.08 0.14 439\n\navg / total 0.90 0.92 0.89 5244\n\nmedical_products\n precision recall f1-score support\n\n 0 0.95 1.00 0.98 4984\n 1 0.60 0.07 0.12 260\n\navg / total 0.94 0.95 0.93 5244\n\nsearch_and_rescue\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5106\n 1 0.67 0.10 0.18 138\n\navg / total 0.97 0.98 0.97 5244\n\nsecurity\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5151\n 1 0.25 0.01 0.02 93\n\navg / total 0.97 0.98 0.97 5244\n\nmilitary\n precision recall f1-score support\n\n 0 0.97 1.00 0.98 5069\n 1 0.67 0.07 0.12 175\n\navg / total 0.96 0.97 0.95 5244\n\nchild_alone\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5244\n\navg / total 1.00 1.00 1.00 5244\n\nwater\n precision recall f1-score support\n\n 0 0.95 1.00 0.97 4897\n 1 0.82 0.30 0.44 347\n\navg / total 0.94 0.95 0.94 5244\n\nfood\n precision recall f1-score support\n\n 0 0.94 0.99 0.96 4655\n 1 0.83 0.46 0.59 589\n\navg / total 0.92 0.93 0.92 5244\n\nshelter\n precision recall f1-score support\n\n 0 0.93 0.99 0.96 4761\n 1 0.82 0.30 0.44 483\n\navg / total 0.92 0.93 0.91 5244\n\nclothing\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5150\n 1 1.00 0.05 0.10 94\n\navg / total 0.98 0.98 0.98 5244\n\nmoney\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5133\n 1 0.75 0.05 0.10 111\n\navg / total 0.98 0.98 0.97 5244\n\nmissing_people\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5181\n 1 0.75 0.05 0.09 63\n\navg / total 0.99 0.99 0.98 5244\n\nrefugees\n precision recall f1-score support\n\n 0 0.97 1.00 0.99 5091\n 1 0.82 0.06 0.11 153\n\navg / total 0.97 0.97 0.96 5244\n\ndeath\n precision recall f1-score support\n\n 0 0.96 1.00 0.98 5021\n 1 0.77 0.11 0.19 223\n\navg / total 0.95 0.96 0.95 5244\n\nother_aid\n precision recall f1-score support\n\n 0 0.87 0.99 0.93 4531\n 1 0.54 0.04 0.07 713\n\navg / total 0.82 0.86 0.81 5244\n\ninfrastructure_related\n precision recall f1-score support\n\n 0 0.94 1.00 0.97 4907\n 1 0.00 0.00 0.00 337\n\navg / total 0.88 0.93 0.90 5244\n\ntransport\n precision recall f1-score support\n\n 0 0.95 1.00 0.97 4977\n 1 0.61 0.06 0.12 267\n\navg / total 0.93 0.95 0.93 5244\n\nbuildings\n precision recall f1-score support\n\n 0 0.95 1.00 0.97 4966\n 1 0.87 0.07 0.13 278\n\navg / total 0.95 0.95 0.93 5244\n\nelectricity\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5138\n 1 0.83 0.09 0.17 106\n\navg / total 0.98 0.98 0.97 5244\n\ntools\n precision recall f1-score support\n\n 0 0.99 1.00 1.00 5209\n 1 0.00 0.00 0.00 35\n\navg / total 0.99 0.99 0.99 5244\n\nhospitals\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5189\n 1 0.00 0.00 0.00 55\n\navg / total 0.98 0.99 0.98 5244\n\nshops\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5218\n 1 0.00 0.00 0.00 26\n\navg / total 0.99 1.00 0.99 5244\n\naid_centers\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5185\n 1 0.00 0.00 0.00 
59\n\navg / total 0.98 0.99 0.98 5244\n\nother_infrastructure\n precision recall f1-score support\n\n 0 0.96 1.00 0.98 5011\n 1 0.25 0.00 0.01 233\n\navg / total 0.92 0.96 0.93 5244\n\nweather_related\n precision recall f1-score support\n\n 0 0.85 0.97 0.90 3801\n 1 0.85 0.53 0.66 1443\n\navg / total 0.85 0.85 0.83 5244\n\nfloods\n precision recall f1-score support\n\n 0 0.93 1.00 0.96 4798\n 1 0.87 0.23 0.37 446\n\navg / total 0.93 0.93 0.91 5244\n\nstorm\n precision recall f1-score support\n\n 0 0.94 0.99 0.96 4758\n 1 0.77 0.35 0.48 486\n\navg / total 0.92 0.93 0.92 5244\n\nfire\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5186\n 1 1.00 0.02 0.03 58\n\navg / total 0.99 0.99 0.98 5244\n\nearthquake\n precision recall f1-score support\n\n 0 0.96 0.99 0.98 4769\n 1 0.90 0.61 0.73 475\n\navg / total 0.96 0.96 0.95 5244\n\ncold\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5150\n 1 0.90 0.10 0.17 94\n\navg / total 0.98 0.98 0.98 5244\n\nother_weather\n precision recall f1-score support\n\n 0 0.95 1.00 0.97 4958\n 1 0.46 0.04 0.08 286\n\navg / total 0.92 0.95 0.92 5244\n\ndirect_report\n precision recall f1-score support\n\n 0 0.85 0.98 0.91 4197\n 1 0.78 0.30 0.43 1047\n\navg / total 0.83 0.84 0.81 5244\n\n"
]
],
[
[
"### 6. Improve your model\nUse grid search to find better parameters.",
"_____no_output_____"
]
],
[
[
"parameters = {'clf__estimator__max_depth': [10, 50, None],\n 'clf__estimator__min_samples_leaf':[2, 5, 10]}\n\ncv = GridSearchCV(pipeline, parameters)",
"_____no_output_____"
]
],
[
[
"### 7. Test your model\nShow the accuracy, precision, and recall of the tuned model.\n\nSince this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!",
"_____no_output_____"
]
],
[
[
"# Train pipeline using the improved model\ncv.fit(X_train, y_train)",
"_____no_output_____"
],
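[
"# Added sketch: after fitting, GridSearchCV exposes the best hyper-parameter combination it found\n# via best_params_. Shown here as an optional check; the exact values depend on the search above.\nprint(cv.best_params_)",
"_____no_output_____"
],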
[
"# # classification_report to display model precision, recall, f1-score, support\nevaluate_model(cv, X_test, y_test)\n",
"related\n precision recall f1-score support\n\n 0 0.73 0.25 0.37 1193\n 1 0.81 0.97 0.88 4016\n 2 1.00 0.14 0.25 35\n\navg / total 0.79 0.80 0.76 5244\n\nrequest\n precision recall f1-score support\n\n 0 0.88 0.99 0.93 4361\n 1 0.88 0.34 0.49 883\n\navg / total 0.88 0.88 0.86 5244\n\noffer\n precision recall f1-score support\n\n 0 0.99 1.00 1.00 5210\n 1 0.00 0.00 0.00 34\n\navg / total 0.99 0.99 0.99 5244\n\naid_related\n precision recall f1-score support\n\n 0 0.75 0.85 0.80 3049\n 1 0.74 0.62 0.67 2195\n\navg / total 0.75 0.75 0.75 5244\n\nmedical_help\n precision recall f1-score support\n\n 0 0.92 1.00 0.96 4805\n 1 0.50 0.03 0.06 439\n\navg / total 0.88 0.92 0.88 5244\n\nmedical_products\n precision recall f1-score support\n\n 0 0.95 1.00 0.98 4984\n 1 0.83 0.07 0.13 260\n\navg / total 0.95 0.95 0.93 5244\n\nsearch_and_rescue\n precision recall f1-score support\n\n 0 0.97 1.00 0.99 5106\n 1 0.80 0.03 0.06 138\n\navg / total 0.97 0.97 0.96 5244\n\nsecurity\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5151\n 1 0.00 0.00 0.00 93\n\navg / total 0.96 0.98 0.97 5244\n\nmilitary\n precision recall f1-score support\n\n 0 0.97 1.00 0.98 5069\n 1 0.62 0.05 0.09 175\n\navg / total 0.96 0.97 0.95 5244\n\nchild_alone\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5244\n\navg / total 1.00 1.00 1.00 5244\n\nwater\n precision recall f1-score support\n\n 0 0.95 1.00 0.97 4897\n 1 0.84 0.30 0.44 347\n\navg / total 0.95 0.95 0.94 5244\n\nfood\n precision recall f1-score support\n\n 0 0.92 0.99 0.96 4655\n 1 0.88 0.31 0.46 589\n\navg / total 0.91 0.92 0.90 5244\n\nshelter\n precision recall f1-score support\n\n 0 0.92 1.00 0.96 4761\n 1 0.86 0.09 0.16 483\n\navg / total 0.91 0.91 0.88 5244\n\nclothing\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5150\n 1 0.50 0.01 0.02 94\n\navg / total 0.97 0.98 0.97 5244\n\nmoney\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5133\n 1 1.00 0.01 0.02 111\n\navg / total 0.98 0.98 0.97 5244\n\nmissing_people\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5181\n 1 0.00 0.00 0.00 63\n\navg / total 0.98 0.99 0.98 5244\n\nrefugees\n precision recall f1-score support\n\n 0 0.97 1.00 0.99 5091\n 1 1.00 0.01 0.01 153\n\navg / total 0.97 0.97 0.96 5244\n\ndeath\n precision recall f1-score support\n\n 0 0.96 1.00 0.98 5021\n 1 0.75 0.04 0.08 223\n\navg / total 0.95 0.96 0.94 5244\n\nother_aid\n precision recall f1-score support\n\n 0 0.86 1.00 0.93 4531\n 1 1.00 0.00 0.01 713\n\navg / total 0.88 0.86 0.80 5244\n\ninfrastructure_related\n precision recall f1-score support\n\n 0 0.94 1.00 0.97 4907\n 1 0.50 0.00 0.01 337\n\navg / total 0.91 0.94 0.91 5244\n\ntransport\n precision recall f1-score support\n\n 0 0.95 1.00 0.97 4977\n 1 0.77 0.04 0.07 267\n\navg / total 0.94 0.95 0.93 5244\n\nbuildings\n precision recall f1-score support\n\n 0 0.95 1.00 0.97 4966\n 1 0.92 0.04 0.08 278\n\navg / total 0.95 0.95 0.93 5244\n\nelectricity\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5138\n 1 1.00 0.01 0.02 106\n\navg / total 0.98 0.98 0.97 5244\n\ntools\n precision recall f1-score support\n\n 0 0.99 1.00 1.00 5209\n 1 0.00 0.00 0.00 35\n\navg / total 0.99 0.99 0.99 5244\n\nhospitals\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5189\n 1 0.00 0.00 0.00 55\n\navg / total 0.98 0.99 0.98 5244\n\nshops\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5218\n 1 0.00 0.00 0.00 26\n\navg / total 0.99 1.00 0.99 5244\n\naid_centers\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5185\n 1 0.00 0.00 0.00 
59\n\navg / total 0.98 0.99 0.98 5244\n\nother_infrastructure\n precision recall f1-score support\n\n 0 0.96 1.00 0.98 5011\n 1 0.00 0.00 0.00 233\n\navg / total 0.91 0.96 0.93 5244\n\nweather_related\n precision recall f1-score support\n\n 0 0.86 0.96 0.91 3801\n 1 0.85 0.59 0.70 1443\n\navg / total 0.86 0.86 0.85 5244\n\nfloods\n precision recall f1-score support\n\n 0 0.93 1.00 0.96 4798\n 1 0.91 0.18 0.30 446\n\navg / total 0.93 0.93 0.91 5244\n\nstorm\n precision recall f1-score support\n\n 0 0.93 0.99 0.96 4758\n 1 0.75 0.32 0.45 486\n\navg / total 0.92 0.93 0.91 5244\n\nfire\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5186\n 1 0.00 0.00 0.00 58\n\navg / total 0.98 0.99 0.98 5244\n\nearthquake\n precision recall f1-score support\n\n 0 0.94 0.99 0.97 4769\n 1 0.87 0.39 0.53 475\n\navg / total 0.94 0.94 0.93 5244\n\ncold\n precision recall f1-score support\n\n 0 0.98 1.00 0.99 5150\n 1 0.83 0.05 0.10 94\n\navg / total 0.98 0.98 0.98 5244\n\nother_weather\n precision recall f1-score support\n\n 0 0.95 1.00 0.97 4958\n 1 0.83 0.02 0.03 286\n\navg / total 0.94 0.95 0.92 5244\n\ndirect_report\n precision recall f1-score support\n\n 0 0.85 0.99 0.91 4197\n 1 0.85 0.29 0.43 1047\n\navg / total 0.85 0.85 0.82 5244\n\n"
],
[
"cv.best_estimator_",
"_____no_output_____"
]
],
[
[
"### 8. Try improving your model further. Here are a few ideas:\n* try other machine learning algorithms\n* add other features besides the TF-IDF",
"_____no_output_____"
]
],
[
[
"# Improve model using DecisionTree Classifier\n\nnew_pipeline = Pipeline([\n ('vect', CountVectorizer(tokenizer=tokenize)),\n ('tfidf', TfidfTransformer()),\n ('clf', MultiOutputClassifier(DecisionTreeClassifier()))\n ])\n",
"_____no_output_____"
],
[
"# Train improved model\nnew_pipeline.fit(X_train, y_train)\n",
"_____no_output_____"
],
[
"# Run result metric score display function\nevaluate_model(new_pipeline, X_test, y_test)\n",
"related\n precision recall f1-score support\n\n 0 0.47 0.45 0.46 1193\n 1 0.84 0.85 0.84 4016\n 2 0.31 0.40 0.35 35\n\navg / total 0.75 0.75 0.75 5244\n\nrequest\n precision recall f1-score support\n\n 0 0.92 0.92 0.92 4361\n 1 0.60 0.61 0.60 883\n\navg / total 0.87 0.87 0.87 5244\n\noffer\n precision recall f1-score support\n\n 0 0.99 1.00 1.00 5210\n 1 0.00 0.00 0.00 34\n\navg / total 0.99 0.99 0.99 5244\n\naid_related\n precision recall f1-score support\n\n 0 0.75 0.75 0.75 3049\n 1 0.65 0.65 0.65 2195\n\navg / total 0.71 0.71 0.71 5244\n\nmedical_help\n precision recall f1-score support\n\n 0 0.94 0.95 0.94 4805\n 1 0.33 0.30 0.31 439\n\navg / total 0.89 0.89 0.89 5244\n\nmedical_products\n precision recall f1-score support\n\n 0 0.97 0.97 0.97 4984\n 1 0.40 0.35 0.37 260\n\navg / total 0.94 0.94 0.94 5244\n\nsearch_and_rescue\n precision recall f1-score support\n\n 0 0.98 0.98 0.98 5106\n 1 0.22 0.20 0.21 138\n\navg / total 0.96 0.96 0.96 5244\n\nsecurity\n precision recall f1-score support\n\n 0 0.98 0.99 0.98 5151\n 1 0.04 0.03 0.03 93\n\navg / total 0.97 0.97 0.97 5244\n\nmilitary\n precision recall f1-score support\n\n 0 0.98 0.98 0.98 5069\n 1 0.39 0.37 0.38 175\n\navg / total 0.96 0.96 0.96 5244\n\nchild_alone\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5244\n\navg / total 1.00 1.00 1.00 5244\n\nwater\n precision recall f1-score support\n\n 0 0.98 0.98 0.98 4897\n 1 0.67 0.67 0.67 347\n\navg / total 0.96 0.96 0.96 5244\n\nfood\n precision recall f1-score support\n\n 0 0.96 0.96 0.96 4655\n 1 0.72 0.71 0.71 589\n\navg / total 0.94 0.94 0.94 5244\n\nshelter\n precision recall f1-score support\n\n 0 0.96 0.96 0.96 4761\n 1 0.62 0.59 0.61 483\n\navg / total 0.93 0.93 0.93 5244\n\nclothing\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5150\n 1 0.62 0.40 0.49 94\n\navg / total 0.98 0.98 0.98 5244\n\nmoney\n precision recall f1-score support\n\n 0 0.99 0.99 0.99 5133\n 1 0.40 0.38 0.39 111\n\navg / total 0.97 0.97 0.97 5244\n\nmissing_people\n precision recall f1-score support\n\n 0 0.99 0.99 0.99 5181\n 1 0.27 0.21 0.23 63\n\navg / total 0.98 0.98 0.98 5244\n\nrefugees\n precision recall f1-score support\n\n 0 0.98 0.98 0.98 5091\n 1 0.24 0.25 0.25 153\n\navg / total 0.96 0.95 0.96 5244\n\ndeath\n precision recall f1-score support\n\n 0 0.98 0.98 0.98 5021\n 1 0.49 0.53 0.51 223\n\navg / total 0.96 0.96 0.96 5244\n\nother_aid\n precision recall f1-score support\n\n 0 0.89 0.90 0.89 4531\n 1 0.29 0.27 0.28 713\n\navg / total 0.81 0.81 0.81 5244\n\ninfrastructure_related\n precision recall f1-score support\n\n 0 0.94 0.95 0.95 4907\n 1 0.18 0.16 0.17 337\n\navg / total 0.89 0.90 0.90 5244\n\ntransport\n precision recall f1-score support\n\n 0 0.96 0.97 0.97 4977\n 1 0.36 0.29 0.32 267\n\navg / total 0.93 0.94 0.93 5244\n\nbuildings\n precision recall f1-score support\n\n 0 0.97 0.97 0.97 4966\n 1 0.43 0.40 0.42 278\n\navg / total 0.94 0.94 0.94 5244\n\nelectricity\n precision recall f1-score support\n\n 0 0.99 0.99 0.99 5138\n 1 0.39 0.31 0.35 106\n\navg / total 0.97 0.98 0.97 5244\n\ntools\n precision recall f1-score support\n\n 0 0.99 1.00 0.99 5209\n 1 0.05 0.03 0.04 35\n\navg / total 0.99 0.99 0.99 5244\n\nhospitals\n precision recall f1-score support\n\n 0 0.99 0.99 0.99 5189\n 1 0.22 0.18 0.20 55\n\navg / total 0.98 0.98 0.98 5244\n\nshops\n precision recall f1-score support\n\n 0 1.00 1.00 1.00 5218\n 1 0.00 0.00 0.00 26\n\navg / total 0.99 0.99 0.99 5244\n\naid_centers\n precision recall f1-score support\n\n 0 0.99 0.99 0.99 5185\n 1 0.08 0.08 0.08 
59\n\navg / total 0.98 0.98 0.98 5244\n\nother_infrastructure\n precision recall f1-score support\n\n 0 0.96 0.97 0.96 5011\n 1 0.15 0.13 0.14 233\n\navg / total 0.92 0.93 0.93 5244\n\nweather_related\n precision recall f1-score support\n\n 0 0.89 0.91 0.90 3801\n 1 0.74 0.71 0.72 1443\n\navg / total 0.85 0.85 0.85 5244\n\nfloods\n precision recall f1-score support\n\n 0 0.96 0.96 0.96 4798\n 1 0.59 0.54 0.57 446\n\navg / total 0.93 0.93 0.93 5244\n\nstorm\n precision recall f1-score support\n\n 0 0.96 0.97 0.97 4758\n 1 0.66 0.65 0.65 486\n\navg / total 0.94 0.94 0.94 5244\n\nfire\n precision recall f1-score support\n\n 0 0.99 0.99 0.99 5186\n 1 0.31 0.29 0.30 58\n\navg / total 0.98 0.99 0.98 5244\n\nearthquake\n precision recall f1-score support\n\n 0 0.98 0.98 0.98 4769\n 1 0.80 0.78 0.79 475\n\navg / total 0.96 0.96 0.96 5244\n\ncold\n precision recall f1-score support\n\n 0 0.99 0.99 0.99 5150\n 1 0.34 0.38 0.36 94\n\navg / total 0.98 0.98 0.98 5244\n\nother_weather\n precision recall f1-score support\n\n 0 0.96 0.96 0.96 4958\n 1 0.26 0.22 0.24 286\n\navg / total 0.92 0.92 0.92 5244\n\ndirect_report\n precision recall f1-score support\n\n 0 0.88 0.89 0.88 4197\n 1 0.54 0.50 0.52 1047\n\navg / total 0.81 0.81 0.81 5244\n\n"
]
],
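[
[
"# Added sketch (not in the original notebook): one way to 'add other features besides the TF-IDF',\n# as suggested above, is a custom transformer combined with the text pipeline through FeatureUnion.\n# TextLengthExtractor and feature_union_pipeline are illustrative names; the extra feature here is\n# simply the character length of each message.\nclass TextLengthExtractor(BaseEstimator, TransformerMixin):\n    def fit(self, X, y=None):\n        return self\n    def transform(self, X):\n        # character length of each message as an additional numeric feature\n        return np.array([len(text) for text in X]).reshape(-1, 1)\n\nfeature_union_pipeline = Pipeline([\n    ('features', FeatureUnion([\n        ('text_pipeline', Pipeline([\n            ('vect', CountVectorizer(tokenizer=tokenize)),\n            ('tfidf', TfidfTransformer())\n        ])),\n        ('text_length', TextLengthExtractor())\n    ])),\n    ('clf', MultiOutputClassifier(RandomForestClassifier()))\n])",
"_____no_output_____"
]
],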
[
[
"### 9. Export your model as a pickle file",
"_____no_output_____"
]
],
[
[
"# save a copy file of the the trained model to disk\n\ntrained_model_file = 'trained_model.sav'\npickle.dump(cv, open(trained_model_file, 'wb'))",
"_____no_output_____"
]
],
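[
[
"# Added sketch: the saved model can be loaded back and used to classify raw message text.\n# The sample message below is made up for illustration.\nloaded_model = pickle.load(open(trained_model_file, 'rb'))\nloaded_model.predict(['We need food and shelter after the earthquake'])",
"_____no_output_____"
]
],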
[
[
"### 10. Use this notebook to complete `train.py`\nUse the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d055d04f59fa4053e64d9851680ee8ccea505ab1 | 816,995 | ipynb | Jupyter Notebook | WaymoNewtoCOCO.ipynb | lkk688/WaymoObjectDetection | c470f2648de69ec8a547269f16bb2f2868d9e05e | [
"MIT"
] | 6 | 2020-10-01T20:50:46.000Z | 2021-12-06T13:52:41.000Z | .ipynb_checkpoints/WaymoNewtoCOCO-checkpoint.ipynb | lkk688/WaymoObjectDetection | c470f2648de69ec8a547269f16bb2f2868d9e05e | [
"MIT"
] | null | null | null | .ipynb_checkpoints/WaymoNewtoCOCO-checkpoint.ipynb | lkk688/WaymoObjectDetection | c470f2648de69ec8a547269f16bb2f2868d9e05e | [
"MIT"
] | 4 | 2020-12-14T06:51:04.000Z | 2021-11-13T11:12:37.000Z | 59.835579 | 2,875 | 0.666644 | [
[
[
"#using tensorflow kernel\nimport tensorflow as tf\nprint(tf.__version__)\n!pip list | grep waymo\n!pip list | grep torch",
"2.3.0\nwaymo-open-dataset-tf-2-0-0 1.2.0\nwaymo-open-dataset-tf-2-1-0 1.2.0\n\u001b[33mWARNING: You are using pip version 20.2.2; however, version 20.2.3 is available.\nYou should consider upgrading via the '/home/010796032/newvenv2/bin/python3.6 -m pip install --upgrade pip' command.\u001b[0m\ntorch 1.5.1+cu101\ntorchvision 0.6.1+cu101\n\u001b[33mWARNING: You are using pip version 20.2.2; however, version 20.2.3 is available.\nYou should consider upgrading via the '/home/010796032/newvenv2/bin/python3.6 -m pip install --upgrade pip' command.\u001b[0m\n"
],
[
"!nvidia-smi",
"Sun Sep 13 11:41:01 2020 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 418.87.00 Driver Version: 418.87.00 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 Tesla P100-PCIE... On | 00000000:03:00.0 Off | 0 |\n| N/A 30C P0 25W / 250W | 0MiB / 12198MiB | 0% Default |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n"
],
[
"import tensorflow.compat.v1 as tf\nimport math\nimport numpy as np\nimport itertools\n\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\n\nfrom waymo_open_dataset.utils import range_image_utils\nfrom waymo_open_dataset.utils import transform_utils\nfrom waymo_open_dataset.utils import frame_utils\nfrom waymo_open_dataset import dataset_pb2 as open_dataset\n#tf.enable_eager_execution()",
"_____no_output_____"
],
[
"import os\nimport argparse\nfrom pathlib import Path\nimport cv2\nimport json\nimport utils\nfrom PIL import Image\nfrom glob import glob\nimport sys\nimport datetime\nimport os",
"_____no_output_____"
],
[
"WAYMO_CLASSES = ['unknown', 'vehicle', 'pedestrian', 'sign', 'cyclist']\n\ndef get_camera_labels(frame):\n if frame.camera_labels:\n return frame.camera_labels\n return frame.projected_lidar_labels\n\ndef extract_segment_frontcamera(tfrecord_files, out_dir, step):\n \n images = []\n annotations = []\n categories = [{'id': i, 'name': n} for i, n in enumerate(WAYMO_CLASSES)][1:]\n image_globeid=0\n \n for segment_path in tfrecord_files:\n\n print(f'extracting {segment_path}')\n segment_path=Path(segment_path)#convert str to Path object\n segment_name = segment_path.name\n print(segment_name)\n segment_out_dir = out_dir # remove segment_name as one folder, duplicate with image name\n # segment_out_dir = out_dir / segment_name \n # print(segment_out_dir)#output path + segment_name(with tfrecord)\n # segment_out_dir.mkdir(parents=True, exist_ok=True)\n\n dataset = tf.data.TFRecordDataset(str(segment_path), compression_type='')\n \n for i, data in enumerate(dataset):\n if i % step != 0:\n continue\n\n print('.', end='', flush=True)\n frame = open_dataset.Frame()\n frame.ParseFromString(bytearray(data.numpy()))\n #get one frame\n\n context_name = frame.context.name\n frame_timestamp_micros = str(frame.timestamp_micros)\n\n for index, image in enumerate(frame.images):\n if image.name != 1: #Only use front camera\n continue\n camera_name = open_dataset.CameraName.Name.Name(image.name)\n image_globeid = image_globeid + 1\n #print(\"camera name:\", camera_name)\n\n img = tf.image.decode_jpeg(image.image).numpy()\n image_name='_'.join([frame_timestamp_micros, camera_name])#image name\n image_id = '/'.join([context_name, image_name]) #using \"/\" join, context_name is the folder\n #New: do not use sub-folder\n image_id = '_'.join([context_name, image_name])\n #image_id = '/'.join([context_name, frame_timestamp_micros, camera_name]) #using \"/\" join\n file_name = image_id + '.jpg'\n #print(file_name)\n filepath = out_dir / file_name\n #filepath = segment_out_dir / file_name\n #print('Image output path',filepath)\n filepath.parent.mkdir(parents=True, exist_ok=True)\n\n #images.append(dict(file_name=file_name, id=image_id, height=img.shape[0], width=img.shape[1], camera_name=camera_name))#new add camera_name\n images.append(dict(file_name=file_name, id=image_globeid, height=img.shape[0], width=img.shape[1], camera_name=camera_name))#new add camera_name\n print(\"current image id: \", image_globeid)\n cv2.imwrite(str(filepath), img)\n\n for camera_labels in get_camera_labels(frame):\n # Ignore camera labels that do not correspond to this camera.\n if camera_labels.name == image.name:\n # Iterate over the individual labels.\n for label in camera_labels.labels:\n # object bounding box.\n width = int(label.box.length)\n height = int(label.box.width)\n x = int(label.box.center_x - 0.5 * width)\n y = int(label.box.center_y - 0.5 * height)\n area = width * height\n annotations.append(dict(image_id=image_globeid,\n bbox=[x, y, width, height], area=area, category_id=label.type,\n object_id=label.id,\n tracking_difficulty_level=2 if label.tracking_difficulty_level == 2 else 1,\n detection_difficulty_level=2 if label.detection_difficulty_level == 2 else 1))\n\n with (segment_out_dir / 'annotations.json').open('w') as f:\n for i, anno in enumerate(annotations):\n anno['id'] = i #set as image frame ID\n json.dump(dict(images=images, annotations=annotations, categories=categories), f)\n\ndef extract_segment_allcamera(tfrecord_files, out_dir, step):\n \n images = []\n annotations = []\n categories = [{'id': i, 'name': 
n} for i, n in enumerate(WAYMO_CLASSES)][1:]\n image_globeid=0\n \n for segment_path in tfrecord_files:\n\n print(f'extracting {segment_path}')\n segment_path=Path(segment_path)#convert str to Path object\n segment_name = segment_path.name\n print(segment_name)\n segment_out_dir = out_dir # remove segment_name as one folder, duplicate with image name\n # segment_out_dir = out_dir / segment_name \n # print(segment_out_dir)#output path + segment_name(with tfrecord)\n # segment_out_dir.mkdir(parents=True, exist_ok=True)\n\n dataset = tf.data.TFRecordDataset(str(segment_path), compression_type='')\n \n for i, data in enumerate(dataset):\n if i % step != 0:\n continue\n\n print('.', end='', flush=True)\n frame = open_dataset.Frame()\n frame.ParseFromString(bytearray(data.numpy()))\n #get one frame\n\n context_name = frame.context.name\n frame_timestamp_micros = str(frame.timestamp_micros)\n\n for index, image in enumerate(frame.images):\n camera_name = open_dataset.CameraName.Name.Name(image.name)\n image_globeid = image_globeid + 1\n #print(\"camera name:\", camera_name)\n\n img = tf.image.decode_jpeg(image.image).numpy()\n image_name='_'.join([frame_timestamp_micros, camera_name])#image name\n image_id = '/'.join([context_name, image_name]) #using \"/\" join, context_name is the folder\n #New: use sub-folder\n #image_id = '_'.join([context_name, image_name])\n image_id = '/'.join([context_name, frame_timestamp_micros, camera_name]) #using \"/\" join\n file_name = image_id + '.jpg'\n #print(file_name)\n filepath = out_dir / file_name\n #filepath = segment_out_dir / file_name\n #print('Image output path',filepath)\n filepath.parent.mkdir(parents=True, exist_ok=True)\n\n #images.append(dict(file_name=file_name, id=image_id, height=img.shape[0], width=img.shape[1], camera_name=camera_name))#new add camera_name\n images.append(dict(file_name=file_name, id=image_globeid, height=img.shape[0], width=img.shape[1], camera_name=camera_name))#new add camera_name\n print(\"current image id: \", image_globeid)\n cv2.imwrite(str(filepath), img)\n\n for camera_labels in get_camera_labels(frame):\n # Ignore camera labels that do not correspond to this camera.\n if camera_labels.name == image.name:\n # Iterate over the individual labels.\n for label in camera_labels.labels:\n # object bounding box.\n width = int(label.box.length)\n height = int(label.box.width)\n x = int(label.box.center_x - 0.5 * width)\n y = int(label.box.center_y - 0.5 * height)\n area = width * height\n annotations.append(dict(image_id=image_globeid,\n bbox=[x, y, width, height], area=area, category_id=label.type,\n object_id=label.id,\n tracking_difficulty_level=2 if label.tracking_difficulty_level == 2 else 1,\n detection_difficulty_level=2 if label.detection_difficulty_level == 2 else 1))\n\n with (segment_out_dir / 'annotations.json').open('w') as f:\n for i, anno in enumerate(annotations):\n anno['id'] = i #set as image frame ID\n json.dump(dict(images=images, annotations=annotations, categories=categories), f)\n\ndef extract_segment_allfrontcamera(PATH,folderslist, out_dir, step):\n \n #folderslist = [\"training_0031\",\"training_0030\",\"training_0029\",\"training_0028\",\"training_0027\",\"training_0026\"]\n #PATH='/data/cmpe295-liu/Waymo'\n images = []\n annotations = []\n categories = [{'id': i, 'name': n} for i, n in enumerate(WAYMO_CLASSES)][1:]\n image_globeid=0\n \n for index in range(len(folderslist)):\n foldername=folderslist[index]\n print(\"Folder name:\", foldername)\n tfrecord_files = glob(os.path.join(PATH, foldername, 
\"*.tfrecord\")) #[path for path in glob(os.path.join(PATH, foldername, \"*.tfrecord\"))]\n print(\"Num of tfrecord file:\", len(tfrecord_files))\n #print(tfrecord_files)\n \n for segment_path in tfrecord_files:\n\n print(f'extracting {segment_path}')\n segment_path=Path(segment_path)#convert str to Path object\n segment_name = segment_path.name\n print(segment_name)\n segment_out_dir = out_dir # remove segment_name as one folder, duplicate with image name\n # segment_out_dir = out_dir / segment_name \n # print(segment_out_dir)#output path + segment_name(with tfrecord)\n # segment_out_dir.mkdir(parents=True, exist_ok=True)\n\n dataset = tf.data.TFRecordDataset(str(segment_path), compression_type='')\n\n for i, data in enumerate(dataset):\n if i % step != 0:\n continue\n\n print('.', end='', flush=True)\n frame = open_dataset.Frame()\n frame.ParseFromString(bytearray(data.numpy()))\n #get one frame\n\n context_name = frame.context.name\n frame_timestamp_micros = str(frame.timestamp_micros)\n\n for index, image in enumerate(frame.images):\n if image.name != 1: #Only use front camera\n continue\n camera_name = open_dataset.CameraName.Name.Name(image.name)\n image_globeid = image_globeid + 1\n #print(\"camera name:\", camera_name)\n\n img = tf.image.decode_jpeg(image.image).numpy()\n image_name='_'.join([frame_timestamp_micros, camera_name])#image name\n #image_id = '/'.join([context_name, image_name]) #using \"/\" join, context_name is the folder\n #New: do not use sub-folder\n image_id = '_'.join([context_name, image_name])\n #image_id = '/'.join([context_name, frame_timestamp_micros, camera_name]) #using \"/\" join\n \n file_name = image_id + '.jpg'\n #print(file_name)\n file_name = '/'.join([foldername, file_name])\n filepath = out_dir / file_name\n #filepath = segment_out_dir / file_name\n #print('Image output path',filepath)\n filepath.parent.mkdir(parents=True, exist_ok=True)\n\n #images.append(dict(file_name=file_name, id=image_id, height=img.shape[0], width=img.shape[1], camera_name=camera_name))#new add camera_name\n images.append(dict(file_name=file_name, id=image_globeid, height=img.shape[0], width=img.shape[1], camera_name=camera_name))#new add camera_name\n #print(\"current image id: \", image_globeid)\n cv2.imwrite(str(filepath), img)\n\n for camera_labels in get_camera_labels(frame):\n # Ignore camera labels that do not correspond to this camera.\n if camera_labels.name == image.name:\n # Iterate over the individual labels.\n for label in camera_labels.labels:\n # object bounding box.\n width = int(label.box.length)\n height = int(label.box.width)\n x = int(label.box.center_x - 0.5 * width)\n y = int(label.box.center_y - 0.5 * height)\n area = width * height\n annotations.append(dict(image_id=image_globeid,\n bbox=[x, y, width, height], area=area, category_id=label.type,\n object_id=label.id,\n tracking_difficulty_level=2 if label.tracking_difficulty_level == 2 else 1,\n detection_difficulty_level=2 if label.detection_difficulty_level == 2 else 1))\n\n with (segment_out_dir / 'annotations.json').open('w') as f:\n for i, anno in enumerate(annotations):\n anno['id'] = i #set as image frame ID\n json.dump(dict(images=images, annotations=annotations, categories=categories), f)",
"_____no_output_____"
],
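[
"# Added sketch: a small helper (not part of the original notebook) to sanity-check the COCO-style\n# annotation file produced by the extract_segment_* functions above. The out_dir argument is assumed\n# to be the same output directory that was passed to the extraction function.\ndef check_coco_annotations(out_dir):\n    with (Path(out_dir) / 'annotations.json').open() as f:\n        coco = json.load(f)\n    print('images:', len(coco['images']))\n    print('annotations:', len(coco['annotations']))\n    print('categories:', [c['name'] for c in coco['categories']])",
"_____no_output_____"
],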
[
"!rm -r /data/cmpe295-liu/WaymoExport",
"_____no_output_____"
],
[
"!rm -r /data/cmpe295-liu/WaymoExportAll/",
"_____no_output_____"
],
[
"!mkdir /data/cmpe295-liu/Waymo/WaymoCOCOsmall",
"_____no_output_____"
],
[
"!rm -r /data/cmpe295-liu/Waymo/WaymoCOCOsmall/Training",
"_____no_output_____"
],
[
"folderslist = [\"training_0031\",\"training_0030\",\"training_0029\",\"training_0028\",\"training_0027\",\"training_0026\"]\nPATH='/data/cmpe295-liu/Waymo'\nfor index in range(len(folderslist)):\n foldername=folderslist[index]\n print(foldername)\n tfrecord_files = glob(os.path.join(PATH, foldername, \"*.tfrecord\")) #[path for path in glob(os.path.join(PATH, foldername, \"*.tfrecord\"))]\n print(tfrecord_files)",
"training_0031\n['/data/cmpe295-liu/Waymo/training_0031/segment-9288629315134424745_4360_000_4380_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9295161125729168140_1270_000_1290_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9311322119128915594_5285_000_5305_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9320169289978396279_1040_000_1060_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9325580606626376787_4509_140_4529_140_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9334364225104959137_661_000_681_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9350921499281634194_2403_251_2423_251_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9385013624094020582_2547_650_2567_650_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9415086857375798767_4760_000_4780_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9465500459680839281_1100_000_1120_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9509506420470671704_4049_100_4069_100_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9521653920958139982_940_000_960_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9529958888589376527_640_000_660_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9547911055204230158_1567_950_1587_950_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9568394837328971633_466_365_486_365_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9653249092275997647_980_000_1000_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9654060644653474834_3905_000_3925_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9696413700515401320_1690_000_1710_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-972142630887801133_642_740_662_740_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9747453753779078631_940_000_960_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9758342966297863572_875_230_895_230_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9820553434532681355_2820_000_2840_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9907794657177651763_1126_570_1146_570_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-990914685337955114_980_000_1000_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9985243312780923024_3049_720_3069_720_with_camera_labels.tfrecord']\ntraining_0030\n['/data/cmpe295-liu/Waymo/training_0030/segment-8722413665055769182_2840_000_2860_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-8745106945249251942_1207_000_1227_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-8763126149209091146_1843_320_1863_320_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-8796914080594559459_4284_170_4304_170_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-8806931859563747931_1160_000_1180_000_with_camera_labels.tfrecord', 
'/data/cmpe295-liu/Waymo/training_0030/segment-8811210064692949185_3066_770_3086_770_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-8822503619482926605_1080_000_1100_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-8859409804103625626_2760_000_2780_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-8938046348067069210_3800_000_3820_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-8965112222692085704_4860_000_4880_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-898816942644052013_20_000_40_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9015546800913584551_4431_180_4451_180_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9016865488168499365_4780_000_4800_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9058545212382992974_5236_200_5256_200_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9062286840846668802_31_000_51_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9105380625923157726_4420_000_4440_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9110125340505914899_380_000_400_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9123867659877264673_3569_950_3589_950_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-912496333665446669_1680_000_1700_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-913274067754539885_913_000_933_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9142545919543484617_86_000_106_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-915935412356143375_1740_030_1760_030_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9175749307679169289_5933_260_5953_260_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9179922063516210200_157_000_177_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0030/segment-9250355398701464051_4166_132_4186_132_with_camera_labels.tfrecord']\ntraining_0029\n['/data/cmpe295-liu/Waymo/training_0029/segment-8099457465580871094_4764_380_4784_380_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8120716761799622510_862_120_882_120_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8123909110537564436_7220_000_7240_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8126606965364870152_985_090_1005_090_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8148053503558757176_4240_000_4260_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8158128948493708501_7477_230_7497_230_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8207498713503609786_3005_450_3025_450_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8222208340265444449_1400_000_1420_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8323028393459455521_2105_000_2125_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8327447186504415549_5200_000_5220_000_with_camera_labels.tfrecord', 
'/data/cmpe295-liu/Waymo/training_0029/segment-8345535260120974350_1980_000_2000_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8399876466981146110_2560_000_2580_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8424573439186068308_3460_000_3480_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8454755173123314088_3202_000_3222_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8487809726845917818_4779_870_4799_870_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8494653877777333091_540_000_560_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8513241054672631743_115_960_135_960_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8543158371164842559_4131_530_4151_530_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-857746300435138193_1869_000_1889_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8582923946352460474_2360_000_2380_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8603916601243187272_540_000_560_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8633296376655504176_514_000_534_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8659567063494726263_2480_000_2500_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8663006751916427679_1520_000_1540_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0029/segment-8700094808505895018_7272_488_7292_488_with_camera_labels.tfrecord']\ntraining_0028\n['/data/cmpe295-liu/Waymo/training_0028/segment-759208896257112298_184_000_204_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7643597152739318064_3979_000_3999_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7670103006580549715_360_000_380_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7727809428114700355_2960_000_2980_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7741361323303179462_1230_310_1250_310_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7761658966964621355_1000_000_1020_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7768517933263896280_1120_000_1140_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7799671367768576481_260_000_280_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7837172662136597262_1140_000_1160_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7850521592343484282_4576_090_4596_090_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7861168750216313148_1305_290_1325_290_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-786582060300383668_2944_060_2964_060_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7885161619764516373_289_280_309_280_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7890808800227629086_6162_700_6182_700_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7912728502266478772_1202_200_1222_200_with_camera_labels.tfrecord', 
'/data/cmpe295-liu/Waymo/training_0028/segment-7920326980177504058_2454_310_2474_310_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7921369793217703814_1060_000_1080_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7934693355186591404_73_000_93_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7940496892864900543_4783_540_4803_540_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7950869827763684964_8685_000_8705_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7996500550445322129_2333_304_2353_304_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-7999729608823422351_1483_600_1503_600_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-8031709558315183746_491_220_511_220_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-80599353855279550_2604_480_2624_480_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0028/segment-809159138284604331_3355_840_3375_840_with_camera_labels.tfrecord']\ntraining_0027\n['/data/cmpe295-liu/Waymo/training_0027/segment-7000927478052605119_1052_330_1072_330_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7007702792982559244_4400_000_4420_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7019385869759035132_4270_850_4290_850_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7038362761309539946_4207_130_4227_130_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7089765864827567005_1020_000_1040_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7101099554331311287_5320_000_5340_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7120839653809570957_1060_000_1080_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7187601925763611197_4384_300_4404_300_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7189996641300362130_3360_000_3380_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7239123081683545077_4044_370_4064_370_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7290499689576448085_3960_000_3980_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7313718849795510302_280_000_300_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7324192826315818756_620_000_640_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7331965392247645851_1005_940_1025_940_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7344536712079322768_1360_000_1380_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7373597180370847864_6020_000_6040_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-744006317457557752_2080_000_2100_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7440437175443450101_94_000_114_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7447927974619745860_820_000_840_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7458568461947999548_700_000_720_000_with_camera_labels.tfrecord', 
'/data/cmpe295-liu/Waymo/training_0027/segment-7466751345307077932_585_000_605_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7517545172000568481_2325_000_2345_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7543690094688232666_4945_350_4965_350_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7554208726220851641_380_000_400_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0027/segment-7566697458525030390_1440_000_1460_000_with_camera_labels.tfrecord']\ntraining_0026\n['/data/cmpe295-liu/Waymo/training_0026/segment-6390847454531723238_6000_000_6020_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6410495600874495447_5287_500_5307_500_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6417523992887712896_1180_000_1200_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6433401807220119698_4560_000_4580_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6456165750159303330_1770_080_1790_080_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6559997992780479765_1039_000_1059_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6561206763751799279_2348_600_2368_600_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6606076833441976341_1340_000_1360_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6625150143263637936_780_000_800_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6638427309837298695_220_000_240_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6674547510992884047_1560_000_1580_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6694593639447385226_1040_000_1060_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6722602826685649765_2280_000_2300_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6740694556948402155_3040_000_3060_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6742105013468660925_3645_000_3665_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6763005717101083473_3880_000_3900_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6771783338734577946_6105_840_6125_840_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6771922013310347577_4249_290_4269_290_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6791933003490312185_2607_000_2627_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6792191642931213648_1522_000_1542_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6799055159715949496_2503_000_2523_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6813611334239274394_535_000_555_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6814918034011049245_134_170_154_170_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6904827860701329567_960_000_980_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0026/segment-6935841224766931310_2770_310_2790_310_with_camera_labels.tfrecord']\n"
],
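[
"# Added sanity-check sketch (not part of the original run): count the segment files per folder\n# before extracting anything. Assumes PATH, folderslist, os and glob are already defined/imported\n# in the cells above.\ncounts = {f: len(glob(os.path.join(PATH, f, \"*.tfrecord\"))) for f in folderslist}\nfor f, n in counts.items():\n    print(f, n)\nprint(\"total segments:\", sum(counts.values()))",
"_____no_output_____"
],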
[
"len(folderslist)",
"_____no_output_____"
],
[
"folderslist[1]",
"_____no_output_____"
],
[
"foldername=\"training_0031\"\ntfrecord_files = glob(os.path.join(PATH, foldername, \"*.tfrecord\")) #[path for path in glob(os.path.join(PATH, foldername, \"*.tfrecord\"))]\nprint(tfrecord_files)",
"['/data/cmpe295-liu/Waymo/training_0031/segment-9288629315134424745_4360_000_4380_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9295161125729168140_1270_000_1290_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9311322119128915594_5285_000_5305_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9320169289978396279_1040_000_1060_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9325580606626376787_4509_140_4529_140_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9334364225104959137_661_000_681_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9350921499281634194_2403_251_2423_251_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9385013624094020582_2547_650_2567_650_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9415086857375798767_4760_000_4780_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9465500459680839281_1100_000_1120_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9509506420470671704_4049_100_4069_100_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9521653920958139982_940_000_960_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9529958888589376527_640_000_660_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9547911055204230158_1567_950_1587_950_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9568394837328971633_466_365_486_365_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9653249092275997647_980_000_1000_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9654060644653474834_3905_000_3925_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9696413700515401320_1690_000_1710_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-972142630887801133_642_740_662_740_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9747453753779078631_940_000_960_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9758342966297863572_875_230_895_230_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9820553434532681355_2820_000_2840_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9907794657177651763_1126_570_1146_570_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-990914685337955114_980_000_1000_000_with_camera_labels.tfrecord', '/data/cmpe295-liu/Waymo/training_0031/segment-9985243312780923024_3049_720_3069_720_with_camera_labels.tfrecord']\n"
],
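[
"# Added setup sketch: make sure the COCO output directory exists before running the extraction\n# in the next cell. `Path` (pathlib) is assumed to be imported earlier in this notebook, since it\n# is already used below.\nout_dir = Path('/data/cmpe295-liu/Waymo/WaymoCOCOsmall/Training')\nout_dir.mkdir(parents=True, exist_ok=True)",
"_____no_output_____"
],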
[
"PATH='/data/cmpe295-liu/Waymo'\nfolderslist = [\"training_0031\",\"training_0030\",\"training_0029\",\"training_0028\",\"training_0027\",\"training_0026\"]\n#folderslist = [\"training_0031\",\"training_0030\",\"training_0029\",\"training_0028\",\"training_0027\",\"training_0026\",\"training_0025\", \"training_0024\", \"training_0023\",\"training_0022\",\"training_0021\",\"training_0020\",\"training_0019\",\"training_0018\",\"training_0017\",\"training_0016\",\"training_0015\",\"training_0014\",\"training_0013\",\"training_0012\",\"training_0011\",\"training_0010\",\"training_0009\",\"training_0008\",\"training_0007\",\"training_0006\",\"training_0005\",\"training_0004\",\"training_0003\",\"training_0002\",\"training_0001\",\"training_0000\"]\ntfrecord_files = [path for x in folderslist for path in glob(os.path.join(PATH, x, \"*.tfrecord\"))]\nprint(len(tfrecord_files))#total number of tfrecord files\n\nout_dir='/data/cmpe295-liu/Waymo/WaymoCOCOsmall/Training'\nstep=5 #downsample\nout_dir = Path(out_dir)\n\nextract_segment_frontcamera(tfrecord_files, out_dir, step)",
"150\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9288629315134424745_4360_000_4380_000_with_camera_labels.tfrecord\nsegment-9288629315134424745_4360_000_4380_000_with_camera_labels.tfrecord\n.current image id: 1\n.current image id: 2\n.current image id: 3\n.current image id: 4\n.current image id: 5\n.current image id: 6\n.current image id: 7\n.current image id: 8\n.current image id: 9\n.current image id: 10\n.current image id: 11\n.current image id: 12\n.current image id: 13\n.current image id: 14\n.current image id: 15\n.current image id: 16\n.current image id: 17\n.current image id: 18\n.current image id: 19\n.current image id: 20\n.current image id: 21\n.current image id: 22\n.current image id: 23\n.current image id: 24\n.current image id: 25\n.current image id: 26\n.current image id: 27\n.current image id: 28\n.current image id: 29\n.current image id: 30\n.current image id: 31\n.current image id: 32\n.current image id: 33\n.current image id: 34\n.current image id: 35\n.current image id: 36\n.current image id: 37\n.current image id: 38\n.current image id: 39\n.current image id: 40\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9295161125729168140_1270_000_1290_000_with_camera_labels.tfrecord\nsegment-9295161125729168140_1270_000_1290_000_with_camera_labels.tfrecord\n.current image id: 41\n.current image id: 42\n.current image id: 43\n.current image id: 44\n.current image id: 45\n.current image id: 46\n.current image id: 47\n.current image id: 48\n.current image id: 49\n.current image id: 50\n.current image id: 51\n.current image id: 52\n.current image id: 53\n.current image id: 54\n.current image id: 55\n.current image id: 56\n.current image id: 57\n.current image id: 58\n.current image id: 59\n.current image id: 60\n.current image id: 61\n.current image id: 62\n.current image id: 63\n.current image id: 64\n.current image id: 65\n.current image id: 66\n.current image id: 67\n.current image id: 68\n.current image id: 69\n.current image id: 70\n.current image id: 71\n.current image id: 72\n.current image id: 73\n.current image id: 74\n.current image id: 75\n.current image id: 76\n.current image id: 77\n.current image id: 78\n.current image id: 79\n.current image id: 80\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9311322119128915594_5285_000_5305_000_with_camera_labels.tfrecord\nsegment-9311322119128915594_5285_000_5305_000_with_camera_labels.tfrecord\n.current image id: 81\n.current image id: 82\n.current image id: 83\n.current image id: 84\n.current image id: 85\n.current image id: 86\n.current image id: 87\n.current image id: 88\n.current image id: 89\n.current image id: 90\n.current image id: 91\n.current image id: 92\n.current image id: 93\n.current image id: 94\n.current image id: 95\n.current image id: 96\n.current image id: 97\n.current image id: 98\n.current image id: 99\n.current image id: 100\n.current image id: 101\n.current image id: 102\n.current image id: 103\n.current image id: 104\n.current image id: 105\n.current image id: 106\n.current image id: 107\n.current image id: 108\n.current image id: 109\n.current image id: 110\n.current image id: 111\n.current image id: 112\n.current image id: 113\n.current image id: 114\n.current image id: 115\n.current image id: 116\n.current image id: 117\n.current image id: 118\n.current image id: 119\n.current image id: 120\nextracting 
/data/cmpe295-liu/Waymo/training_0031/segment-9320169289978396279_1040_000_1060_000_with_camera_labels.tfrecord\nsegment-9320169289978396279_1040_000_1060_000_with_camera_labels.tfrecord\n.current image id: 121\n.current image id: 122\n.current image id: 123\n.current image id: 124\n.current image id: 125\n.current image id: 126\n.current image id: 127\n.current image id: 128\n.current image id: 129\n.current image id: 130\n.current image id: 131\n.current image id: 132\n.current image id: 133\n.current image id: 134\n.current image id: 135\n.current image id: 136\n.current image id: 137\n.current image id: 138\n.current image id: 139\n.current image id: 140\n.current image id: 141\n.current image id: 142\n.current image id: 143\n.current image id: 144\n.current image id: 145\n.current image id: 146\n.current image id: 147\n.current image id: 148\n.current image id: 149\n.current image id: 150\n.current image id: 151\n.current image id: 152\n.current image id: 153\n.current image id: 154\n.current image id: 155\n.current image id: 156\n.current image id: 157\n.current image id: 158\n.current image id: 159\n.current image id: 160\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9325580606626376787_4509_140_4529_140_with_camera_labels.tfrecord\nsegment-9325580606626376787_4509_140_4529_140_with_camera_labels.tfrecord\n.current image id: 161\n.current image id: 162\n.current image id: 163\n.current image id: 164\n.current image id: 165\n.current image id: 166\n.current image id: 167\n.current image id: 168\n.current image id: 169\n.current image id: 170\n.current image id: 171\n.current image id: 172\n.current image id: 173\n.current image id: 174\n.current image id: 175\n.current image id: 176\n.current image id: 177\n.current image id: 178\n.current image id: 179\n.current image id: 180\n.current image id: 181\n.current image id: 182\n.current image id: 183\n.current image id: 184\n.current image id: 185\n.current image id: 186\n.current image id: 187\n.current image id: 188\n.current image id: 189\n.current image id: 190\n.current image id: 191\n.current image id: 192\n.current image id: 193\n.current image id: 194\n.current image id: 195\n.current image id: 196\n.current image id: 197\n.current image id: 198\n.current image id: 199\n.current image id: 200\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9334364225104959137_661_000_681_000_with_camera_labels.tfrecord\nsegment-9334364225104959137_661_000_681_000_with_camera_labels.tfrecord\n.current image id: 201\n.current image id: 202\n.current image id: 203\n.current image id: 204\n.current image id: 205\n.current image id: 206\n.current image id: 207\n.current image id: 208\n.current image id: 209\n.current image id: 210\n.current image id: 211\n.current image id: 212\n.current image id: 213\n.current image id: 214\n.current image id: 215\n.current image id: 216\n.current image id: 217\n.current image id: 218\n.current image id: 219\n.current image id: 220\n.current image id: 221\n.current image id: 222\n.current image id: 223\n.current image id: 224\n.current image id: 225\n.current image id: 226\n.current image id: 227\n.current image id: 228\n.current image id: 229\n.current image id: 230\n.current image id: 231\n.current image id: 232\n.current image id: 233\n.current image id: 234\n.current image id: 235\n.current image id: 236\n.current image id: 237\n.current image id: 238\n.current image id: 239\n.current image id: 240\nextracting 
/data/cmpe295-liu/Waymo/training_0031/segment-9350921499281634194_2403_251_2423_251_with_camera_labels.tfrecord\nsegment-9350921499281634194_2403_251_2423_251_with_camera_labels.tfrecord\n.current image id: 241\n.current image id: 242\n.current image id: 243\n.current image id: 244\n.current image id: 245\n.current image id: 246\n.current image id: 247\n.current image id: 248\n.current image id: 249\n.current image id: 250\n.current image id: 251\n.current image id: 252\n.current image id: 253\n.current image id: 254\n.current image id: 255\n.current image id: 256\n.current image id: 257\n.current image id: 258\n.current image id: 259\n.current image id: 260\n.current image id: 261\n.current image id: 262\n.current image id: 263\n.current image id: 264\n.current image id: 265\n.current image id: 266\n.current image id: 267\n.current image id: 268\n.current image id: 269\n.current image id: 270\n.current image id: 271\n.current image id: 272\n.current image id: 273\n.current image id: 274\n.current image id: 275\n.current image id: 276\n.current image id: 277\n.current image id: 278\n.current image id: 279\n.current image id: 280\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9385013624094020582_2547_650_2567_650_with_camera_labels.tfrecord\nsegment-9385013624094020582_2547_650_2567_650_with_camera_labels.tfrecord\n.current image id: 281\n.current image id: 282\n.current image id: 283\n.current image id: 284\n.current image id: 285\n.current image id: 286\n.current image id: 287\n.current image id: 288\n.current image id: 289\n.current image id: 290\n.current image id: 291\n.current image id: 292\n.current image id: 293\n.current image id: 294\n.current image id: 295\n.current image id: 296\n.current image id: 297\n.current image id: 298\n.current image id: 299\n.current image id: 300\n.current image id: 301\n.current image id: 302\n.current image id: 303\n.current image id: 304\n.current image id: 305\n.current image id: 306\n.current image id: 307\n.current image id: 308\n.current image id: 309\n.current image id: 310\n.current image id: 311\n.current image id: 312\n.current image id: 313\n.current image id: 314\n.current image id: 315\n.current image id: 316\n.current image id: 317\n.current image id: 318\n.current image id: 319\n.current image id: 320\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9415086857375798767_4760_000_4780_000_with_camera_labels.tfrecord\nsegment-9415086857375798767_4760_000_4780_000_with_camera_labels.tfrecord\n.current image id: 321\n.current image id: 322\n.current image id: 323\n.current image id: 324\n.current image id: 325\n.current image id: 326\n.current image id: 327\n.current image id: 328\n.current image id: 329\n.current image id: 330\n.current image id: 331\n.current image id: 332\n.current image id: 333\n.current image id: 334\n.current image id: 335\n.current image id: 336\n.current image id: 337\n.current image id: 338\n.current image id: 339\n.current image id: 340\n.current image id: 341\n.current image id: 342\n.current image id: 343\n.current image id: 344\n.current image id: 345\n.current image id: 346\n.current image id: 347\n.current image id: 348\n.current image id: 349\n.current image id: 350\n.current image id: 351\n.current image id: 352\n.current image id: 353\n.current image id: 354\n.current image id: 355\n.current image id: 356\n.current image id: 357\n.current image id: 358\n.current image id: 359\n.current image id: 360\nextracting 
/data/cmpe295-liu/Waymo/training_0031/segment-9465500459680839281_1100_000_1120_000_with_camera_labels.tfrecord\nsegment-9465500459680839281_1100_000_1120_000_with_camera_labels.tfrecord\n.current image id: 361\n.current image id: 362\n.current image id: 363\n.current image id: 364\n.current image id: 365\n.current image id: 366\n.current image id: 367\n.current image id: 368\n.current image id: 369\n.current image id: 370\n.current image id: 371\n.current image id: 372\n.current image id: 373\n.current image id: 374\n.current image id: 375\n.current image id: 376\n.current image id: 377\n.current image id: 378\n.current image id: 379\n.current image id: 380\n.current image id: 381\n.current image id: 382\n.current image id: 383\n.current image id: 384\n.current image id: 385\n.current image id: 386\n.current image id: 387\n.current image id: 388\n.current image id: 389\n.current image id: 390\n.current image id: 391\n.current image id: 392\n.current image id: 393\n.current image id: 394\n.current image id: 395\n.current image id: 396\n.current image id: 397\n.current image id: 398\n.current image id: 399\n.current image id: 400\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9509506420470671704_4049_100_4069_100_with_camera_labels.tfrecord\nsegment-9509506420470671704_4049_100_4069_100_with_camera_labels.tfrecord\n.current image id: 401\n.current image id: 402\n.current image id: 403\n.current image id: 404\n.current image id: 405\n.current image id: 406\n.current image id: 407\n.current image id: 408\n.current image id: 409\n.current image id: 410\n.current image id: 411\n.current image id: 412\n.current image id: 413\n.current image id: 414\n.current image id: 415\n.current image id: 416\n.current image id: 417\n.current image id: 418\n.current image id: 419\n.current image id: 420\n.current image id: 421\n.current image id: 422\n.current image id: 423\n.current image id: 424\n.current image id: 425\n.current image id: 426\n.current image id: 427\n.current image id: 428\n.current image id: 429\n.current image id: 430\n.current image id: 431\n.current image id: 432\n.current image id: 433\n.current image id: 434\n.current image id: 435\n.current image id: 436\n.current image id: 437\n.current image id: 438\n.current image id: 439\n.current image id: 440\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9521653920958139982_940_000_960_000_with_camera_labels.tfrecord\nsegment-9521653920958139982_940_000_960_000_with_camera_labels.tfrecord\n.current image id: 441\n.current image id: 442\n.current image id: 443\n.current image id: 444\n.current image id: 445\n.current image id: 446\n.current image id: 447\n.current image id: 448\n.current image id: 449\n.current image id: 450\n.current image id: 451\n.current image id: 452\n.current image id: 453\n.current image id: 454\n.current image id: 455\n.current image id: 456\n.current image id: 457\n.current image id: 458\n.current image id: 459\n.current image id: 460\n.current image id: 461\n.current image id: 462\n.current image id: 463\n.current image id: 464\n.current image id: 465\n.current image id: 466\n.current image id: 467\n.current image id: 468\n.current image id: 469\n.current image id: 470\n.current image id: 471\n.current image id: 472\n.current image id: 473\n.current image id: 474\n.current image id: 475\n.current image id: 476\n.current image id: 477\n.current image id: 478\n.current image id: 479\n.current image id: 480\nextracting 
/data/cmpe295-liu/Waymo/training_0031/segment-9529958888589376527_640_000_660_000_with_camera_labels.tfrecord\nsegment-9529958888589376527_640_000_660_000_with_camera_labels.tfrecord\n.current image id: 481\n.current image id: 482\n.current image id: 483\n.current image id: 484\n.current image id: 485\n.current image id: 486\n.current image id: 487\n.current image id: 488\n.current image id: 489\n.current image id: 490\n.current image id: 491\n.current image id: 492\n.current image id: 493\n.current image id: 494\n.current image id: 495\n.current image id: 496\n.current image id: 497\n.current image id: 498\n.current image id: 499\n.current image id: 500\n.current image id: 501\n.current image id: 502\n.current image id: 503\n.current image id: 504\n.current image id: 505\n.current image id: 506\n.current image id: 507\n.current image id: 508\n.current image id: 509\n.current image id: 510\n.current image id: 511\n.current image id: 512\n.current image id: 513\n.current image id: 514\n.current image id: 515\n.current image id: 516\n.current image id: 517\n.current image id: 518\n.current image id: 519\n.current image id: 520\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9547911055204230158_1567_950_1587_950_with_camera_labels.tfrecord\nsegment-9547911055204230158_1567_950_1587_950_with_camera_labels.tfrecord\n.current image id: 521\n.current image id: 522\n.current image id: 523\n.current image id: 524\n.current image id: 525\n.current image id: 526\n.current image id: 527\n.current image id: 528\n.current image id: 529\n.current image id: 530\n.current image id: 531\n.current image id: 532\n.current image id: 533\n.current image id: 534\n.current image id: 535\n.current image id: 536\n.current image id: 537\n.current image id: 538\n.current image id: 539\n.current image id: 540\n.current image id: 541\n.current image id: 542\n.current image id: 543\n.current image id: 544\n.current image id: 545\n.current image id: 546\n.current image id: 547\n.current image id: 548\n.current image id: 549\n.current image id: 550\n.current image id: 551\n.current image id: 552\n.current image id: 553\n.current image id: 554\n.current image id: 555\n.current image id: 556\n.current image id: 557\n.current image id: 558\n.current image id: 559\n.current image id: 560\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9568394837328971633_466_365_486_365_with_camera_labels.tfrecord\nsegment-9568394837328971633_466_365_486_365_with_camera_labels.tfrecord\n.current image id: 561\n.current image id: 562\n.current image id: 563\n.current image id: 564\n.current image id: 565\n.current image id: 566\n.current image id: 567\n.current image id: 568\n.current image id: 569\n.current image id: 570\n.current image id: 571\n.current image id: 572\n.current image id: 573\n.current image id: 574\n.current image id: 575\n.current image id: 576\n.current image id: 577\n.current image id: 578\n.current image id: 579\n.current image id: 580\n.current image id: 581\n.current image id: 582\n.current image id: 583\n.current image id: 584\n.current image id: 585\n.current image id: 586\n.current image id: 587\n.current image id: 588\n.current image id: 589\n.current image id: 590\n.current image id: 591\n.current image id: 592\n.current image id: 593\n.current image id: 594\n.current image id: 595\n.current image id: 596\n.current image id: 597\n.current image id: 598\n.current image id: 599\n.current image id: 600\nextracting 
/data/cmpe295-liu/Waymo/training_0031/segment-9653249092275997647_980_000_1000_000_with_camera_labels.tfrecord\nsegment-9653249092275997647_980_000_1000_000_with_camera_labels.tfrecord\n.current image id: 601\n.current image id: 602\n.current image id: 603\n.current image id: 604\n.current image id: 605\n.current image id: 606\n.current image id: 607\n.current image id: 608\n.current image id: 609\n.current image id: 610\n.current image id: 611\n.current image id: 612\n.current image id: 613\n.current image id: 614\n.current image id: 615\n.current image id: 616\n.current image id: 617\n.current image id: 618\n.current image id: 619\n.current image id: 620\n.current image id: 621\n.current image id: 622\n.current image id: 623\n.current image id: 624\n.current image id: 625\n.current image id: 626\n.current image id: 627\n.current image id: 628\n.current image id: 629\n.current image id: 630\n.current image id: 631\n.current image id: 632\n.current image id: 633\n.current image id: 634\n.current image id: 635\n.current image id: 636\n.current image id: 637\n.current image id: 638\n.current image id: 639\n.current image id: 640\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9654060644653474834_3905_000_3925_000_with_camera_labels.tfrecord\nsegment-9654060644653474834_3905_000_3925_000_with_camera_labels.tfrecord\n.current image id: 641\n.current image id: 642\n.current image id: 643\n.current image id: 644\n.current image id: 645\n.current image id: 646\n.current image id: 647\n.current image id: 648\n.current image id: 649\n.current image id: 650\n.current image id: 651\n.current image id: 652\n.current image id: 653\n.current image id: 654\n.current image id: 655\n.current image id: 656\n.current image id: 657\n.current image id: 658\n.current image id: 659\n.current image id: 660\n.current image id: 661\n.current image id: 662\n.current image id: 663\n.current image id: 664\n.current image id: 665\n.current image id: 666\n.current image id: 667\n.current image id: 668\n.current image id: 669\n.current image id: 670\n.current image id: 671\n.current image id: 672\n.current image id: 673\n.current image id: 674\n.current image id: 675\n.current image id: 676\n.current image id: 677\n.current image id: 678\n.current image id: 679\n.current image id: 680\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9696413700515401320_1690_000_1710_000_with_camera_labels.tfrecord\nsegment-9696413700515401320_1690_000_1710_000_with_camera_labels.tfrecord\n.current image id: 681\n.current image id: 682\n.current image id: 683\n.current image id: 684\n.current image id: 685\n.current image id: 686\n.current image id: 687\n.current image id: 688\n.current image id: 689\n.current image id: 690\n.current image id: 691\n.current image id: 692\n.current image id: 693\n.current image id: 694\n.current image id: 695\n.current image id: 696\n.current image id: 697\n.current image id: 698\n.current image id: 699\n.current image id: 700\n.current image id: 701\n.current image id: 702\n.current image id: 703\n.current image id: 704\n.current image id: 705\n.current image id: 706\n.current image id: 707\n.current image id: 708\n.current image id: 709\n.current image id: 710\n.current image id: 711\n.current image id: 712\n.current image id: 713\n.current image id: 714\n.current image id: 715\n.current image id: 716\n.current image id: 717\n.current image id: 718\n.current image id: 719\n.current image id: 720\nextracting 
/data/cmpe295-liu/Waymo/training_0031/segment-972142630887801133_642_740_662_740_with_camera_labels.tfrecord\nsegment-972142630887801133_642_740_662_740_with_camera_labels.tfrecord\n.current image id: 721\n.current image id: 722\n.current image id: 723\n.current image id: 724\n.current image id: 725\n.current image id: 726\n.current image id: 727\n.current image id: 728\n.current image id: 729\n.current image id: 730\n.current image id: 731\n.current image id: 732\n.current image id: 733\n.current image id: 734\n.current image id: 735\n.current image id: 736\n.current image id: 737\n.current image id: 738\n.current image id: 739\n.current image id: 740\n.current image id: 741\n.current image id: 742\n.current image id: 743\n.current image id: 744\n.current image id: 745\n.current image id: 746\n.current image id: 747\n.current image id: 748\n.current image id: 749\n.current image id: 750\n.current image id: 751\n.current image id: 752\n.current image id: 753\n.current image id: 754\n.current image id: 755\n.current image id: 756\n.current image id: 757\n.current image id: 758\n.current image id: 759\n.current image id: 760\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9747453753779078631_940_000_960_000_with_camera_labels.tfrecord\nsegment-9747453753779078631_940_000_960_000_with_camera_labels.tfrecord\n.current image id: 761\n.current image id: 762\n.current image id: 763\n.current image id: 764\n.current image id: 765\n.current image id: 766\n.current image id: 767\n.current image id: 768\n.current image id: 769\n.current image id: 770\n.current image id: 771\n.current image id: 772\n.current image id: 773\n.current image id: 774\n.current image id: 775\n.current image id: 776\n.current image id: 777\n.current image id: 778\n.current image id: 779\n.current image id: 780\n.current image id: 781\n.current image id: 782\n.current image id: 783\n.current image id: 784\n.current image id: 785\n.current image id: 786\n.current image id: 787\n.current image id: 788\n.current image id: 789\n.current image id: 790\n.current image id: 791\n.current image id: 792\n.current image id: 793\n.current image id: 794\n.current image id: 795\n.current image id: 796\n.current image id: 797\n.current image id: 798\n.current image id: 799\n.current image id: 800\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9758342966297863572_875_230_895_230_with_camera_labels.tfrecord\nsegment-9758342966297863572_875_230_895_230_with_camera_labels.tfrecord\n.current image id: 801\n.current image id: 802\n.current image id: 803\n.current image id: 804\n.current image id: 805\n.current image id: 806\n.current image id: 807\n.current image id: 808\n.current image id: 809\n.current image id: 810\n.current image id: 811\n.current image id: 812\n.current image id: 813\n.current image id: 814\n.current image id: 815\n.current image id: 816\n.current image id: 817\n.current image id: 818\n.current image id: 819\n.current image id: 820\n.current image id: 821\n.current image id: 822\n.current image id: 823\n.current image id: 824\n.current image id: 825\n.current image id: 826\n.current image id: 827\n.current image id: 828\n.current image id: 829\n.current image id: 830\n.current image id: 831\n.current image id: 832\n.current image id: 833\n.current image id: 834\n.current image id: 835\n.current image id: 836\n.current image id: 837\n.current image id: 838\n.current image id: 839\n.current image id: 840\nextracting 
/data/cmpe295-liu/Waymo/training_0031/segment-9820553434532681355_2820_000_2840_000_with_camera_labels.tfrecord\nsegment-9820553434532681355_2820_000_2840_000_with_camera_labels.tfrecord\n.current image id: 841\n.current image id: 842\n.current image id: 843\n.current image id: 844\n.current image id: 845\n.current image id: 846\n.current image id: 847\n.current image id: 848\n.current image id: 849\n.current image id: 850\n.current image id: 851\n.current image id: 852\n.current image id: 853\n.current image id: 854\n.current image id: 855\n.current image id: 856\n.current image id: 857\n.current image id: 858\n.current image id: 859\n.current image id: 860\n.current image id: 861\n.current image id: 862\n.current image id: 863\n.current image id: 864\n.current image id: 865\n.current image id: 866\n.current image id: 867\n.current image id: 868\n.current image id: 869\n.current image id: 870\n.current image id: 871\n.current image id: 872\n.current image id: 873\n.current image id: 874\n.current image id: 875\n.current image id: 876\n.current image id: 877\n.current image id: 878\n.current image id: 879\n.current image id: 880\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9907794657177651763_1126_570_1146_570_with_camera_labels.tfrecord\nsegment-9907794657177651763_1126_570_1146_570_with_camera_labels.tfrecord\n.current image id: 881\n.current image id: 882\n.current image id: 883\n.current image id: 884\n.current image id: 885\n.current image id: 886\n.current image id: 887\n.current image id: 888\n.current image id: 889\n.current image id: 890\n.current image id: 891\n.current image id: 892\n.current image id: 893\n.current image id: 894\n.current image id: 895\n.current image id: 896\n.current image id: 897\n.current image id: 898\n.current image id: 899\n.current image id: 900\n.current image id: 901\n.current image id: 902\n.current image id: 903\n.current image id: 904\n.current image id: 905\n.current image id: 906\n.current image id: 907\n.current image id: 908\n.current image id: 909\n.current image id: 910\n.current image id: 911\n.current image id: 912\n.current image id: 913\n.current image id: 914\n.current image id: 915\n.current image id: 916\n.current image id: 917\n.current image id: 918\n.current image id: 919\n.current image id: 920\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-990914685337955114_980_000_1000_000_with_camera_labels.tfrecord\nsegment-990914685337955114_980_000_1000_000_with_camera_labels.tfrecord\n.current image id: 921\n.current image id: 922\n.current image id: 923\n.current image id: 924\n.current image id: 925\n.current image id: 926\n.current image id: 927\n.current image id: 928\n.current image id: 929\n.current image id: 930\n.current image id: 931\n.current image id: 932\n.current image id: 933\n.current image id: 934\n.current image id: 935\n.current image id: 936\n.current image id: 937\n.current image id: 938\n.current image id: 939\n.current image id: 940\n.current image id: 941\n.current image id: 942\n.current image id: 943\n.current image id: 944\n.current image id: 945\n.current image id: 946\n.current image id: 947\n.current image id: 948\n.current image id: 949\n.current image id: 950\n.current image id: 951\n.current image id: 952\n.current image id: 953\n.current image id: 954\n.current image id: 955\n.current image id: 956\n.current image id: 957\n.current image id: 958\n.current image id: 959\n.current image id: 960\nextracting 
/data/cmpe295-liu/Waymo/training_0031/segment-9985243312780923024_3049_720_3069_720_with_camera_labels.tfrecord\nsegment-9985243312780923024_3049_720_3069_720_with_camera_labels.tfrecord\n.current image id: 961\n.current image id: 962\n.current image id: 963\n.current image id: 964\n.current image id: 965\n.current image id: 966\n.current image id: 967\n.current image id: 968\n.current image id: 969\n.current image id: 970\n.current image id: 971\n.current image id: 972\n.current image id: 973\n.current image id: 974\n.current image id: 975\n.current image id: 976\n.current image id: 977\n.current image id: 978\n.current image id: 979\n.current image id: 980\n.current image id: 981\n.current image id: 982\n.current image id: 983\n.current image id: 984\n.current image id: 985\n.current image id: 986\n.current image id: 987\n.current image id: 988\n.current image id: 989\n.current image id: 990\n.current image id: 991\n.current image id: 992\n.current image id: 993\n.current image id: 994\n.current image id: 995\n.current image id: 996\n.current image id: 997\n.current image id: 998\n.current image id: 999\n.current image id: 1000\nextracting /data/cmpe295-liu/Waymo/training_0030/segment-8722413665055769182_2840_000_2860_000_with_camera_labels.tfrecord\nsegment-8722413665055769182_2840_000_2860_000_with_camera_labels.tfrecord\n.current image id: 1001\n.current image id: 1002\n.current image id: 1003\n.current image id: 1004\n.current image id: 1005\n.current image id: 1006\n.current image id: 1007\n.current image id: 1008\n.current image id: 1009\n.current image id: 1010\n.current image id: 1011\n.current image id: 1012\n.current image id: 1013\n.current image id: 1014\n.current image id: 1015\n.current image id: 1016\n.current image id: 1017\n.current image id: 1018\n.current image id: 1019\n.current image id: 1020\n.current image id: 1021\n.current image id: 1022\n.current image id: 1023\n.current image id: 1024\n.current image id: 1025\n.current image id: 1026\n.current image id: 1027\n.current image id: 1028\n.current image id: 1029\n.current image id: 1030\n.current image id: 1031\n.current image id: 1032\n.current image id: 1033\n.current image id: 1034\n.current image id: 1035\n.current image id: 1036\n.current image id: 1037\n.current image id: 1038\n.current image id: 1039\n.current image id: 1040\nextracting /data/cmpe295-liu/Waymo/training_0030/segment-8745106945249251942_1207_000_1227_000_with_camera_labels.tfrecord\nsegment-8745106945249251942_1207_000_1227_000_with_camera_labels.tfrecord\n.current image id: 1041\n.current image id: 1042\n.current image id: 1043\n.current image id: 1044\n.current image id: 1045\n.current image id: 1046\n.current image id: 1047\n.current image id: 1048\n.current image id: 1049\n.current image id: 1050\n.current image id: 1051\n.current image id: 1052\n.current image id: 1053\n.current image id: 1054\n.current image id: 1055\n.current image id: 1056\n.current image id: 1057\n.current image id: 1058\n.current image id: 1059\n.current image id: 1060\n.current image id: 1061\n.current image id: 1062\n.current image id: 1063\n.current image id: 1064\n.current image id: 1065\n.current image id: 1066\n.current image id: 1067\n.current image id: 1068\n.current image id: 1069\n.current image id: 1070\n.current image id: 1071\n.current image id: 1072\n.current image id: 1073\n.current image id: 1074\n.current image id: 1075\n.current image id: 1076\n.current image id: 1077\n.current image id: 1078\n.current image id: 1079\n.current image id: 
1080\nextracting /data/cmpe295-liu/Waymo/training_0030/segment-8763126149209091146_1843_320_1863_320_with_camera_labels.tfrecord\nsegment-8763126149209091146_1843_320_1863_320_with_camera_labels.tfrecord\n.current image id: 1081\n.current image id: 1082\n.current image id: 1083\n.current image id: 1084\n.current image id: 1085\n.current image id: 1086\n.current image id: 1087\n.current image id: 1088\n.current image id: 1089\n.current image id: 1090\n.current image id: 1091\n.current image id: 1092\n.current image id: 1093\n.current image id: 1094\n.current image id: 1095\n.current image id: 1096\n.current image id: 1097\n.current image id: 1098\n.current image id: 1099\n.current image id: 1100\n.current image id: 1101\n.current image id: 1102\n.current image id: 1103\n.current image id: 1104\n.current image id: 1105\n.current image id: 1106\n.current image id: 1107\n.current image id: 1108\n.current image id: 1109\n.current image id: 1110\n.current image id: 1111\n.current image id: 1112\n.current image id: 1113\n.current image id: 1114\n.current image id: 1115\n.current image id: 1116\n.current image id: 1117\n.current image id: 1118\n.current image id: 1119\n.current image id: 1120\nextracting /data/cmpe295-liu/Waymo/training_0030/segment-8796914080594559459_4284_170_4304_170_with_camera_labels.tfrecord\nsegment-8796914080594559459_4284_170_4304_170_with_camera_labels.tfrecord\n.current image id: 1121\n.current image id: 1122\n.current image id: 1123\n.current image id: 1124\n.current image id: 1125\n.current image id: 1126\n.current image id: 1127\n.current image id: 1128\n.current image id: 1129\n.current image id: 1130\n.current image id: 1131\n.current image id: 1132\n.current image id: 1133\n.current image id: 1134\n.current image id: 1135\n.current image id: 1136\n.current image id: 1137\n.current image id: 1138\n.current image id: 1139\n.current image id: 1140\n.current image id: 1141\n.current image id: 1142\n.current image id: 1143\n.current image id: 1144\n.current image id: 1145\n.current image id: 1146\n.current image id: 1147\n.current image id: 1148\n.current image id: 1149\n.current image id: 1150\n.current image id: 1151\n.current image id: 1152\n.current image id: 1153\n.current image id: 1154\n.current image id: 1155\n.current image id: 1156\n.current image id: 1157\n.current image id: 1158\n.current image id: 1159\n.current image id: 1160\nextracting /data/cmpe295-liu/Waymo/training_0030/segment-8806931859563747931_1160_000_1180_000_with_camera_labels.tfrecord\nsegment-8806931859563747931_1160_000_1180_000_with_camera_labels.tfrecord\n.current image id: 1161\n.current image id: 1162\n.current image id: 1163\n.current image id: 1164\n.current image id: 1165\n.current image id: 1166\n.current image id: 1167\n.current image id: 1168\n.current image id: 1169\n.current image id: 1170\n.current image id: 1171\n.current image id: 1172\n.current image id: 1173\n.current image id: 1174\n.current image id: 1175\n.current image id: 1176\n.current image id: 1177\n.current image id: 1178\n.current image id: 1179\n.current image id: 1180\n.current image id: 1181\n.current image id: 1182\n.current image id: 1183\n.current image id: 1184\n.current image id: 1185\n.current image id: 1186\n.current image id: 1187\n.current image id: 1188\n.current image id: 1189\n.current image id: 1190\n.current image id: 1191\n.current image id: 1192\n.current image id: 1193\n.current image id: 1194\n.current image id: 1195\n.current image id: 1196\n.current image id: 1197\n.current image id: 
[Waymo extraction log, condensed: for each tfrecord under /data/cmpe295-liu/Waymo/training_0030, training_0029, and training_0028 the output prints "extracting <path>/<segment>_with_camera_labels.tfrecord", then the segment file name, then one ".current image id: N" line per saved camera frame (roughly 40 frames per segment); this portion of the log covers image ids 1198 through 3680.]
image id: 3681\n.current image id: 3682\n.current image id: 3683\n.current image id: 3684\n.current image id: 3685\n.current image id: 3686\n.current image id: 3687\n.current image id: 3688\n.current image id: 3689\n.current image id: 3690\n.current image id: 3691\n.current image id: 3692\n.current image id: 3693\n.current image id: 3694\n.current image id: 3695\n.current image id: 3696\n.current image id: 3697\n.current image id: 3698\n.current image id: 3699\n.current image id: 3700\n.current image id: 3701\n.current image id: 3702\n.current image id: 3703\n.current image id: 3704\n.current image id: 3705\n.current image id: 3706\n.current image id: 3707\n.current image id: 3708\n.current image id: 3709\n.current image id: 3710\n.current image id: 3711\n.current image id: 3712\n.current image id: 3713\n.current image id: 3714\n.current image id: 3715\nextracting /data/cmpe295-liu/Waymo/training_0028/segment-7940496892864900543_4783_540_4803_540_with_camera_labels.tfrecord\nsegment-7940496892864900543_4783_540_4803_540_with_camera_labels.tfrecord\n.current image id: 3716\n.current image id: 3717\n.current image id: 3718\n.current image id: 3719\n.current image id: 3720\n.current image id: 3721\n.current image id: 3722\n.current image id: 3723\n.current image id: 3724\n.current image id: 3725\n.current image id: 3726\n.current image id: 3727\n.current image id: 3728\n.current image id: 3729\n.current image id: 3730\n.current image id: 3731\n.current image id: 3732\n.current image id: 3733\n.current image id: 3734\n.current image id: 3735\n.current image id: 3736\n.current image id: 3737\n.current image id: 3738\n.current image id: 3739\n.current image id: 3740\n.current image id: 3741\n.current image id: 3742\n.current image id: 3743\n.current image id: 3744\n.current image id: 3745\n.current image id: 3746\n.current image id: 3747\n.current image id: 3748\n.current image id: 3749\n.current image id: 3750\n.current image id: 3751\n.current image id: 3752\n.current image id: 3753\n.current image id: 3754\n.current image id: 3755\nextracting /data/cmpe295-liu/Waymo/training_0028/segment-7950869827763684964_8685_000_8705_000_with_camera_labels.tfrecord\nsegment-7950869827763684964_8685_000_8705_000_with_camera_labels.tfrecord\n.current image id: 3756\n.current image id: 3757\n.current image id: 3758\n.current image id: 3759\n.current image id: 3760\n.current image id: 3761\n.current image id: 3762\n.current image id: 3763\n.current image id: 3764\n.current image id: 3765\n.current image id: 3766\n.current image id: 3767\n.current image id: 3768\n.current image id: 3769\n.current image id: 3770\n.current image id: 3771\n.current image id: 3772\n.current image id: 3773\n.current image id: 3774\n.current image id: 3775\n.current image id: 3776\n.current image id: 3777\n.current image id: 3778\n.current image id: 3779\n.current image id: 3780\n.current image id: 3781\n.current image id: 3782\n.current image id: 3783\n.current image id: 3784\n.current image id: 3785\n.current image id: 3786\n.current image id: 3787\n.current image id: 3788\n.current image id: 3789\n.current image id: 3790\n.current image id: 3791\n.current image id: 3792\n.current image id: 3793\n.current image id: 3794\n.current image id: 3795\nextracting /data/cmpe295-liu/Waymo/training_0028/segment-7996500550445322129_2333_304_2353_304_with_camera_labels.tfrecord\nsegment-7996500550445322129_2333_304_2353_304_with_camera_labels.tfrecord\n.current image id: 3796\n.current image id: 3797\n.current image id: 3798\n.current image 
id: 3799\n.current image id: 3800\n.current image id: 3801\n.current image id: 3802\n.current image id: 3803\n.current image id: 3804\n.current image id: 3805\n.current image id: 3806\n.current image id: 3807\n.current image id: 3808\n.current image id: 3809\n.current image id: 3810\n.current image id: 3811\n.current image id: 3812\n.current image id: 3813\n.current image id: 3814\n.current image id: 3815\n.current image id: 3816\n.current image id: 3817\n.current image id: 3818\n.current image id: 3819\n.current image id: 3820\n.current image id: 3821\n.current image id: 3822\n.current image id: 3823\n.current image id: 3824\n.current image id: 3825\n.current image id: 3826\n.current image id: 3827\n.current image id: 3828\n.current image id: 3829\n.current image id: 3830\n.current image id: 3831\n.current image id: 3832\n.current image id: 3833\n.current image id: 3834\n.current image id: 3835\nextracting /data/cmpe295-liu/Waymo/training_0028/segment-7999729608823422351_1483_600_1503_600_with_camera_labels.tfrecord\nsegment-7999729608823422351_1483_600_1503_600_with_camera_labels.tfrecord\n.current image id: 3836\n.current image id: 3837\n.current image id: 3838\n.current image id: 3839\n.current image id: 3840\n.current image id: 3841\n.current image id: 3842\n.current image id: 3843\n.current image id: 3844\n.current image id: 3845\n.current image id: 3846\n.current image id: 3847\n.current image id: 3848\n.current image id: 3849\n.current image id: 3850\n.current image id: 3851\n.current image id: 3852\n.current image id: 3853\n.current image id: 3854\n.current image id: 3855\n.current image id: 3856\n.current image id: 3857\n.current image id: 3858\n.current image id: 3859\n.current image id: 3860\n.current image id: 3861\n.current image id: 3862\n.current image id: 3863\n.current image id: 3864\n.current image id: 3865\n.current image id: 3866\n.current image id: 3867\n.current image id: 3868\n.current image id: 3869\n.current image id: 3870\n.current image id: 3871\n.current image id: 3872\n.current image id: 3873\n.current image id: 3874\n.current image id: 3875\nextracting /data/cmpe295-liu/Waymo/training_0028/segment-8031709558315183746_491_220_511_220_with_camera_labels.tfrecord\nsegment-8031709558315183746_491_220_511_220_with_camera_labels.tfrecord\n.current image id: 3876\n.current image id: 3877\n.current image id: 3878\n.current image id: 3879\n.current image id: 3880\n.current image id: 3881\n.current image id: 3882\n.current image id: 3883\n.current image id: 3884\n.current image id: 3885\n.current image id: 3886\n.current image id: 3887\n.current image id: 3888\n.current image id: 3889\n.current image id: 3890\n.current image id: 3891\n.current image id: 3892\n.current image id: 3893\n.current image id: 3894\n.current image id: 3895\n.current image id: 3896\n.current image id: 3897\n.current image id: 3898\n.current image id: 3899\n.current image id: 3900\n.current image id: 3901\n.current image id: 3902\n.current image id: 3903\n.current image id: 3904\n.current image id: 3905\n.current image id: 3906\n.current image id: 3907\n.current image id: 3908\n.current image id: 3909\n.current image id: 3910\n.current image id: 3911\n.current image id: 3912\n.current image id: 3913\n.current image id: 3914\n.current image id: 3915\nextracting /data/cmpe295-liu/Waymo/training_0028/segment-80599353855279550_2604_480_2624_480_with_camera_labels.tfrecord\nsegment-80599353855279550_2604_480_2624_480_with_camera_labels.tfrecord\n.current image id: 3916\n.current image id: 
3917\n.current image id: 3918\n.current image id: 3919\n.current image id: 3920\n.current image id: 3921\n.current image id: 3922\n.current image id: 3923\n.current image id: 3924\n.current image id: 3925\n.current image id: 3926\n.current image id: 3927\n.current image id: 3928\n.current image id: 3929\n.current image id: 3930\n.current image id: 3931\n.current image id: 3932\n.current image id: 3933\n.current image id: 3934\n.current image id: 3935\n.current image id: 3936\n.current image id: 3937\n.current image id: 3938\n.current image id: 3939\n.current image id: 3940\n.current image id: 3941\n.current image id: 3942\n.current image id: 3943\n.current image id: 3944\n.current image id: 3945\n.current image id: 3946\n.current image id: 3947\n.current image id: 3948\n.current image id: 3949\n.current image id: 3950\n.current image id: 3951\n.current image id: 3952\n.current image id: 3953\n.current image id: 3954\n.current image id: 3955\nextracting /data/cmpe295-liu/Waymo/training_0028/segment-809159138284604331_3355_840_3375_840_with_camera_labels.tfrecord\nsegment-809159138284604331_3355_840_3375_840_with_camera_labels.tfrecord\n.current image id: 3956\n.current image id: 3957\n.current image id: 3958\n.current image id: 3959\n.current image id: 3960\n.current image id: 3961\n.current image id: 3962\n.current image id: 3963\n.current image id: 3964\n.current image id: 3965\n.current image id: 3966\n.current image id: 3967\n.current image id: 3968\n.current image id: 3969\n.current image id: 3970\n.current image id: 3971\n.current image id: 3972\n.current image id: 3973\n.current image id: 3974\n.current image id: 3975\n.current image id: 3976\n.current image id: 3977\n.current image id: 3978\n.current image id: 3979\n.current image id: 3980\n.current image id: 3981\n.current image id: 3982\n.current image id: 3983\n.current image id: 3984\n.current image id: 3985\n.current image id: 3986\n.current image id: 3987\n.current image id: 3988\n.current image id: 3989\n.current image id: 3990\n.current image id: 3991\n.current image id: 3992\n.current image id: 3993\n.current image id: 3994\n.current image id: 3995\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7000927478052605119_1052_330_1072_330_with_camera_labels.tfrecord\nsegment-7000927478052605119_1052_330_1072_330_with_camera_labels.tfrecord\n.current image id: 3996\n.current image id: 3997\n.current image id: 3998\n.current image id: 3999\n.current image id: 4000\n.current image id: 4001\n.current image id: 4002\n.current image id: 4003\n.current image id: 4004\n.current image id: 4005\n.current image id: 4006\n.current image id: 4007\n.current image id: 4008\n.current image id: 4009\n.current image id: 4010\n.current image id: 4011\n.current image id: 4012\n.current image id: 4013\n.current image id: 4014\n.current image id: 4015\n.current image id: 4016\n.current image id: 4017\n.current image id: 4018\n.current image id: 4019\n.current image id: 4020\n.current image id: 4021\n.current image id: 4022\n.current image id: 4023\n.current image id: 4024\n.current image id: 4025\n.current image id: 4026\n.current image id: 4027\n.current image id: 4028\n.current image id: 4029\n.current image id: 4030\n.current image id: 4031\n.current image id: 4032\n.current image id: 4033\n.current image id: 4034\n.current image id: 4035\nextracting 
/data/cmpe295-liu/Waymo/training_0027/segment-7007702792982559244_4400_000_4420_000_with_camera_labels.tfrecord\nsegment-7007702792982559244_4400_000_4420_000_with_camera_labels.tfrecord\n.current image id: 4036\n.current image id: 4037\n.current image id: 4038\n.current image id: 4039\n.current image id: 4040\n.current image id: 4041\n.current image id: 4042\n.current image id: 4043\n.current image id: 4044\n.current image id: 4045\n.current image id: 4046\n.current image id: 4047\n.current image id: 4048\n.current image id: 4049\n.current image id: 4050\n.current image id: 4051\n.current image id: 4052\n.current image id: 4053\n.current image id: 4054\n.current image id: 4055\n.current image id: 4056\n.current image id: 4057\n.current image id: 4058\n.current image id: 4059\n.current image id: 4060\n.current image id: 4061\n.current image id: 4062\n.current image id: 4063\n.current image id: 4064\n.current image id: 4065\n.current image id: 4066\n.current image id: 4067\n.current image id: 4068\n.current image id: 4069\n.current image id: 4070\n.current image id: 4071\n.current image id: 4072\n.current image id: 4073\n.current image id: 4074\n.current image id: 4075\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7019385869759035132_4270_850_4290_850_with_camera_labels.tfrecord\nsegment-7019385869759035132_4270_850_4290_850_with_camera_labels.tfrecord\n.current image id: 4076\n.current image id: 4077\n.current image id: 4078\n.current image id: 4079\n.current image id: 4080\n.current image id: 4081\n.current image id: 4082\n.current image id: 4083\n.current image id: 4084\n.current image id: 4085\n.current image id: 4086\n.current image id: 4087\n.current image id: 4088\n.current image id: 4089\n.current image id: 4090\n.current image id: 4091\n.current image id: 4092\n.current image id: 4093\n.current image id: 4094\n.current image id: 4095\n.current image id: 4096\n.current image id: 4097\n.current image id: 4098\n.current image id: 4099\n.current image id: 4100\n.current image id: 4101\n.current image id: 4102\n.current image id: 4103\n.current image id: 4104\n.current image id: 4105\n.current image id: 4106\n.current image id: 4107\n.current image id: 4108\n.current image id: 4109\n.current image id: 4110\n.current image id: 4111\n.current image id: 4112\n.current image id: 4113\n.current image id: 4114\n.current image id: 4115\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7038362761309539946_4207_130_4227_130_with_camera_labels.tfrecord\nsegment-7038362761309539946_4207_130_4227_130_with_camera_labels.tfrecord\n.current image id: 4116\n.current image id: 4117\n.current image id: 4118\n.current image id: 4119\n.current image id: 4120\n.current image id: 4121\n.current image id: 4122\n.current image id: 4123\n.current image id: 4124\n.current image id: 4125\n.current image id: 4126\n.current image id: 4127\n.current image id: 4128\n.current image id: 4129\n.current image id: 4130\n.current image id: 4131\n.current image id: 4132\n.current image id: 4133\n.current image id: 4134\n.current image id: 4135\n.current image id: 4136\n.current image id: 4137\n.current image id: 4138\n.current image id: 4139\n.current image id: 4140\n.current image id: 4141\n.current image id: 4142\n.current image id: 4143\n.current image id: 4144\n.current image id: 4145\n.current image id: 4146\n.current image id: 4147\n.current image id: 4148\n.current image id: 4149\n.current image id: 4150\n.current image id: 4151\n.current image id: 4152\n.current image id: 4153\n.current image id: 
4154\n.current image id: 4155\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7089765864827567005_1020_000_1040_000_with_camera_labels.tfrecord\nsegment-7089765864827567005_1020_000_1040_000_with_camera_labels.tfrecord\n.current image id: 4156\n.current image id: 4157\n.current image id: 4158\n.current image id: 4159\n.current image id: 4160\n.current image id: 4161\n.current image id: 4162\n.current image id: 4163\n.current image id: 4164\n.current image id: 4165\n.current image id: 4166\n.current image id: 4167\n.current image id: 4168\n.current image id: 4169\n.current image id: 4170\n.current image id: 4171\n.current image id: 4172\n.current image id: 4173\n.current image id: 4174\n.current image id: 4175\n.current image id: 4176\n.current image id: 4177\n.current image id: 4178\n.current image id: 4179\n.current image id: 4180\n.current image id: 4181\n.current image id: 4182\n.current image id: 4183\n.current image id: 4184\n.current image id: 4185\n.current image id: 4186\n.current image id: 4187\n.current image id: 4188\n.current image id: 4189\n.current image id: 4190\n.current image id: 4191\n.current image id: 4192\n.current image id: 4193\n.current image id: 4194\n.current image id: 4195\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7101099554331311287_5320_000_5340_000_with_camera_labels.tfrecord\nsegment-7101099554331311287_5320_000_5340_000_with_camera_labels.tfrecord\n.current image id: 4196\n.current image id: 4197\n.current image id: 4198\n.current image id: 4199\n.current image id: 4200\n.current image id: 4201\n.current image id: 4202\n.current image id: 4203\n.current image id: 4204\n.current image id: 4205\n.current image id: 4206\n.current image id: 4207\n.current image id: 4208\n.current image id: 4209\n.current image id: 4210\n.current image id: 4211\n.current image id: 4212\n.current image id: 4213\n.current image id: 4214\n.current image id: 4215\n.current image id: 4216\n.current image id: 4217\n.current image id: 4218\n.current image id: 4219\n.current image id: 4220\n.current image id: 4221\n.current image id: 4222\n.current image id: 4223\n.current image id: 4224\n.current image id: 4225\n.current image id: 4226\n.current image id: 4227\n.current image id: 4228\n.current image id: 4229\n.current image id: 4230\n.current image id: 4231\n.current image id: 4232\n.current image id: 4233\n.current image id: 4234\n.current image id: 4235\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7120839653809570957_1060_000_1080_000_with_camera_labels.tfrecord\nsegment-7120839653809570957_1060_000_1080_000_with_camera_labels.tfrecord\n.current image id: 4236\n.current image id: 4237\n.current image id: 4238\n.current image id: 4239\n.current image id: 4240\n.current image id: 4241\n.current image id: 4242\n.current image id: 4243\n.current image id: 4244\n.current image id: 4245\n.current image id: 4246\n.current image id: 4247\n.current image id: 4248\n.current image id: 4249\n.current image id: 4250\n.current image id: 4251\n.current image id: 4252\n.current image id: 4253\n.current image id: 4254\n.current image id: 4255\n.current image id: 4256\n.current image id: 4257\n.current image id: 4258\n.current image id: 4259\n.current image id: 4260\n.current image id: 4261\n.current image id: 4262\n.current image id: 4263\n.current image id: 4264\n.current image id: 4265\n.current image id: 4266\n.current image id: 4267\n.current image id: 4268\n.current image id: 4269\n.current image id: 4270\n.current image id: 4271\n.current image id: 
4272\n.current image id: 4273\n.current image id: 4274\n.current image id: 4275\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7187601925763611197_4384_300_4404_300_with_camera_labels.tfrecord\nsegment-7187601925763611197_4384_300_4404_300_with_camera_labels.tfrecord\n.current image id: 4276\n.current image id: 4277\n.current image id: 4278\n.current image id: 4279\n.current image id: 4280\n.current image id: 4281\n.current image id: 4282\n.current image id: 4283\n.current image id: 4284\n.current image id: 4285\n.current image id: 4286\n.current image id: 4287\n.current image id: 4288\n.current image id: 4289\n.current image id: 4290\n.current image id: 4291\n.current image id: 4292\n.current image id: 4293\n.current image id: 4294\n.current image id: 4295\n.current image id: 4296\n.current image id: 4297\n.current image id: 4298\n.current image id: 4299\n.current image id: 4300\n.current image id: 4301\n.current image id: 4302\n.current image id: 4303\n.current image id: 4304\n.current image id: 4305\n.current image id: 4306\n.current image id: 4307\n.current image id: 4308\n.current image id: 4309\n.current image id: 4310\n.current image id: 4311\n.current image id: 4312\n.current image id: 4313\n.current image id: 4314\n.current image id: 4315\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7189996641300362130_3360_000_3380_000_with_camera_labels.tfrecord\nsegment-7189996641300362130_3360_000_3380_000_with_camera_labels.tfrecord\n.current image id: 4316\n.current image id: 4317\n.current image id: 4318\n.current image id: 4319\n.current image id: 4320\n.current image id: 4321\n.current image id: 4322\n.current image id: 4323\n.current image id: 4324\n.current image id: 4325\n.current image id: 4326\n.current image id: 4327\n.current image id: 4328\n.current image id: 4329\n.current image id: 4330\n.current image id: 4331\n.current image id: 4332\n.current image id: 4333\n.current image id: 4334\n.current image id: 4335\n.current image id: 4336\n.current image id: 4337\n.current image id: 4338\n.current image id: 4339\n.current image id: 4340\n.current image id: 4341\n.current image id: 4342\n.current image id: 4343\n.current image id: 4344\n.current image id: 4345\n.current image id: 4346\n.current image id: 4347\n.current image id: 4348\n.current image id: 4349\n.current image id: 4350\n.current image id: 4351\n.current image id: 4352\n.current image id: 4353\n.current image id: 4354\n.current image id: 4355\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7239123081683545077_4044_370_4064_370_with_camera_labels.tfrecord\nsegment-7239123081683545077_4044_370_4064_370_with_camera_labels.tfrecord\n.current image id: 4356\n.current image id: 4357\n.current image id: 4358\n.current image id: 4359\n.current image id: 4360\n.current image id: 4361\n.current image id: 4362\n.current image id: 4363\n.current image id: 4364\n.current image id: 4365\n.current image id: 4366\n.current image id: 4367\n.current image id: 4368\n.current image id: 4369\n.current image id: 4370\n.current image id: 4371\n.current image id: 4372\n.current image id: 4373\n.current image id: 4374\n.current image id: 4375\n.current image id: 4376\n.current image id: 4377\n.current image id: 4378\n.current image id: 4379\n.current image id: 4380\n.current image id: 4381\n.current image id: 4382\n.current image id: 4383\n.current image id: 4384\n.current image id: 4385\n.current image id: 4386\n.current image id: 4387\n.current image id: 4388\n.current image id: 4389\n.current image id: 
4390\n.current image id: 4391\n.current image id: 4392\n.current image id: 4393\n.current image id: 4394\n.current image id: 4395\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7290499689576448085_3960_000_3980_000_with_camera_labels.tfrecord\nsegment-7290499689576448085_3960_000_3980_000_with_camera_labels.tfrecord\n.current image id: 4396\n.current image id: 4397\n.current image id: 4398\n.current image id: 4399\n.current image id: 4400\n.current image id: 4401\n.current image id: 4402\n.current image id: 4403\n.current image id: 4404\n.current image id: 4405\n.current image id: 4406\n.current image id: 4407\n.current image id: 4408\n.current image id: 4409\n.current image id: 4410\n.current image id: 4411\n.current image id: 4412\n.current image id: 4413\n.current image id: 4414\n.current image id: 4415\n.current image id: 4416\n.current image id: 4417\n.current image id: 4418\n.current image id: 4419\n.current image id: 4420\n.current image id: 4421\n.current image id: 4422\n.current image id: 4423\n.current image id: 4424\n.current image id: 4425\n.current image id: 4426\n.current image id: 4427\n.current image id: 4428\n.current image id: 4429\n.current image id: 4430\n.current image id: 4431\n.current image id: 4432\n.current image id: 4433\n.current image id: 4434\n.current image id: 4435\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7313718849795510302_280_000_300_000_with_camera_labels.tfrecord\nsegment-7313718849795510302_280_000_300_000_with_camera_labels.tfrecord\n.current image id: 4436\n.current image id: 4437\n.current image id: 4438\n.current image id: 4439\n.current image id: 4440\n.current image id: 4441\n.current image id: 4442\n.current image id: 4443\n.current image id: 4444\n.current image id: 4445\n.current image id: 4446\n.current image id: 4447\n.current image id: 4448\n.current image id: 4449\n.current image id: 4450\n.current image id: 4451\n.current image id: 4452\n.current image id: 4453\n.current image id: 4454\n.current image id: 4455\n.current image id: 4456\n.current image id: 4457\n.current image id: 4458\n.current image id: 4459\n.current image id: 4460\n.current image id: 4461\n.current image id: 4462\n.current image id: 4463\n.current image id: 4464\n.current image id: 4465\n.current image id: 4466\n.current image id: 4467\n.current image id: 4468\n.current image id: 4469\n.current image id: 4470\n.current image id: 4471\n.current image id: 4472\n.current image id: 4473\n.current image id: 4474\n.current image id: 4475\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7324192826315818756_620_000_640_000_with_camera_labels.tfrecord\nsegment-7324192826315818756_620_000_640_000_with_camera_labels.tfrecord\n.current image id: 4476\n.current image id: 4477\n.current image id: 4478\n.current image id: 4479\n.current image id: 4480\n.current image id: 4481\n.current image id: 4482\n.current image id: 4483\n.current image id: 4484\n.current image id: 4485\n.current image id: 4486\n.current image id: 4487\n.current image id: 4488\n.current image id: 4489\n.current image id: 4490\n.current image id: 4491\n.current image id: 4492\n.current image id: 4493\n.current image id: 4494\n.current image id: 4495\n.current image id: 4496\n.current image id: 4497\n.current image id: 4498\n.current image id: 4499\n.current image id: 4500\n.current image id: 4501\n.current image id: 4502\n.current image id: 4503\n.current image id: 4504\n.current image id: 4505\n.current image id: 4506\n.current image id: 4507\n.current image id: 4508\n.current 
image id: 4509\n.current image id: 4510\n.current image id: 4511\n.current image id: 4512\n.current image id: 4513\n.current image id: 4514\n.current image id: 4515\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7331965392247645851_1005_940_1025_940_with_camera_labels.tfrecord\nsegment-7331965392247645851_1005_940_1025_940_with_camera_labels.tfrecord\n.current image id: 4516\n.current image id: 4517\n.current image id: 4518\n.current image id: 4519\n.current image id: 4520\n.current image id: 4521\n.current image id: 4522\n.current image id: 4523\n.current image id: 4524\n.current image id: 4525\n.current image id: 4526\n.current image id: 4527\n.current image id: 4528\n.current image id: 4529\n.current image id: 4530\n.current image id: 4531\n.current image id: 4532\n.current image id: 4533\n.current image id: 4534\n.current image id: 4535\n.current image id: 4536\n.current image id: 4537\n.current image id: 4538\n.current image id: 4539\n.current image id: 4540\n.current image id: 4541\n.current image id: 4542\n.current image id: 4543\n.current image id: 4544\n.current image id: 4545\n.current image id: 4546\n.current image id: 4547\n.current image id: 4548\n.current image id: 4549\n.current image id: 4550\n.current image id: 4551\n.current image id: 4552\n.current image id: 4553\n.current image id: 4554\n.current image id: 4555\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7344536712079322768_1360_000_1380_000_with_camera_labels.tfrecord\nsegment-7344536712079322768_1360_000_1380_000_with_camera_labels.tfrecord\n.current image id: 4556\n.current image id: 4557\n.current image id: 4558\n.current image id: 4559\n.current image id: 4560\n.current image id: 4561\n.current image id: 4562\n.current image id: 4563\n.current image id: 4564\n.current image id: 4565\n.current image id: 4566\n.current image id: 4567\n.current image id: 4568\n.current image id: 4569\n.current image id: 4570\n.current image id: 4571\n.current image id: 4572\n.current image id: 4573\n.current image id: 4574\n.current image id: 4575\n.current image id: 4576\n.current image id: 4577\n.current image id: 4578\n.current image id: 4579\n.current image id: 4580\n.current image id: 4581\n.current image id: 4582\n.current image id: 4583\n.current image id: 4584\n.current image id: 4585\n.current image id: 4586\n.current image id: 4587\n.current image id: 4588\n.current image id: 4589\n.current image id: 4590\n.current image id: 4591\n.current image id: 4592\n.current image id: 4593\n.current image id: 4594\n.current image id: 4595\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7373597180370847864_6020_000_6040_000_with_camera_labels.tfrecord\nsegment-7373597180370847864_6020_000_6040_000_with_camera_labels.tfrecord\n.current image id: 4596\n.current image id: 4597\n.current image id: 4598\n.current image id: 4599\n.current image id: 4600\n.current image id: 4601\n.current image id: 4602\n.current image id: 4603\n.current image id: 4604\n.current image id: 4605\n.current image id: 4606\n.current image id: 4607\n.current image id: 4608\n.current image id: 4609\n.current image id: 4610\n.current image id: 4611\n.current image id: 4612\n.current image id: 4613\n.current image id: 4614\n.current image id: 4615\n.current image id: 4616\n.current image id: 4617\n.current image id: 4618\n.current image id: 4619\n.current image id: 4620\n.current image id: 4621\n.current image id: 4622\n.current image id: 4623\n.current image id: 4624\n.current image id: 4625\n.current image id: 4626\n.current image 
id: 4627\n.current image id: 4628\n.current image id: 4629\n.current image id: 4630\n.current image id: 4631\n.current image id: 4632\n.current image id: 4633\n.current image id: 4634\n.current image id: 4635\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-744006317457557752_2080_000_2100_000_with_camera_labels.tfrecord\nsegment-744006317457557752_2080_000_2100_000_with_camera_labels.tfrecord\n.current image id: 4636\n.current image id: 4637\n.current image id: 4638\n.current image id: 4639\n.current image id: 4640\n.current image id: 4641\n.current image id: 4642\n.current image id: 4643\n.current image id: 4644\n.current image id: 4645\n.current image id: 4646\n.current image id: 4647\n.current image id: 4648\n.current image id: 4649\n.current image id: 4650\n.current image id: 4651\n.current image id: 4652\n.current image id: 4653\n.current image id: 4654\n.current image id: 4655\n.current image id: 4656\n.current image id: 4657\n.current image id: 4658\n.current image id: 4659\n.current image id: 4660\n.current image id: 4661\n.current image id: 4662\n.current image id: 4663\n.current image id: 4664\n.current image id: 4665\n.current image id: 4666\n.current image id: 4667\n.current image id: 4668\n.current image id: 4669\n.current image id: 4670\n.current image id: 4671\n.current image id: 4672\n.current image id: 4673\n.current image id: 4674\n.current image id: 4675\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7440437175443450101_94_000_114_000_with_camera_labels.tfrecord\nsegment-7440437175443450101_94_000_114_000_with_camera_labels.tfrecord\n.current image id: 4676\n.current image id: 4677\n.current image id: 4678\n.current image id: 4679\n.current image id: 4680\n.current image id: 4681\n.current image id: 4682\n.current image id: 4683\n.current image id: 4684\n.current image id: 4685\n.current image id: 4686\n.current image id: 4687\n.current image id: 4688\n.current image id: 4689\n.current image id: 4690\n.current image id: 4691\n.current image id: 4692\n.current image id: 4693\n.current image id: 4694\n.current image id: 4695\n.current image id: 4696\n.current image id: 4697\n.current image id: 4698\n.current image id: 4699\n.current image id: 4700\n.current image id: 4701\n.current image id: 4702\n.current image id: 4703\n.current image id: 4704\n.current image id: 4705\n.current image id: 4706\n.current image id: 4707\n.current image id: 4708\n.current image id: 4709\n.current image id: 4710\n.current image id: 4711\n.current image id: 4712\n.current image id: 4713\n.current image id: 4714\n.current image id: 4715\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7447927974619745860_820_000_840_000_with_camera_labels.tfrecord\nsegment-7447927974619745860_820_000_840_000_with_camera_labels.tfrecord\n.current image id: 4716\n.current image id: 4717\n.current image id: 4718\n.current image id: 4719\n.current image id: 4720\n.current image id: 4721\n.current image id: 4722\n.current image id: 4723\n.current image id: 4724\n.current image id: 4725\n.current image id: 4726\n.current image id: 4727\n.current image id: 4728\n.current image id: 4729\n.current image id: 4730\n.current image id: 4731\n.current image id: 4732\n.current image id: 4733\n.current image id: 4734\n.current image id: 4735\n.current image id: 4736\n.current image id: 4737\n.current image id: 4738\n.current image id: 4739\n.current image id: 4740\n.current image id: 4741\n.current image id: 4742\n.current image id: 4743\n.current image id: 4744\n.current image id: 4745\n.current 
image id: 4746\n.current image id: 4747\n.current image id: 4748\n.current image id: 4749\n.current image id: 4750\n.current image id: 4751\n.current image id: 4752\n.current image id: 4753\n.current image id: 4754\n.current image id: 4755\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7458568461947999548_700_000_720_000_with_camera_labels.tfrecord\nsegment-7458568461947999548_700_000_720_000_with_camera_labels.tfrecord\n.current image id: 4756\n.current image id: 4757\n.current image id: 4758\n.current image id: 4759\n.current image id: 4760\n.current image id: 4761\n.current image id: 4762\n.current image id: 4763\n.current image id: 4764\n.current image id: 4765\n.current image id: 4766\n.current image id: 4767\n.current image id: 4768\n.current image id: 4769\n.current image id: 4770\n.current image id: 4771\n.current image id: 4772\n.current image id: 4773\n.current image id: 4774\n.current image id: 4775\n.current image id: 4776\n.current image id: 4777\n.current image id: 4778\n.current image id: 4779\n.current image id: 4780\n.current image id: 4781\n.current image id: 4782\n.current image id: 4783\n.current image id: 4784\n.current image id: 4785\n.current image id: 4786\n.current image id: 4787\n.current image id: 4788\n.current image id: 4789\n.current image id: 4790\n.current image id: 4791\n.current image id: 4792\n.current image id: 4793\n.current image id: 4794\n.current image id: 4795\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7466751345307077932_585_000_605_000_with_camera_labels.tfrecord\nsegment-7466751345307077932_585_000_605_000_with_camera_labels.tfrecord\n.current image id: 4796\n.current image id: 4797\n.current image id: 4798\n.current image id: 4799\n.current image id: 4800\n.current image id: 4801\n.current image id: 4802\n.current image id: 4803\n.current image id: 4804\n.current image id: 4805\n.current image id: 4806\n.current image id: 4807\n.current image id: 4808\n.current image id: 4809\n.current image id: 4810\n.current image id: 4811\n.current image id: 4812\n.current image id: 4813\n.current image id: 4814\n.current image id: 4815\n.current image id: 4816\n.current image id: 4817\n.current image id: 4818\n.current image id: 4819\n.current image id: 4820\n.current image id: 4821\n.current image id: 4822\n.current image id: 4823\n.current image id: 4824\n.current image id: 4825\n.current image id: 4826\n.current image id: 4827\n.current image id: 4828\n.current image id: 4829\n.current image id: 4830\n.current image id: 4831\n.current image id: 4832\n.current image id: 4833\n.current image id: 4834\n.current image id: 4835\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7517545172000568481_2325_000_2345_000_with_camera_labels.tfrecord\nsegment-7517545172000568481_2325_000_2345_000_with_camera_labels.tfrecord\n.current image id: 4836\n.current image id: 4837\n.current image id: 4838\n.current image id: 4839\n.current image id: 4840\n.current image id: 4841\n.current image id: 4842\n.current image id: 4843\n.current image id: 4844\n.current image id: 4845\n.current image id: 4846\n.current image id: 4847\n.current image id: 4848\n.current image id: 4849\n.current image id: 4850\n.current image id: 4851\n.current image id: 4852\n.current image id: 4853\n.current image id: 4854\n.current image id: 4855\n.current image id: 4856\n.current image id: 4857\n.current image id: 4858\n.current image id: 4859\n.current image id: 4860\n.current image id: 4861\n.current image id: 4862\n.current image id: 4863\n.current image id: 
4864\n.current image id: 4865\n.current image id: 4866\n.current image id: 4867\n.current image id: 4868\n.current image id: 4869\n.current image id: 4870\n.current image id: 4871\n.current image id: 4872\n.current image id: 4873\n.current image id: 4874\n.current image id: 4875\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7543690094688232666_4945_350_4965_350_with_camera_labels.tfrecord\nsegment-7543690094688232666_4945_350_4965_350_with_camera_labels.tfrecord\n.current image id: 4876\n.current image id: 4877\n.current image id: 4878\n.current image id: 4879\n.current image id: 4880\n.current image id: 4881\n.current image id: 4882\n.current image id: 4883\n.current image id: 4884\n.current image id: 4885\n.current image id: 4886\n.current image id: 4887\n.current image id: 4888\n.current image id: 4889\n.current image id: 4890\n.current image id: 4891\n.current image id: 4892\n.current image id: 4893\n.current image id: 4894\n.current image id: 4895\n.current image id: 4896\n.current image id: 4897\n.current image id: 4898\n.current image id: 4899\n.current image id: 4900\n.current image id: 4901\n.current image id: 4902\n.current image id: 4903\n.current image id: 4904\n.current image id: 4905\n.current image id: 4906\n.current image id: 4907\n.current image id: 4908\n.current image id: 4909\n.current image id: 4910\n.current image id: 4911\n.current image id: 4912\n.current image id: 4913\n.current image id: 4914\n.current image id: 4915\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7554208726220851641_380_000_400_000_with_camera_labels.tfrecord\nsegment-7554208726220851641_380_000_400_000_with_camera_labels.tfrecord\n.current image id: 4916\n.current image id: 4917\n.current image id: 4918\n.current image id: 4919\n.current image id: 4920\n.current image id: 4921\n.current image id: 4922\n.current image id: 4923\n.current image id: 4924\n.current image id: 4925\n.current image id: 4926\n.current image id: 4927\n.current image id: 4928\n.current image id: 4929\n.current image id: 4930\n.current image id: 4931\n.current image id: 4932\n.current image id: 4933\n.current image id: 4934\n.current image id: 4935\n.current image id: 4936\n.current image id: 4937\n.current image id: 4938\n.current image id: 4939\n.current image id: 4940\n.current image id: 4941\n.current image id: 4942\n.current image id: 4943\n.current image id: 4944\n.current image id: 4945\n.current image id: 4946\n.current image id: 4947\n.current image id: 4948\n.current image id: 4949\n.current image id: 4950\n.current image id: 4951\n.current image id: 4952\n.current image id: 4953\n.current image id: 4954\n.current image id: 4955\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7566697458525030390_1440_000_1460_000_with_camera_labels.tfrecord\nsegment-7566697458525030390_1440_000_1460_000_with_camera_labels.tfrecord\n.current image id: 4956\n.current image id: 4957\n.current image id: 4958\n.current image id: 4959\n.current image id: 4960\n.current image id: 4961\n.current image id: 4962\n.current image id: 4963\n.current image id: 4964\n.current image id: 4965\n.current image id: 4966\n.current image id: 4967\n.current image id: 4968\n.current image id: 4969\n.current image id: 4970\n.current image id: 4971\n.current image id: 4972\n.current image id: 4973\n.current image id: 4974\n.current image id: 4975\n.current image id: 4976\n.current image id: 4977\n.current image id: 4978\n.current image id: 4979\n.current image id: 4980\n.current image id: 4981\n.current image id: 
4982\n.current image id: 4983\n.current image id: 4984\n.current image id: 4985\n.current image id: 4986\n.current image id: 4987\n.current image id: 4988\n.current image id: 4989\n.current image id: 4990\n.current image id: 4991\n.current image id: 4992\n.current image id: 4993\n.current image id: 4994\n.current image id: 4995\nextracting /data/cmpe295-liu/Waymo/training_0026/segment-6390847454531723238_6000_000_6020_000_with_camera_labels.tfrecord\nsegment-6390847454531723238_6000_000_6020_000_with_camera_labels.tfrecord\n.current image id: 4996\n.current image id: 4997\n.current image id: 4998\n.current image id: 4999\n.current image id: 5000\n.current image id: 5001\n.current image id: 5002\n.current image id: 5003\n.current image id: 5004\n.current image id: 5005\n.current image id: 5006\n.current image id: 5007\n.current image id: 5008\n.current image id: 5009\n.current image id: 5010\n.current image id: 5011\n.current image id: 5012\n.current image id: 5013\n.current image id: 5014\n.current image id: 5015\n.current image id: 5016\n.current image id: 5017\n.current image id: 5018\n.current image id: 5019\n.current image id: 5020\n.current image id: 5021\n.current image id: 5022\n.current image id: 5023\n.current image id: 5024\n.current image id: 5025\n.current image id: 5026\n.current image id: 5027\n.current image id: 5028\n.current image id: 5029\n.current image id: 5030\n.current image id: 5031\n.current image id: 5032\n.current image id: 5033\n.current image id: 5034\n.current image id: 5035\nextracting /data/cmpe295-liu/Waymo/training_0026/segment-6410495600874495447_5287_500_5307_500_with_camera_labels.tfrecord\nsegment-6410495600874495447_5287_500_5307_500_with_camera_labels.tfrecord\n.current image id: 5036\n.current image id: 5037\n.current image id: 5038\n.current image id: 5039\n.current image id: 5040\n.current image id: 5041\n.current image id: 5042\n.current image id: 5043\n.current image id: 5044\n.current image id: 5045\n.current image id: 5046\n.current image id: 5047\n.current image id: 5048\n.current image id: 5049\n.current image id: 5050\n.current image id: 5051\n.current image id: 5052\n.current image id: 5053\n.current image id: 5054\n.current image id: 5055\n.current image id: 5056\n.current image id: 5057\n.current image id: 5058\n.current image id: 5059\n.current image id: 5060\n.current image id: 5061\n.current image id: 5062\n.current image id: 5063\n.current image id: 5064\n.current image id: 5065\n.current image id: 5066\n.current image id: 5067\n.current image id: 5068\n.current image id: 5069\n.current image id: 5070\n.current image id: 5071\n.current image id: 5072\n.current image id: 5073\n.current image id: 5074\n.current image id: 5075\nextracting /data/cmpe295-liu/Waymo/training_0026/segment-6417523992887712896_1180_000_1200_000_with_camera_labels.tfrecord\nsegment-6417523992887712896_1180_000_1200_000_with_camera_labels.tfrecord\n.current image id: 5076\n.current image id: 5077\n.current image id: 5078\n.current image id: 5079\n.current image id: 5080\n.current image id: 5081\n.current image id: 5082\n.current image id: 5083\n.current image id: 5084\n.current image id: 5085\n.current image id: 5086\n.current image id: 5087\n.current image id: 5088\n.current image id: 5089\n.current image id: 5090\n.current image id: 5091\n.current image id: 5092\n.current image id: 5093\n.current image id: 5094\n.current image id: 5095\n.current image id: 5096\n.current image id: 5097\n.current image id: 5098\n.current image id: 5099\n.current image id: 
5100\n.current image id: 5101\n.current image id: 5102\n.current image id: 5103\n.current image id: 5104\n.current image id: 5105\n.current image id: 5106\n.current image id: 5107\n.current image id: 5108\n.current image id: 5109\n.current image id: 5110\n.current image id: 5111\n.current image id: 5112\n.current image id: 5113\n.current image id: 5114\n.current image id: 5115\nextracting /data/cmpe295-liu/Waymo/training_0026/segment-6433401807220119698_4560_000_4580_000_with_camera_labels.tfrecord\nsegment-6433401807220119698_4560_000_4580_000_with_camera_labels.tfrecord\n.current image id: 5116\n.current image id: 5117\n.current image id: 5118\n.current image id: 5119\n.current image id: 5120\n.current image id: 5121\n.current image id: 5122\n.current image id: 5123\n.current image id: 5124\n.current image id: 5125\n.current image id: 5126\n.current image id: 5127\n.current image id: 5128\n.current image id: 5129\n.current image id: 5130\n.current image id: 5131\n.current image id: 5132\n.current image id: 5133\n.current image id: 5134\n.current image id: 5135\n.current image id: 5136\n.current image id: 5137\n.current image id: 5138\n.current image id: 5139\n.current image id: 5140\n.current image id: 5141\n.current image id: 5142\n.current image id: 5143\n.current image id: 5144\n.current image id: 5145\n.current image id: 5146\n.current image id: 5147\n.current image id: 5148\n.current image id: 5149\n.current image id: 5150\n.current image id: 5151\n.current image id: 5152\n.current image id: 5153\n.current image id: 5154\n.current image id: 5155\nextracting /data/cmpe295-liu/Waymo/training_0026/segment-6456165750159303330_1770_080_1790_080_with_camera_labels.tfrecord\nsegment-6456165750159303330_1770_080_1790_080_with_camera_labels.tfrecord\n.current image id: 5156\n.current image id: 5157\n.current image id: 5158\n.current image id: 5159\n.current image id: 5160\n.current image id: 5161\n.current image id: 5162\n.current image id: 5163\n.current image id: 5164\n.current image id: 5165\n.current image id: 5166\n.current image id: 5167\n.current image id: 5168\n.current image id: 5169\n.current image id: 5170\n.current image id: 5171\n.current image id: 5172\n.current image id: 5173\n.current image id: 5174\n.current image id: 5175\n.current image id: 5176\n.current image id: 5177\n.current image id: 5178\n.current image id: 5179\n.current image id: 5180\n.current image id: 5181\n.current image id: 5182\n.current image id: 5183\n.current image id: 5184\n.current image id: 5185\n.current image id: 5186\n.current image id: 5187\n.current image id: 5188\n.current image id: 5189\n.current image id: 5190\n.current image id: 5191\n.current image id: 5192\n.current image id: 5193\n.current image id: 5194\n.current image id: 5195\nextracting /data/cmpe295-liu/Waymo/training_0026/segment-6559997992780479765_1039_000_1059_000_with_camera_labels.tfrecord\nsegment-6559997992780479765_1039_000_1059_000_with_camera_labels.tfrecord\n.current image id: 5196\n.current image id: 5197\n.current image id: 5198\n.current image id: 5199\n.current image id: 5200\n.current image id: 5201\n.current image id: 5202\n.current image id: 5203\n.current image id: 5204\n.current image id: 5205\n.current image id: 5206\n.current image id: 5207\n.current image id: 5208\n.current image id: 5209\n.current image id: 5210\n.current image id: 5211\n.current image id: 5212\n.current image id: 5213\n.current image id: 5214\n.current image id: 5215\n.current image id: 5216\n.current image id: 5217\n.current image id: 
5218\n[output truncated: per-image log lines for ids 5218-5994 omitted; segments extracted from /data/cmpe295-liu/Waymo/training_0026, from segment-6561206763751799279_2348_600_2368_600_with_camera_labels.tfrecord through segment-6935841224766931310_2770_310_2790_310_with_camera_labels.tfrecord]\n.current image id: 5994\n"
],
[
"PATH='/data/cmpe295-liu/Waymo'\nfolderslist = [\"validation_0007\",\"training_0006\"]#,\"training_0029\",\"training_0028\",\"training_0027\",\"training_0026\"]\n#folderslist = [\"training_0031\",\"training_0030\",\"training_0029\",\"training_0028\",\"training_0027\",\"training_0026\",\"training_0025\", \"training_0024\", \"training_0023\",\"training_0022\",\"training_0021\",\"training_0020\",\"training_0019\",\"training_0018\",\"training_0017\",\"training_0016\",\"training_0015\",\"training_0014\",\"training_0013\",\"training_0012\",\"training_0011\",\"training_0010\",\"training_0009\",\"training_0008\",\"training_0007\",\"training_0006\",\"training_0005\",\"training_0004\",\"training_0003\",\"training_0002\",\"training_0001\",\"training_0000\"]\ntfrecord_files = [path for x in folderslist for path in glob(os.path.join(PATH, x, \"*.tfrecord\"))]\nprint(len(tfrecord_files))#total number of tfrecord files\n\nout_dir='/data/cmpe295-liu/Waymo/WaymoCOCOsmall/Validation'\nstep=5 #downsample\nout_dir = Path(out_dir)\n\nextract_segment_frontcamera(tfrecord_files, out_dir, step)",
"49\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-8079607115087394458_1240_000_1260_000_with_camera_labels.tfrecord\nsegment-8079607115087394458_1240_000_1260_000_with_camera_labels.tfrecord\n.current image id: 1\n.current image id: 2\n.current image id: 3\n.current image id: 4\n.current image id: 5\n.current image id: 6\n.current image id: 7\n.current image id: 8\n.current image id: 9\n.current image id: 10\n.current image id: 11\n.current image id: 12\n.current image id: 13\n.current image id: 14\n.current image id: 15\n.current image id: 16\n.current image id: 17\n.current image id: 18\n.current image id: 19\n.current image id: 20\n.current image id: 21\n.current image id: 22\n.current image id: 23\n.current image id: 24\n.current image id: 25\n.current image id: 26\n.current image id: 27\n.current image id: 28\n.current image id: 29\n.current image id: 30\n.current image id: 31\n.current image id: 32\n.current image id: 33\n.current image id: 34\n.current image id: 35\n.current image id: 36\n.current image id: 37\n.current image id: 38\n.current image id: 39\n.current image id: 40\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-8133434654699693993_1162_020_1182_020_with_camera_labels.tfrecord\nsegment-8133434654699693993_1162_020_1182_020_with_camera_labels.tfrecord\n.current image id: 41\n.current image id: 42\n.current image id: 43\n.current image id: 44\n.current image id: 45\n.current image id: 46\n.current image id: 47\n.current image id: 48\n.current image id: 49\n.current image id: 50\n.current image id: 51\n.current image id: 52\n.current image id: 53\n.current image id: 54\n.current image id: 55\n.current image id: 56\n.current image id: 57\n.current image id: 58\n.current image id: 59\n.current image id: 60\n.current image id: 61\n.current image id: 62\n.current image id: 63\n.current image id: 64\n.current image id: 65\n.current image id: 66\n.current image id: 67\n.current image id: 68\n.current image id: 69\n.current image id: 70\n.current image id: 71\n.current image id: 72\n.current image id: 73\n.current image id: 74\n.current image id: 75\n.current image id: 76\n.current image id: 77\n.current image id: 78\n.current image id: 79\n.current image id: 80\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-8137195482049459160_3100_000_3120_000_with_camera_labels.tfrecord\nsegment-8137195482049459160_3100_000_3120_000_with_camera_labels.tfrecord\n.current image id: 81\n.current image id: 82\n.current image id: 83\n.current image id: 84\n.current image id: 85\n.current image id: 86\n.current image id: 87\n.current image id: 88\n.current image id: 89\n.current image id: 90\n.current image id: 91\n.current image id: 92\n.current image id: 93\n.current image id: 94\n.current image id: 95\n.current image id: 96\n.current image id: 97\n.current image id: 98\n.current image id: 99\n.current image id: 100\n.current image id: 101\n.current image id: 102\n.current image id: 103\n.current image id: 104\n.current image id: 105\n.current image id: 106\n.current image id: 107\n.current image id: 108\n.current image id: 109\n.current image id: 110\n.current image id: 111\n.current image id: 112\n.current image id: 113\n.current image id: 114\n.current image id: 115\n.current image id: 116\n.current image id: 117\n.current image id: 118\n.current image id: 119\n.current image id: 120\nextracting 
/data/cmpe295-liu/Waymo/validation_0007/segment-8302000153252334863_6020_000_6040_000_with_camera_labels.tfrecord\nsegment-8302000153252334863_6020_000_6040_000_with_camera_labels.tfrecord\n.current image id: 121\n.current image id: 122\n.current image id: 123\n.current image id: 124\n.current image id: 125\n.current image id: 126\n.current image id: 127\n.current image id: 128\n.current image id: 129\n.current image id: 130\n.current image id: 131\n.current image id: 132\n.current image id: 133\n.current image id: 134\n.current image id: 135\n.current image id: 136\n.current image id: 137\n.current image id: 138\n.current image id: 139\n.current image id: 140\n.current image id: 141\n.current image id: 142\n.current image id: 143\n.current image id: 144\n.current image id: 145\n.current image id: 146\n.current image id: 147\n.current image id: 148\n.current image id: 149\n.current image id: 150\n.current image id: 151\n.current image id: 152\n.current image id: 153\n.current image id: 154\n.current image id: 155\n.current image id: 156\n.current image id: 157\n.current image id: 158\n.current image id: 159\n.current image id: 160\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-8331804655557290264_4351_740_4371_740_with_camera_labels.tfrecord\nsegment-8331804655557290264_4351_740_4371_740_with_camera_labels.tfrecord\n.current image id: 161\n.current image id: 162\n.current image id: 163\n.current image id: 164\n.current image id: 165\n.current image id: 166\n.current image id: 167\n.current image id: 168\n.current image id: 169\n.current image id: 170\n.current image id: 171\n.current image id: 172\n.current image id: 173\n.current image id: 174\n.current image id: 175\n.current image id: 176\n.current image id: 177\n.current image id: 178\n.current image id: 179\n.current image id: 180\n.current image id: 181\n.current image id: 182\n.current image id: 183\n.current image id: 184\n.current image id: 185\n.current image id: 186\n.current image id: 187\n.current image id: 188\n.current image id: 189\n.current image id: 190\n.current image id: 191\n.current image id: 192\n.current image id: 193\n.current image id: 194\n.current image id: 195\n.current image id: 196\n.current image id: 197\n.current image id: 198\n.current image id: 199\n.current image id: 200\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-8398516118967750070_3958_000_3978_000_with_camera_labels.tfrecord\nsegment-8398516118967750070_3958_000_3978_000_with_camera_labels.tfrecord\n.current image id: 201\n.current image id: 202\n.current image id: 203\n.current image id: 204\n.current image id: 205\n.current image id: 206\n.current image id: 207\n.current image id: 208\n.current image id: 209\n.current image id: 210\n.current image id: 211\n.current image id: 212\n.current image id: 213\n.current image id: 214\n.current image id: 215\n.current image id: 216\n.current image id: 217\n.current image id: 218\n.current image id: 219\n.current image id: 220\n.current image id: 221\n.current image id: 222\n.current image id: 223\n.current image id: 224\n.current image id: 225\n.current image id: 226\n.current image id: 227\n.current image id: 228\n.current image id: 229\n.current image id: 230\n.current image id: 231\n.current image id: 232\n.current image id: 233\n.current image id: 234\n.current image id: 235\n.current image id: 236\n.current image id: 237\n.current image id: 238\n.current image id: 239\n.current image id: 240\nextracting 
/data/cmpe295-liu/Waymo/validation_0007/segment-8506432817378693815_4860_000_4880_000_with_camera_labels.tfrecord\nsegment-8506432817378693815_4860_000_4880_000_with_camera_labels.tfrecord\n.current image id: 241\n.current image id: 242\n.current image id: 243\n.current image id: 244\n.current image id: 245\n.current image id: 246\n.current image id: 247\n.current image id: 248\n.current image id: 249\n.current image id: 250\n.current image id: 251\n.current image id: 252\n.current image id: 253\n.current image id: 254\n.current image id: 255\n.current image id: 256\n.current image id: 257\n.current image id: 258\n.current image id: 259\n.current image id: 260\n.current image id: 261\n.current image id: 262\n.current image id: 263\n.current image id: 264\n.current image id: 265\n.current image id: 266\n.current image id: 267\n.current image id: 268\n.current image id: 269\n.current image id: 270\n.current image id: 271\n.current image id: 272\n.current image id: 273\n.current image id: 274\n.current image id: 275\n.current image id: 276\n.current image id: 277\n.current image id: 278\n.current image id: 279\n.current image id: 280\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-8679184381783013073_7740_000_7760_000_with_camera_labels.tfrecord\nsegment-8679184381783013073_7740_000_7760_000_with_camera_labels.tfrecord\n.current image id: 281\n.current image id: 282\n.current image id: 283\n.current image id: 284\n.current image id: 285\n.current image id: 286\n.current image id: 287\n.current image id: 288\n.current image id: 289\n.current image id: 290\n.current image id: 291\n.current image id: 292\n.current image id: 293\n.current image id: 294\n.current image id: 295\n.current image id: 296\n.current image id: 297\n.current image id: 298\n.current image id: 299\n.current image id: 300\n.current image id: 301\n.current image id: 302\n.current image id: 303\n.current image id: 304\n.current image id: 305\n.current image id: 306\n.current image id: 307\n.current image id: 308\n.current image id: 309\n.current image id: 310\n.current image id: 311\n.current image id: 312\n.current image id: 313\n.current image id: 314\n.current image id: 315\n.current image id: 316\n.current image id: 317\n.current image id: 318\n.current image id: 319\n.current image id: 320\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-8845277173853189216_3828_530_3848_530_with_camera_labels.tfrecord\nsegment-8845277173853189216_3828_530_3848_530_with_camera_labels.tfrecord\n.current image id: 321\n.current image id: 322\n.current image id: 323\n.current image id: 324\n.current image id: 325\n.current image id: 326\n.current image id: 327\n.current image id: 328\n.current image id: 329\n.current image id: 330\n.current image id: 331\n.current image id: 332\n.current image id: 333\n.current image id: 334\n.current image id: 335\n.current image id: 336\n.current image id: 337\n.current image id: 338\n.current image id: 339\n.current image id: 340\n.current image id: 341\n.current image id: 342\n.current image id: 343\n.current image id: 344\n.current image id: 345\n.current image id: 346\n.current image id: 347\n.current image id: 348\n.current image id: 349\n.current image id: 350\n.current image id: 351\n.current image id: 352\n.current image id: 353\n.current image id: 354\n.current image id: 355\n.current image id: 356\n.current image id: 357\n.current image id: 358\n.current image id: 359\n.current image id: 360\nextracting 
/data/cmpe295-liu/Waymo/validation_0007/segment-8888517708810165484_1549_770_1569_770_with_camera_labels.tfrecord\nsegment-8888517708810165484_1549_770_1569_770_with_camera_labels.tfrecord\n.current image id: 361\n.current image id: 362\n.current image id: 363\n.current image id: 364\n.current image id: 365\n.current image id: 366\n.current image id: 367\n.current image id: 368\n.current image id: 369\n.current image id: 370\n.current image id: 371\n.current image id: 372\n.current image id: 373\n.current image id: 374\n.current image id: 375\n.current image id: 376\n.current image id: 377\n.current image id: 378\n.current image id: 379\n.current image id: 380\n.current image id: 381\n.current image id: 382\n.current image id: 383\n.current image id: 384\n.current image id: 385\n.current image id: 386\n.current image id: 387\n.current image id: 388\n.current image id: 389\n.current image id: 390\n.current image id: 391\n.current image id: 392\n.current image id: 393\n.current image id: 394\n.current image id: 395\n.current image id: 396\n.current image id: 397\n.current image id: 398\n.current image id: 399\n.current image id: 400\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-8907419590259234067_1960_000_1980_000_with_camera_labels.tfrecord\nsegment-8907419590259234067_1960_000_1980_000_with_camera_labels.tfrecord\n.current image id: 401\n.current image id: 402\n.current image id: 403\n.current image id: 404\n.current image id: 405\n.current image id: 406\n.current image id: 407\n.current image id: 408\n.current image id: 409\n.current image id: 410\n.current image id: 411\n.current image id: 412\n.current image id: 413\n.current image id: 414\n.current image id: 415\n.current image id: 416\n.current image id: 417\n.current image id: 418\n.current image id: 419\n.current image id: 420\n.current image id: 421\n.current image id: 422\n.current image id: 423\n.current image id: 424\n.current image id: 425\n.current image id: 426\n.current image id: 427\n.current image id: 428\n.current image id: 429\n.current image id: 430\n.current image id: 431\n.current image id: 432\n.current image id: 433\n.current image id: 434\n.current image id: 435\n.current image id: 436\n.current image id: 437\n.current image id: 438\n.current image id: 439\n.current image id: 440\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-89454214745557131_3160_000_3180_000_with_camera_labels.tfrecord\nsegment-89454214745557131_3160_000_3180_000_with_camera_labels.tfrecord\n.current image id: 441\n.current image id: 442\n.current image id: 443\n.current image id: 444\n.current image id: 445\n.current image id: 446\n.current image id: 447\n.current image id: 448\n.current image id: 449\n.current image id: 450\n.current image id: 451\n.current image id: 452\n.current image id: 453\n.current image id: 454\n.current image id: 455\n.current image id: 456\n.current image id: 457\n.current image id: 458\n.current image id: 459\n.current image id: 460\n.current image id: 461\n.current image id: 462\n.current image id: 463\n.current image id: 464\n.current image id: 465\n.current image id: 466\n.current image id: 467\n.current image id: 468\n.current image id: 469\n.current image id: 470\n.current image id: 471\n.current image id: 472\n.current image id: 473\n.current image id: 474\n.current image id: 475\n.current image id: 476\n.current image id: 477\n.current image id: 478\n.current image id: 479\n.current image id: 480\nextracting 
/data/cmpe295-liu/Waymo/validation_0007/segment-8956556778987472864_3404_790_3424_790_with_camera_labels.tfrecord\nsegment-8956556778987472864_3404_790_3424_790_with_camera_labels.tfrecord\n.current image id: 481\n.current image id: 482\n.current image id: 483\n.current image id: 484\n.current image id: 485\n.current image id: 486\n.current image id: 487\n.current image id: 488\n.current image id: 489\n.current image id: 490\n.current image id: 491\n.current image id: 492\n.current image id: 493\n.current image id: 494\n.current image id: 495\n.current image id: 496\n.current image id: 497\n.current image id: 498\n.current image id: 499\n.current image id: 500\n.current image id: 501\n.current image id: 502\n.current image id: 503\n.current image id: 504\n.current image id: 505\n.current image id: 506\n.current image id: 507\n.current image id: 508\n.current image id: 509\n.current image id: 510\n.current image id: 511\n.current image id: 512\n.current image id: 513\n.current image id: 514\n.current image id: 515\n.current image id: 516\n.current image id: 517\n.current image id: 518\n.current image id: 519\n.current image id: 520\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-902001779062034993_2880_000_2900_000_with_camera_labels.tfrecord\nsegment-902001779062034993_2880_000_2900_000_with_camera_labels.tfrecord\n.current image id: 521\n.current image id: 522\n.current image id: 523\n.current image id: 524\n.current image id: 525\n.current image id: 526\n.current image id: 527\n.current image id: 528\n.current image id: 529\n.current image id: 530\n.current image id: 531\n.current image id: 532\n.current image id: 533\n.current image id: 534\n.current image id: 535\n.current image id: 536\n.current image id: 537\n.current image id: 538\n.current image id: 539\n.current image id: 540\n.current image id: 541\n.current image id: 542\n.current image id: 543\n.current image id: 544\n.current image id: 545\n.current image id: 546\n.current image id: 547\n.current image id: 548\n.current image id: 549\n.current image id: 550\n.current image id: 551\n.current image id: 552\n.current image id: 553\n.current image id: 554\n.current image id: 555\n.current image id: 556\n.current image id: 557\n.current image id: 558\n.current image id: 559\n.current image id: 560\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-9024872035982010942_2578_810_2598_810_with_camera_labels.tfrecord\nsegment-9024872035982010942_2578_810_2598_810_with_camera_labels.tfrecord\n.current image id: 561\n.current image id: 562\n.current image id: 563\n.current image id: 564\n.current image id: 565\n.current image id: 566\n.current image id: 567\n.current image id: 568\n.current image id: 569\n.current image id: 570\n.current image id: 571\n.current image id: 572\n.current image id: 573\n.current image id: 574\n.current image id: 575\n.current image id: 576\n.current image id: 577\n.current image id: 578\n.current image id: 579\n.current image id: 580\n.current image id: 581\n.current image id: 582\n.current image id: 583\n.current image id: 584\n.current image id: 585\n.current image id: 586\n.current image id: 587\n.current image id: 588\n.current image id: 589\n.current image id: 590\n.current image id: 591\n.current image id: 592\n.current image id: 593\n.current image id: 594\n.current image id: 595\n.current image id: 596\n.current image id: 597\n.current image id: 598\n.current image id: 599\n.current image id: 600\nextracting 
/data/cmpe295-liu/Waymo/validation_0007/segment-9041488218266405018_6454_030_6474_030_with_camera_labels.tfrecord\nsegment-9041488218266405018_6454_030_6474_030_with_camera_labels.tfrecord\n.current image id: 601\n.current image id: 602\n.current image id: 603\n.current image id: 604\n.current image id: 605\n.current image id: 606\n.current image id: 607\n.current image id: 608\n.current image id: 609\n.current image id: 610\n.current image id: 611\n.current image id: 612\n.current image id: 613\n.current image id: 614\n.current image id: 615\n.current image id: 616\n.current image id: 617\n.current image id: 618\n.current image id: 619\n.current image id: 620\n.current image id: 621\n.current image id: 622\n.current image id: 623\n.current image id: 624\n.current image id: 625\n.current image id: 626\n.current image id: 627\n.current image id: 628\n.current image id: 629\n.current image id: 630\n.current image id: 631\n.current image id: 632\n.current image id: 633\n.current image id: 634\n.current image id: 635\n.current image id: 636\n.current image id: 637\n.current image id: 638\n.current image id: 639\n.current image id: 640\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-9114112687541091312_1100_000_1120_000_with_camera_labels.tfrecord\nsegment-9114112687541091312_1100_000_1120_000_with_camera_labels.tfrecord\n.current image id: 641\n.current image id: 642\n.current image id: 643\n.current image id: 644\n.current image id: 645\n.current image id: 646\n.current image id: 647\n.current image id: 648\n.current image id: 649\n.current image id: 650\n.current image id: 651\n.current image id: 652\n.current image id: 653\n.current image id: 654\n.current image id: 655\n.current image id: 656\n.current image id: 657\n.current image id: 658\n.current image id: 659\n.current image id: 660\n.current image id: 661\n.current image id: 662\n.current image id: 663\n.current image id: 664\n.current image id: 665\n.current image id: 666\n.current image id: 667\n.current image id: 668\n.current image id: 669\n.current image id: 670\n.current image id: 671\n.current image id: 672\n.current image id: 673\n.current image id: 674\n.current image id: 675\n.current image id: 676\n.current image id: 677\n.current image id: 678\n.current image id: 679\n.current image id: 680\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-9164052963393400298_4692_970_4712_970_with_camera_labels.tfrecord\nsegment-9164052963393400298_4692_970_4712_970_with_camera_labels.tfrecord\n.current image id: 681\n.current image id: 682\n.current image id: 683\n.current image id: 684\n.current image id: 685\n.current image id: 686\n.current image id: 687\n.current image id: 688\n.current image id: 689\n.current image id: 690\n.current image id: 691\n.current image id: 692\n.current image id: 693\n.current image id: 694\n.current image id: 695\n.current image id: 696\n.current image id: 697\n.current image id: 698\n.current image id: 699\n.current image id: 700\n.current image id: 701\n.current image id: 702\n.current image id: 703\n.current image id: 704\n.current image id: 705\n.current image id: 706\n.current image id: 707\n.current image id: 708\n.current image id: 709\n.current image id: 710\n.current image id: 711\n.current image id: 712\n.current image id: 713\n.current image id: 714\n.current image id: 715\n.current image id: 716\n.current image id: 717\n.current image id: 718\n.current image id: 719\n.current image id: 720\nextracting 
/data/cmpe295-liu/Waymo/validation_0007/segment-9231652062943496183_1740_000_1760_000_with_camera_labels.tfrecord\nsegment-9231652062943496183_1740_000_1760_000_with_camera_labels.tfrecord\n.current image id: 721\n.current image id: 722\n.current image id: 723\n.current image id: 724\n.current image id: 725\n.current image id: 726\n.current image id: 727\n.current image id: 728\n.current image id: 729\n.current image id: 730\n.current image id: 731\n.current image id: 732\n.current image id: 733\n.current image id: 734\n.current image id: 735\n.current image id: 736\n.current image id: 737\n.current image id: 738\n.current image id: 739\n.current image id: 740\n.current image id: 741\n.current image id: 742\n.current image id: 743\n.current image id: 744\n.current image id: 745\n.current image id: 746\n.current image id: 747\n.current image id: 748\n.current image id: 749\n.current image id: 750\n.current image id: 751\n.current image id: 752\n.current image id: 753\n.current image id: 754\n.current image id: 755\n.current image id: 756\n.current image id: 757\n.current image id: 758\n.current image id: 759\n.current image id: 760\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-9243656068381062947_1297_428_1317_428_with_camera_labels.tfrecord\nsegment-9243656068381062947_1297_428_1317_428_with_camera_labels.tfrecord\n.current image id: 761\n.current image id: 762\n.current image id: 763\n.current image id: 764\n.current image id: 765\n.current image id: 766\n.current image id: 767\n.current image id: 768\n.current image id: 769\n.current image id: 770\n.current image id: 771\n.current image id: 772\n.current image id: 773\n.current image id: 774\n.current image id: 775\n.current image id: 776\n.current image id: 777\n.current image id: 778\n.current image id: 779\n.current image id: 780\n.current image id: 781\n.current image id: 782\n.current image id: 783\n.current image id: 784\n.current image id: 785\n.current image id: 786\n.current image id: 787\n.current image id: 788\n.current image id: 789\n.current image id: 790\n.current image id: 791\n.current image id: 792\n.current image id: 793\n.current image id: 794\n.current image id: 795\n.current image id: 796\n.current image id: 797\n.current image id: 798\n.current image id: 799\n.current image id: 800\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-9265793588137545201_2981_960_3001_960_with_camera_labels.tfrecord\nsegment-9265793588137545201_2981_960_3001_960_with_camera_labels.tfrecord\n.current image id: 801\n.current image id: 802\n.current image id: 803\n.current image id: 804\n.current image id: 805\n.current image id: 806\n.current image id: 807\n.current image id: 808\n.current image id: 809\n.current image id: 810\n.current image id: 811\n.current image id: 812\n.current image id: 813\n.current image id: 814\n.current image id: 815\n.current image id: 816\n.current image id: 817\n.current image id: 818\n.current image id: 819\n.current image id: 820\n.current image id: 821\n.current image id: 822\n.current image id: 823\n.current image id: 824\n.current image id: 825\n.current image id: 826\n.current image id: 827\n.current image id: 828\n.current image id: 829\n.current image id: 830\n.current image id: 831\n.current image id: 832\n.current image id: 833\n.current image id: 834\n.current image id: 835\n.current image id: 836\n.current image id: 837\n.current image id: 838\n.current image id: 839\n.current image id: 840\nextracting 
/data/cmpe295-liu/Waymo/validation_0007/segment-933621182106051783_4160_000_4180_000_with_camera_labels.tfrecord\nsegment-933621182106051783_4160_000_4180_000_with_camera_labels.tfrecord\n.current image id: 841\n.current image id: 842\n.current image id: 843\n.current image id: 844\n.current image id: 845\n.current image id: 846\n.current image id: 847\n.current image id: 848\n.current image id: 849\n.current image id: 850\n.current image id: 851\n.current image id: 852\n.current image id: 853\n.current image id: 854\n.current image id: 855\n.current image id: 856\n.current image id: 857\n.current image id: 858\n.current image id: 859\n.current image id: 860\n.current image id: 861\n.current image id: 862\n.current image id: 863\n.current image id: 864\n.current image id: 865\n.current image id: 866\n.current image id: 867\n.current image id: 868\n.current image id: 869\n.current image id: 870\n.current image id: 871\n.current image id: 872\n.current image id: 873\n.current image id: 874\n.current image id: 875\n.current image id: 876\n.current image id: 877\n.current image id: 878\n.current image id: 879\n.current image id: 880\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-9443948810903981522_6538_870_6558_870_with_camera_labels.tfrecord\nsegment-9443948810903981522_6538_870_6558_870_with_camera_labels.tfrecord\n.current image id: 881\n.current image id: 882\n.current image id: 883\n.current image id: 884\n.current image id: 885\n.current image id: 886\n.current image id: 887\n.current image id: 888\n.current image id: 889\n.current image id: 890\n.current image id: 891\n.current image id: 892\n.current image id: 893\n.current image id: 894\n.current image id: 895\n.current image id: 896\n.current image id: 897\n.current image id: 898\n.current image id: 899\n.current image id: 900\n.current image id: 901\n.current image id: 902\n.current image id: 903\n.current image id: 904\n.current image id: 905\n.current image id: 906\n.current image id: 907\n.current image id: 908\n.current image id: 909\n.current image id: 910\n.current image id: 911\n.current image id: 912\n.current image id: 913\n.current image id: 914\n.current image id: 915\n.current image id: 916\n.current image id: 917\n.current image id: 918\n.current image id: 919\n.current image id: 920\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-9472420603764812147_850_000_870_000_with_camera_labels.tfrecord\nsegment-9472420603764812147_850_000_870_000_with_camera_labels.tfrecord\n.current image id: 921\n.current image id: 922\n.current image id: 923\n.current image id: 924\n.current image id: 925\n.current image id: 926\n.current image id: 927\n.current image id: 928\n.current image id: 929\n.current image id: 930\n.current image id: 931\n.current image id: 932\n.current image id: 933\n.current image id: 934\n.current image id: 935\n.current image id: 936\n.current image id: 937\n.current image id: 938\n.current image id: 939\n.current image id: 940\n.current image id: 941\n.current image id: 942\n.current image id: 943\n.current image id: 944\n.current image id: 945\n.current image id: 946\n.current image id: 947\n.current image id: 948\n.current image id: 949\n.current image id: 950\n.current image id: 951\n.current image id: 952\n.current image id: 953\n.current image id: 954\n.current image id: 955\n.current image id: 956\n.current image id: 957\n.current image id: 958\n.current image id: 959\n.current image id: 960\nextracting 
/data/cmpe295-liu/Waymo/validation_0007/segment-9579041874842301407_1300_000_1320_000_with_camera_labels.tfrecord\nsegment-9579041874842301407_1300_000_1320_000_with_camera_labels.tfrecord\n.current image id: 961\n.current image id: 962\n.current image id: 963\n.current image id: 964\n.current image id: 965\n.current image id: 966\n.current image id: 967\n.current image id: 968\n.current image id: 969\n.current image id: 970\n.current image id: 971\n.current image id: 972\n.current image id: 973\n.current image id: 974\n.current image id: 975\n.current image id: 976\n.current image id: 977\n.current image id: 978\n.current image id: 979\n.current image id: 980\n.current image id: 981\n.current image id: 982\n.current image id: 983\n.current image id: 984\n.current image id: 985\n.current image id: 986\n.current image id: 987\n.current image id: 988\n.current image id: 989\n.current image id: 990\n.current image id: 991\n.current image id: 992\n.current image id: 993\n.current image id: 994\n.current image id: 995\n.current image id: 996\n.current image id: 997\n.current image id: 998\n.current image id: 999\n.current image id: 1000\nextracting /data/cmpe295-liu/Waymo/validation_0007/segment-967082162553397800_5102_900_5122_900_with_camera_labels.tfrecord\nsegment-967082162553397800_5102_900_5122_900_with_camera_labels.tfrecord\n.current image id: 1001\n.current image id: 1002\n.current image id: 1003\n.current image id: 1004\n.current image id: 1005\n.current image id: 1006\n.current image id: 1007\n.current image id: 1008\n.current image id: 1009\n.current image id: 1010\n.current image id: 1011\n.current image id: 1012\n.current image id: 1013\n.current image id: 1014\n.current image id: 1015\n.current image id: 1016\n.current image id: 1017\n.current image id: 1018\n.current image id: 1019\n.current image id: 1020\n.current image id: 1021\n.current image id: 1022\n.current image id: 1023\n.current image id: 1024\n.current image id: 1025\n.current image id: 1026\n.current image id: 1027\n.current image id: 1028\n.current image id: 1029\n.current image id: 1030\n.current image id: 1031\n.current image id: 1032\n.current image id: 1033\n.current image id: 1034\n.current image id: 1035\n.current image id: 1036\n.current image id: 1037\n.current image id: 1038\n.current image id: 1039\n.current image id: 1040\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13238419657658219864_4630_850_4650_850_with_camera_labels.tfrecord\nsegment-13238419657658219864_4630_850_4650_850_with_camera_labels.tfrecord\n.current image id: 1041\n.current image id: 1042\n.current image id: 1043\n.current image id: 1044\n.current image id: 1045\n.current image id: 1046\n.current image id: 1047\n.current image id: 1048\n.current image id: 1049\n.current image id: 1050\n.current image id: 1051\n.current image id: 1052\n.current image id: 1053\n.current image id: 1054\n.current image id: 1055\n.current image id: 1056\n.current image id: 1057\n.current image id: 1058\n.current image id: 1059\n.current image id: 1060\n.current image id: 1061\n.current image id: 1062\n.current image id: 1063\n.current image id: 1064\n.current image id: 1065\n.current image id: 1066\n.current image id: 1067\n.current image id: 1068\n.current image id: 1069\n.current image id: 1070\n.current image id: 1071\n.current image id: 1072\n.current image id: 1073\n.current image id: 1074\n.current image id: 1075\n.current image id: 1076\n.current image id: 1077\n.current image id: 1078\n.current image id: 1079\n.current image id: 
1080\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13254498462985394788_980_000_1000_000_with_camera_labels.tfrecord\nsegment-13254498462985394788_980_000_1000_000_with_camera_labels.tfrecord\n.current image id: 1081\n.current image id: 1082\n.current image id: 1083\n.current image id: 1084\n.current image id: 1085\n.current image id: 1086\n.current image id: 1087\n.current image id: 1088\n.current image id: 1089\n.current image id: 1090\n.current image id: 1091\n.current image id: 1092\n.current image id: 1093\n.current image id: 1094\n.current image id: 1095\n.current image id: 1096\n.current image id: 1097\n.current image id: 1098\n.current image id: 1099\n.current image id: 1100\n.current image id: 1101\n.current image id: 1102\n.current image id: 1103\n.current image id: 1104\n.current image id: 1105\n.current image id: 1106\n.current image id: 1107\n.current image id: 1108\n.current image id: 1109\n.current image id: 1110\n.current image id: 1111\n.current image id: 1112\n.current image id: 1113\n.current image id: 1114\n.current image id: 1115\n.current image id: 1116\n.current image id: 1117\n.current image id: 1118\n.current image id: 1119\n.current image id: 1120\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13258835835415292197_965_000_985_000_with_camera_labels.tfrecord\nsegment-13258835835415292197_965_000_985_000_with_camera_labels.tfrecord\n.current image id: 1121\n.current image id: 1122\n.current image id: 1123\n.current image id: 1124\n.current image id: 1125\n.current image id: 1126\n.current image id: 1127\n.current image id: 1128\n.current image id: 1129\n.current image id: 1130\n.current image id: 1131\n.current image id: 1132\n.current image id: 1133\n.current image id: 1134\n.current image id: 1135\n.current image id: 1136\n.current image id: 1137\n.current image id: 1138\n.current image id: 1139\n.current image id: 1140\n.current image id: 1141\n.current image id: 1142\n.current image id: 1143\n.current image id: 1144\n.current image id: 1145\n.current image id: 1146\n.current image id: 1147\n.current image id: 1148\n.current image id: 1149\n.current image id: 1150\n.current image id: 1151\n.current image id: 1152\n.current image id: 1153\n.current image id: 1154\n.current image id: 1155\n.current image id: 1156\n.current image id: 1157\n.current image id: 1158\n.current image id: 1159\n.current image id: 1160\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13271285919570645382_5320_000_5340_000_with_camera_labels.tfrecord\nsegment-13271285919570645382_5320_000_5340_000_with_camera_labels.tfrecord\n.current image id: 1161\n.current image id: 1162\n.current image id: 1163\n.current image id: 1164\n.current image id: 1165\n.current image id: 1166\n.current image id: 1167\n.current image id: 1168\n.current image id: 1169\n.current image id: 1170\n.current image id: 1171\n.current image id: 1172\n.current image id: 1173\n.current image id: 1174\n.current image id: 1175\n.current image id: 1176\n.current image id: 1177\n.current image id: 1178\n.current image id: 1179\n.current image id: 1180\n.current image id: 1181\n.current image id: 1182\n.current image id: 1183\n.current image id: 1184\n.current image id: 1185\n.current image id: 1186\n.current image id: 1187\n.current image id: 1188\n.current image id: 1189\n.current image id: 1190\n.current image id: 1191\n.current image id: 1192\n.current image id: 1193\n.current image id: 1194\n.current image id: 1195\n.current image id: 1196\n.current image id: 1197\n.current image id: 
1198\n.current image id: 1199\n.current image id: 1200\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13310437789759009684_2645_000_2665_000_with_camera_labels.tfrecord\nsegment-13310437789759009684_2645_000_2665_000_with_camera_labels.tfrecord\n.current image id: 1201\n.current image id: 1202\n.current image id: 1203\n.current image id: 1204\n.current image id: 1205\n.current image id: 1206\n.current image id: 1207\n.current image id: 1208\n.current image id: 1209\n.current image id: 1210\n.current image id: 1211\n.current image id: 1212\n.current image id: 1213\n.current image id: 1214\n.current image id: 1215\n.current image id: 1216\n.current image id: 1217\n.current image id: 1218\n.current image id: 1219\n.current image id: 1220\n.current image id: 1221\n.current image id: 1222\n.current image id: 1223\n.current image id: 1224\n.current image id: 1225\n.current image id: 1226\n.current image id: 1227\n.current image id: 1228\n.current image id: 1229\n.current image id: 1230\n.current image id: 1231\n.current image id: 1232\n.current image id: 1233\n.current image id: 1234\n.current image id: 1235\n.current image id: 1236\n.current image id: 1237\n.current image id: 1238\n.current image id: 1239\n.current image id: 1240\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13355317306876751663_2200_000_2220_000_with_camera_labels.tfrecord\nsegment-13355317306876751663_2200_000_2220_000_with_camera_labels.tfrecord\n.current image id: 1241\n.current image id: 1242\n.current image id: 1243\n.current image id: 1244\n.current image id: 1245\n.current image id: 1246\n.current image id: 1247\n.current image id: 1248\n.current image id: 1249\n.current image id: 1250\n.current image id: 1251\n.current image id: 1252\n.current image id: 1253\n.current image id: 1254\n.current image id: 1255\n.current image id: 1256\n.current image id: 1257\n.current image id: 1258\n.current image id: 1259\n.current image id: 1260\n.current image id: 1261\n.current image id: 1262\n.current image id: 1263\n.current image id: 1264\n.current image id: 1265\n.current image id: 1266\n.current image id: 1267\n.current image id: 1268\n.current image id: 1269\n.current image id: 1270\n.current image id: 1271\n.current image id: 1272\n.current image id: 1273\n.current image id: 1274\n.current image id: 1275\n.current image id: 1276\n.current image id: 1277\n.current image id: 1278\n.current image id: 1279\n.current image id: 1280\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13363977648531075793_343_000_363_000_with_camera_labels.tfrecord\nsegment-13363977648531075793_343_000_363_000_with_camera_labels.tfrecord\n.current image id: 1281\n.current image id: 1282\n.current image id: 1283\n.current image id: 1284\n.current image id: 1285\n.current image id: 1286\n.current image id: 1287\n.current image id: 1288\n.current image id: 1289\n.current image id: 1290\n.current image id: 1291\n.current image id: 1292\n.current image id: 1293\n.current image id: 1294\n.current image id: 1295\n.current image id: 1296\n.current image id: 1297\n.current image id: 1298\n.current image id: 1299\n.current image id: 1300\n.current image id: 1301\n.current image id: 1302\n.current image id: 1303\n.current image id: 1304\n.current image id: 1305\n.current image id: 1306\n.current image id: 1307\n.current image id: 1308\n.current image id: 1309\n.current image id: 1310\n.current image id: 1311\n.current image id: 1312\n.current image id: 1313\n.current image id: 1314\n.current image id: 1315\n.current image id: 
1316\n.current image id: 1317\n.current image id: 1318\n.current image id: 1319\n.current image id: 1320\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13390791323468600062_6718_570_6738_570_with_camera_labels.tfrecord\nsegment-13390791323468600062_6718_570_6738_570_with_camera_labels.tfrecord\n.current image id: 1321\n.current image id: 1322\n.current image id: 1323\n.current image id: 1324\n.current image id: 1325\n.current image id: 1326\n.current image id: 1327\n.current image id: 1328\n.current image id: 1329\n.current image id: 1330\n.current image id: 1331\n.current image id: 1332\n.current image id: 1333\n.current image id: 1334\n.current image id: 1335\n.current image id: 1336\n.current image id: 1337\n.current image id: 1338\n.current image id: 1339\n.current image id: 1340\n.current image id: 1341\n.current image id: 1342\n.current image id: 1343\n.current image id: 1344\n.current image id: 1345\n.current image id: 1346\n.current image id: 1347\n.current image id: 1348\n.current image id: 1349\n.current image id: 1350\n.current image id: 1351\n.current image id: 1352\n.current image id: 1353\n.current image id: 1354\n.current image id: 1355\n.current image id: 1356\n.current image id: 1357\n.current image id: 1358\n.current image id: 1359\n.current image id: 1360\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13402473631986525162_5700_000_5720_000_with_camera_labels.tfrecord\nsegment-13402473631986525162_5700_000_5720_000_with_camera_labels.tfrecord\n.current image id: 1361\n.current image id: 1362\n.current image id: 1363\n.current image id: 1364\n.current image id: 1365\n.current image id: 1366\n.current image id: 1367\n.current image id: 1368\n.current image id: 1369\n.current image id: 1370\n.current image id: 1371\n.current image id: 1372\n.current image id: 1373\n.current image id: 1374\n.current image id: 1375\n.current image id: 1376\n.current image id: 1377\n.current image id: 1378\n.current image id: 1379\n.current image id: 1380\n.current image id: 1381\n.current image id: 1382\n.current image id: 1383\n.current image id: 1384\n.current image id: 1385\n.current image id: 1386\n.current image id: 1387\n.current image id: 1388\n.current image id: 1389\n.current image id: 1390\n.current image id: 1391\n.current image id: 1392\n.current image id: 1393\n.current image id: 1394\n.current image id: 1395\n.current image id: 1396\n.current image id: 1397\n.current image id: 1398\n.current image id: 1399\n.current image id: 1400\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13476374534576730229_240_000_260_000_with_camera_labels.tfrecord\nsegment-13476374534576730229_240_000_260_000_with_camera_labels.tfrecord\n.current image id: 1401\n.current image id: 1402\n.current image id: 1403\n.current image id: 1404\n.current image id: 1405\n.current image id: 1406\n.current image id: 1407\n.current image id: 1408\n.current image id: 1409\n.current image id: 1410\n.current image id: 1411\n.current image id: 1412\n.current image id: 1413\n.current image id: 1414\n.current image id: 1415\n.current image id: 1416\n.current image id: 1417\n.current image id: 1418\n.current image id: 1419\n.current image id: 1420\n.current image id: 1421\n.current image id: 1422\n.current image id: 1423\n.current image id: 1424\n.current image id: 1425\n.current image id: 1426\n.current image id: 1427\n.current image id: 1428\n.current image id: 1429\n.current image id: 1430\n.current image id: 1431\n.current image id: 1432\n.current image id: 1433\n.current image id: 
1434\n.current image id: 1435\n.current image id: 1436\n.current image id: 1437\n.current image id: 1438\n.current image id: 1439\n.current image id: 1440\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13506499849906169066_120_000_140_000_with_camera_labels.tfrecord\nsegment-13506499849906169066_120_000_140_000_with_camera_labels.tfrecord\n.current image id: 1441\n.current image id: 1442\n.current image id: 1443\n.current image id: 1444\n.current image id: 1445\n.current image id: 1446\n.current image id: 1447\n.current image id: 1448\n.current image id: 1449\n.current image id: 1450\n.current image id: 1451\n.current image id: 1452\n.current image id: 1453\n.current image id: 1454\n.current image id: 1455\n.current image id: 1456\n.current image id: 1457\n.current image id: 1458\n.current image id: 1459\n.current image id: 1460\n.current image id: 1461\n.current image id: 1462\n.current image id: 1463\n.current image id: 1464\n.current image id: 1465\n.current image id: 1466\n.current image id: 1467\n.current image id: 1468\n.current image id: 1469\n.current image id: 1470\n.current image id: 1471\n.current image id: 1472\n.current image id: 1473\n.current image id: 1474\n.current image id: 1475\n.current image id: 1476\n.current image id: 1477\n.current image id: 1478\n.current image id: 1479\n.current image id: 1480\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13517115297021862252_2680_000_2700_000_with_camera_labels.tfrecord\nsegment-13517115297021862252_2680_000_2700_000_with_camera_labels.tfrecord\n.current image id: 1481\n.current image id: 1482\n.current image id: 1483\n.current image id: 1484\n.current image id: 1485\n.current image id: 1486\n.current image id: 1487\n.current image id: 1488\n.current image id: 1489\n.current image id: 1490\n.current image id: 1491\n.current image id: 1492\n.current image id: 1493\n.current image id: 1494\n.current image id: 1495\n.current image id: 1496\n.current image id: 1497\n.current image id: 1498\n.current image id: 1499\n.current image id: 1500\n.current image id: 1501\n.current image id: 1502\n.current image id: 1503\n.current image id: 1504\n.current image id: 1505\n.current image id: 1506\n.current image id: 1507\n.current image id: 1508\n.current image id: 1509\n.current image id: 1510\n.current image id: 1511\n.current image id: 1512\n.current image id: 1513\n.current image id: 1514\n.current image id: 1515\n.current image id: 1516\n.current image id: 1517\n.current image id: 1518\n.current image id: 1519\n.current image id: 1520\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13519445614718437933_4060_000_4080_000_with_camera_labels.tfrecord\nsegment-13519445614718437933_4060_000_4080_000_with_camera_labels.tfrecord\n.current image id: 1521\n.current image id: 1522\n.current image id: 1523\n.current image id: 1524\n.current image id: 1525\n.current image id: 1526\n.current image id: 1527\n.current image id: 1528\n.current image id: 1529\n.current image id: 1530\n.current image id: 1531\n.current image id: 1532\n.current image id: 1533\n.current image id: 1534\n.current image id: 1535\n.current image id: 1536\n.current image id: 1537\n.current image id: 1538\n.current image id: 1539\n.current image id: 1540\n.current image id: 1541\n.current image id: 1542\n.current image id: 1543\n.current image id: 1544\n.current image id: 1545\n.current image id: 1546\n.current image id: 1547\n.current image id: 1548\n.current image id: 1549\n.current image id: 1550\n.current image id: 1551\n.current image id: 
1552\n.current image id: 1553\n.current image id: 1554\n.current image id: 1555\n.current image id: 1556\n.current image id: 1557\n.current image id: 1558\n.current image id: 1559\n.current image id: 1560\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-1352150727715827110_3710_250_3730_250_with_camera_labels.tfrecord\nsegment-1352150727715827110_3710_250_3730_250_with_camera_labels.tfrecord\n.current image id: 1561\n.current image id: 1562\n.current image id: 1563\n.current image id: 1564\n.current image id: 1565\n.current image id: 1566\n.current image id: 1567\n.current image id: 1568\n.current image id: 1569\n.current image id: 1570\n.current image id: 1571\n.current image id: 1572\n.current image id: 1573\n.current image id: 1574\n.current image id: 1575\n.current image id: 1576\n.current image id: 1577\n.current image id: 1578\n.current image id: 1579\n.current image id: 1580\n.current image id: 1581\n.current image id: 1582\n.current image id: 1583\n.current image id: 1584\n.current image id: 1585\n.current image id: 1586\n.current image id: 1587\n.current image id: 1588\n.current image id: 1589\n.current image id: 1590\n.current image id: 1591\n.current image id: 1592\n.current image id: 1593\n.current image id: 1594\n.current image id: 1595\n.current image id: 1596\n.current image id: 1597\n.current image id: 1598\n.current image id: 1599\n.current image id: 1600\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-1357883579772440606_2365_000_2385_000_with_camera_labels.tfrecord\nsegment-1357883579772440606_2365_000_2385_000_with_camera_labels.tfrecord\n.current image id: 1601\n.current image id: 1602\n.current image id: 1603\n.current image id: 1604\n.current image id: 1605\n.current image id: 1606\n.current image id: 1607\n.current image id: 1608\n.current image id: 1609\n.current image id: 1610\n.current image id: 1611\n.current image id: 1612\n.current image id: 1613\n.current image id: 1614\n.current image id: 1615\n.current image id: 1616\n.current image id: 1617\n.current image id: 1618\n.current image id: 1619\n.current image id: 1620\n.current image id: 1621\n.current image id: 1622\n.current image id: 1623\n.current image id: 1624\n.current image id: 1625\n.current image id: 1626\n.current image id: 1627\n.current image id: 1628\n.current image id: 1629\n.current image id: 1630\n.current image id: 1631\n.current image id: 1632\n.current image id: 1633\n.current image id: 1634\n.current image id: 1635\n.current image id: 1636\n.current image id: 1637\n.current image id: 1638\n.current image id: 1639\n.current image id: 1640\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13585809231635721258_1910_770_1930_770_with_camera_labels.tfrecord\nsegment-13585809231635721258_1910_770_1930_770_with_camera_labels.tfrecord\n.current image id: 1641\n.current image id: 1642\n.current image id: 1643\n.current image id: 1644\n.current image id: 1645\n.current image id: 1646\n.current image id: 1647\n.current image id: 1648\n.current image id: 1649\n.current image id: 1650\n.current image id: 1651\n.current image id: 1652\n.current image id: 1653\n.current image id: 1654\n.current image id: 1655\n.current image id: 1656\n.current image id: 1657\n.current image id: 1658\n.current image id: 1659\n.current image id: 1660\n.current image id: 1661\n.current image id: 1662\n.current image id: 1663\n.current image id: 1664\n.current image id: 1665\n.current image id: 1666\n.current image id: 1667\n.current image id: 1668\n.current image id: 1669\n.current image id: 
1670\n.current image id: 1671\n.current image id: 1672\n.current image id: 1673\n.current image id: 1674\n.current image id: 1675\n.current image id: 1676\n.current image id: 1677\n.current image id: 1678\n.current image id: 1679\n.current image id: 1680\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13619063687271391084_1519_680_1539_680_with_camera_labels.tfrecord\nsegment-13619063687271391084_1519_680_1539_680_with_camera_labels.tfrecord\n.current image id: 1681\n.current image id: 1682\n.current image id: 1683\n.current image id: 1684\n.current image id: 1685\n.current image id: 1686\n.current image id: 1687\n.current image id: 1688\n.current image id: 1689\n.current image id: 1690\n.current image id: 1691\n.current image id: 1692\n.current image id: 1693\n.current image id: 1694\n.current image id: 1695\n.current image id: 1696\n.current image id: 1697\n.current image id: 1698\n.current image id: 1699\n.current image id: 1700\n.current image id: 1701\n.current image id: 1702\n.current image id: 1703\n.current image id: 1704\n.current image id: 1705\n.current image id: 1706\n.current image id: 1707\n.current image id: 1708\n.current image id: 1709\n.current image id: 1710\n.current image id: 1711\n.current image id: 1712\n.current image id: 1713\n.current image id: 1714\n.current image id: 1715\n.current image id: 1716\n.current image id: 1717\n.current image id: 1718\n.current image id: 1719\n.current image id: 1720\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13622747960068272448_1678_930_1698_930_with_camera_labels.tfrecord\nsegment-13622747960068272448_1678_930_1698_930_with_camera_labels.tfrecord\n.current image id: 1721\n.current image id: 1722\n.current image id: 1723\n.current image id: 1724\n.current image id: 1725\n.current image id: 1726\n.current image id: 1727\n.current image id: 1728\n.current image id: 1729\n.current image id: 1730\n.current image id: 1731\n.current image id: 1732\n.current image id: 1733\n.current image id: 1734\n.current image id: 1735\n.current image id: 1736\n.current image id: 1737\n.current image id: 1738\n.current image id: 1739\n.current image id: 1740\n.current image id: 1741\n.current image id: 1742\n.current image id: 1743\n.current image id: 1744\n.current image id: 1745\n.current image id: 1746\n.current image id: 1747\n.current image id: 1748\n.current image id: 1749\n.current image id: 1750\n.current image id: 1751\n.current image id: 1752\n.current image id: 1753\n.current image id: 1754\n.current image id: 1755\n.current image id: 1756\n.current image id: 1757\n.current image id: 1758\n.current image id: 1759\n.current image id: 1760\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13629997314951696814_1207_000_1227_000_with_camera_labels.tfrecord\nsegment-13629997314951696814_1207_000_1227_000_with_camera_labels.tfrecord\n.current image id: 1761\n.current image id: 1762\n.current image id: 1763\n.current image id: 1764\n.current image id: 1765\n.current image id: 1766\n.current image id: 1767\n.current image id: 1768\n.current image id: 1769\n.current image id: 1770\n.current image id: 1771\n.current image id: 1772\n.current image id: 1773\n.current image id: 1774\n.current image id: 1775\n.current image id: 1776\n.current image id: 1777\n.current image id: 1778\n.current image id: 1779\n.current image id: 1780\n.current image id: 1781\n.current image id: 1782\n.current image id: 1783\n.current image id: 1784\n.current image id: 1785\n.current image id: 1786\n.current image id: 1787\n.current image id: 
1788\n.current image id: 1789\n.current image id: 1790\n.current image id: 1791\n.current image id: 1792\n.current image id: 1793\n.current image id: 1794\n.current image id: 1795\n.current image id: 1796\n.current image id: 1797\n.current image id: 1798\n.current image id: 1799\n.current image id: 1800\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13667377240304615855_500_000_520_000_with_camera_labels.tfrecord\nsegment-13667377240304615855_500_000_520_000_with_camera_labels.tfrecord\n.current image id: 1801\n.current image id: 1802\n.current image id: 1803\n.current image id: 1804\n.current image id: 1805\n.current image id: 1806\n.current image id: 1807\n.current image id: 1808\n.current image id: 1809\n.current image id: 1810\n.current image id: 1811\n.current image id: 1812\n.current image id: 1813\n.current image id: 1814\n.current image id: 1815\n.current image id: 1816\n.current image id: 1817\n.current image id: 1818\n.current image id: 1819\n.current image id: 1820\n.current image id: 1821\n.current image id: 1822\n.current image id: 1823\n.current image id: 1824\n.current image id: 1825\n.current image id: 1826\n.current image id: 1827\n.current image id: 1828\n.current image id: 1829\n.current image id: 1830\n.current image id: 1831\n.current image id: 1832\n.current image id: 1833\n.current image id: 1834\n.current image id: 1835\n.current image id: 1836\n.current image id: 1837\n.current image id: 1838\n.current image id: 1839\n.current image id: 1840\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13679757109245957439_4167_170_4187_170_with_camera_labels.tfrecord\nsegment-13679757109245957439_4167_170_4187_170_with_camera_labels.tfrecord\n.current image id: 1841\n.current image id: 1842\n.current image id: 1843\n.current image id: 1844\n.current image id: 1845\n.current image id: 1846\n.current image id: 1847\n.current image id: 1848\n.current image id: 1849\n.current image id: 1850\n.current image id: 1851\n.current image id: 1852\n.current image id: 1853\n.current image id: 1854\n.current image id: 1855\n.current image id: 1856\n.current image id: 1857\n.current image id: 1858\n.current image id: 1859\n.current image id: 1860\n.current image id: 1861\n.current image id: 1862\n.current image id: 1863\n.current image id: 1864\n.current image id: 1865\n.current image id: 1866\n.current image id: 1867\n.current image id: 1868\n.current image id: 1869\n.current image id: 1870\n.current image id: 1871\n.current image id: 1872\n.current image id: 1873\n.current image id: 1874\n.current image id: 1875\n.current image id: 1876\n.current image id: 1877\n.current image id: 1878\n.current image id: 1879\n.current image id: 1880\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13731697468004921673_4920_000_4940_000_with_camera_labels.tfrecord\nsegment-13731697468004921673_4920_000_4940_000_with_camera_labels.tfrecord\n.current image id: 1881\n.current image id: 1882\n.current image id: 1883\n.current image id: 1884\n.current image id: 1885\n.current image id: 1886\n.current image id: 1887\n.current image id: 1888\n.current image id: 1889\n.current image id: 1890\n.current image id: 1891\n.current image id: 1892\n.current image id: 1893\n.current image id: 1894\n.current image id: 1895\n.current image id: 1896\n.current image id: 1897\n.current image id: 1898\n.current image id: 1899\n.current image id: 1900\n.current image id: 1901\n.current image id: 1902\n.current image id: 1903\n.current image id: 1904\n.current image id: 1905\n.current image id: 
1906\n.current image id: 1907\n.current image id: 1908\n.current image id: 1909\n.current image id: 1910\n.current image id: 1911\n.current image id: 1912\n.current image id: 1913\n.current image id: 1914\n.current image id: 1915\n.current image id: 1916\n.current image id: 1917\n.current image id: 1918\n.current image id: 1919\n.current image id: 1920\nextracting /data/cmpe295-liu/Waymo/training_0006/segment-13807633218762107566_6625_000_6645_000_with_camera_labels.tfrecord\nsegment-13807633218762107566_6625_000_6645_000_with_camera_labels.tfrecord\n.current image id: 1921\n.current image id: 1922\n.current image id: 1923\n.current image id: 1924\n.current image id: 1925\n.current image id: 1926\n.current image id: 1927\n.current image id: 1928\n.current image id: 1929\n.current image id: 1930\n.current image id: 1931\n.current image id: 1932\n.current image id: 1933\n.current image id: 1934\n.current image id: 1935\n.current image id: 1936\n.current image id: 1937\n.current image id: 1938\n.current image id: 1939\n.current image id: 1940\n.current image id: 1941\n.current image id: 1942\n.current image id: 1943\n.current image id: 1944\n.current image id: 1945\n.current image id: 1946\n.current image id: 1947\n.current image id: 1948\n.current image id: 1949\n.current image id: 1950\n.current image id: 1951\n.current image id: 1952\n.current image id: 1953\n.current image id: 1954\n.current image id: 1955\n.current image id: 1956\n.current image id: 1957\n.current image id: 1958\n.current image id: 1959\n.current image id: 1960\n"
],
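[
"# Hedged sanity check added for illustration (not part of the original run): before kicking off\n# the full extraction below, list the Waymo folders under PATH and count the tfrecord segments\n# in each. Only the dataset root PATH is taken from this notebook; the folder-name prefixes are\n# assumptions based on the folder lists used in the next cell.\nimport os\nfrom glob import glob\n\nPATH = '/data/cmpe295-liu/Waymo'\nfor folder in sorted(os.listdir(PATH)):\n    full = os.path.join(PATH, folder)\n    if os.path.isdir(full) and folder.startswith(('training_', 'validation_')):\n        print(folder, len(glob(os.path.join(full, '*.tfrecord'))), 'tfrecord files')",
"_____no_output_____"
],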
[
"PATH='/data/cmpe295-liu/Waymo'\n#folderslist = [\"training_0031\",\"training_0030\"]#,\"training_0029\",\"training_0028\",\"training_0027\",\"training_0026\"]\nfolderslist = [\"training_0031\",\"training_0030\",\"training_0029\",\"training_0028\",\"training_0027\",\"training_0026\",\"training_0025\", \"training_0024\", \"training_0023\",\"training_0022\",\"training_0021\",\"training_0020\",\"training_0019\",\"training_0018\",\"training_0017\",\"training_0016\",\"training_0015\",\"training_0014\",\"training_0013\",\"training_0012\",\"training_0011\",\"training_0010\",\"training_0009\",\"training_0008\",\"training_0007\",\"training_0006\",\"training_0005\",\"training_0004\",\"training_0003\",\"training_0002\",\"training_0001\",\"training_0000\"]\ntfrecord_files = [path for x in folderslist for path in glob(os.path.join(PATH, x, \"*.tfrecord\"))]\nprint(len(tfrecord_files))#total number of tfrecord files\n\nout_dir='/data/cmpe295-liu/Waymo/WaymoCOCO/Training'\nstep=5 #downsample\nout_dir = Path(out_dir)\nextract_segment_allfrontcamera(PATH,folderslist, out_dir, step)\n\nfolderslist = validation_folders = [\"validation_0000\",\"validation_0001\",\"validation_0002\",\"validation_0003\",\"validation_0004\",\"validation_0005\", \"validation_0006\", \"validation_0007\"]\ntfrecord_files = [path for x in folderslist for path in glob(os.path.join(PATH, x, \"*.tfrecord\"))]\nprint(len(tfrecord_files))#total number of tfrecord files\nout_dir='/data/cmpe295-liu/Waymo/WaymoCOCO/Validation'\nstep=5 #downsample\nout_dir = Path(out_dir)\nextract_segment_allfrontcamera(PATH,folderslist, out_dir, step)\n\n#extract_segment_frontcamera(tfrecord_files, out_dir, step)",
"784\nFolder name: training_0031\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0031/segment-9288629315134424745_4360_000_4380_000_with_camera_labels.tfrecord\nsegment-9288629315134424745_4360_000_4380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9295161125729168140_1270_000_1290_000_with_camera_labels.tfrecord\nsegment-9295161125729168140_1270_000_1290_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9311322119128915594_5285_000_5305_000_with_camera_labels.tfrecord\nsegment-9311322119128915594_5285_000_5305_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9320169289978396279_1040_000_1060_000_with_camera_labels.tfrecord\nsegment-9320169289978396279_1040_000_1060_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9325580606626376787_4509_140_4529_140_with_camera_labels.tfrecord\nsegment-9325580606626376787_4509_140_4529_140_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9334364225104959137_661_000_681_000_with_camera_labels.tfrecord\nsegment-9334364225104959137_661_000_681_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9350921499281634194_2403_251_2423_251_with_camera_labels.tfrecord\nsegment-9350921499281634194_2403_251_2423_251_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9385013624094020582_2547_650_2567_650_with_camera_labels.tfrecord\nsegment-9385013624094020582_2547_650_2567_650_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9415086857375798767_4760_000_4780_000_with_camera_labels.tfrecord\nsegment-9415086857375798767_4760_000_4780_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9465500459680839281_1100_000_1120_000_with_camera_labels.tfrecord\nsegment-9465500459680839281_1100_000_1120_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9509506420470671704_4049_100_4069_100_with_camera_labels.tfrecord\nsegment-9509506420470671704_4049_100_4069_100_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9521653920958139982_940_000_960_000_with_camera_labels.tfrecord\nsegment-9521653920958139982_940_000_960_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9529958888589376527_640_000_660_000_with_camera_labels.tfrecord\nsegment-9529958888589376527_640_000_660_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9547911055204230158_1567_950_1587_950_with_camera_labels.tfrecord\nsegment-9547911055204230158_1567_950_1587_950_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0031/segment-9568394837328971633_466_365_486_365_with_camera_labels.tfrecord\nsegment-9568394837328971633_466_365_486_365_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9653249092275997647_980_000_1000_000_with_camera_labels.tfrecord\nsegment-9653249092275997647_980_000_1000_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9654060644653474834_3905_000_3925_000_with_camera_labels.tfrecord\nsegment-9654060644653474834_3905_000_3925_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9696413700515401320_1690_000_1710_000_with_camera_labels.tfrecord\nsegment-9696413700515401320_1690_000_1710_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-972142630887801133_642_740_662_740_with_camera_labels.tfrecord\nsegment-972142630887801133_642_740_662_740_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9747453753779078631_940_000_960_000_with_camera_labels.tfrecord\nsegment-9747453753779078631_940_000_960_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9758342966297863572_875_230_895_230_with_camera_labels.tfrecord\nsegment-9758342966297863572_875_230_895_230_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9820553434532681355_2820_000_2840_000_with_camera_labels.tfrecord\nsegment-9820553434532681355_2820_000_2840_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9907794657177651763_1126_570_1146_570_with_camera_labels.tfrecord\nsegment-9907794657177651763_1126_570_1146_570_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-990914685337955114_980_000_1000_000_with_camera_labels.tfrecord\nsegment-990914685337955114_980_000_1000_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0031/segment-9985243312780923024_3049_720_3069_720_with_camera_labels.tfrecord\nsegment-9985243312780923024_3049_720_3069_720_with_camera_labels.tfrecord\n........................................Folder name: training_0030\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0030/segment-8722413665055769182_2840_000_2860_000_with_camera_labels.tfrecord\nsegment-8722413665055769182_2840_000_2860_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-8745106945249251942_1207_000_1227_000_with_camera_labels.tfrecord\nsegment-8745106945249251942_1207_000_1227_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-8763126149209091146_1843_320_1863_320_with_camera_labels.tfrecord\nsegment-8763126149209091146_1843_320_1863_320_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0030/segment-8796914080594559459_4284_170_4304_170_with_camera_labels.tfrecord\nsegment-8796914080594559459_4284_170_4304_170_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-8806931859563747931_1160_000_1180_000_with_camera_labels.tfrecord\nsegment-8806931859563747931_1160_000_1180_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-8811210064692949185_3066_770_3086_770_with_camera_labels.tfrecord\nsegment-8811210064692949185_3066_770_3086_770_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-8822503619482926605_1080_000_1100_000_with_camera_labels.tfrecord\nsegment-8822503619482926605_1080_000_1100_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-8859409804103625626_2760_000_2780_000_with_camera_labels.tfrecord\nsegment-8859409804103625626_2760_000_2780_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-8938046348067069210_3800_000_3820_000_with_camera_labels.tfrecord\nsegment-8938046348067069210_3800_000_3820_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-8965112222692085704_4860_000_4880_000_with_camera_labels.tfrecord\nsegment-8965112222692085704_4860_000_4880_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-898816942644052013_20_000_40_000_with_camera_labels.tfrecord\nsegment-898816942644052013_20_000_40_000_with_camera_labels.tfrecord\n......................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9015546800913584551_4431_180_4451_180_with_camera_labels.tfrecord\nsegment-9015546800913584551_4431_180_4451_180_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9016865488168499365_4780_000_4800_000_with_camera_labels.tfrecord\nsegment-9016865488168499365_4780_000_4800_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9058545212382992974_5236_200_5256_200_with_camera_labels.tfrecord\nsegment-9058545212382992974_5236_200_5256_200_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9062286840846668802_31_000_51_000_with_camera_labels.tfrecord\nsegment-9062286840846668802_31_000_51_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9105380625923157726_4420_000_4440_000_with_camera_labels.tfrecord\nsegment-9105380625923157726_4420_000_4440_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9110125340505914899_380_000_400_000_with_camera_labels.tfrecord\nsegment-9110125340505914899_380_000_400_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0030/segment-9123867659877264673_3569_950_3589_950_with_camera_labels.tfrecord\nsegment-9123867659877264673_3569_950_3589_950_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-912496333665446669_1680_000_1700_000_with_camera_labels.tfrecord\nsegment-912496333665446669_1680_000_1700_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-913274067754539885_913_000_933_000_with_camera_labels.tfrecord\nsegment-913274067754539885_913_000_933_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9142545919543484617_86_000_106_000_with_camera_labels.tfrecord\nsegment-9142545919543484617_86_000_106_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-915935412356143375_1740_030_1760_030_with_camera_labels.tfrecord\nsegment-915935412356143375_1740_030_1760_030_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9175749307679169289_5933_260_5953_260_with_camera_labels.tfrecord\nsegment-9175749307679169289_5933_260_5953_260_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9179922063516210200_157_000_177_000_with_camera_labels.tfrecord\nsegment-9179922063516210200_157_000_177_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0030/segment-9250355398701464051_4166_132_4186_132_with_camera_labels.tfrecord\nsegment-9250355398701464051_4166_132_4186_132_with_camera_labels.tfrecord\n........................................Folder name: training_0029\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0029/segment-8099457465580871094_4764_380_4784_380_with_camera_labels.tfrecord\nsegment-8099457465580871094_4764_380_4784_380_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8120716761799622510_862_120_882_120_with_camera_labels.tfrecord\nsegment-8120716761799622510_862_120_882_120_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8123909110537564436_7220_000_7240_000_with_camera_labels.tfrecord\nsegment-8123909110537564436_7220_000_7240_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8126606965364870152_985_090_1005_090_with_camera_labels.tfrecord\nsegment-8126606965364870152_985_090_1005_090_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8148053503558757176_4240_000_4260_000_with_camera_labels.tfrecord\nsegment-8148053503558757176_4240_000_4260_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8158128948493708501_7477_230_7497_230_with_camera_labels.tfrecord\nsegment-8158128948493708501_7477_230_7497_230_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0029/segment-8207498713503609786_3005_450_3025_450_with_camera_labels.tfrecord\nsegment-8207498713503609786_3005_450_3025_450_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8222208340265444449_1400_000_1420_000_with_camera_labels.tfrecord\nsegment-8222208340265444449_1400_000_1420_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8323028393459455521_2105_000_2125_000_with_camera_labels.tfrecord\nsegment-8323028393459455521_2105_000_2125_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8327447186504415549_5200_000_5220_000_with_camera_labels.tfrecord\nsegment-8327447186504415549_5200_000_5220_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8345535260120974350_1980_000_2000_000_with_camera_labels.tfrecord\nsegment-8345535260120974350_1980_000_2000_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8399876466981146110_2560_000_2580_000_with_camera_labels.tfrecord\nsegment-8399876466981146110_2560_000_2580_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8424573439186068308_3460_000_3480_000_with_camera_labels.tfrecord\nsegment-8424573439186068308_3460_000_3480_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8454755173123314088_3202_000_3222_000_with_camera_labels.tfrecord\nsegment-8454755173123314088_3202_000_3222_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8487809726845917818_4779_870_4799_870_with_camera_labels.tfrecord\nsegment-8487809726845917818_4779_870_4799_870_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8494653877777333091_540_000_560_000_with_camera_labels.tfrecord\nsegment-8494653877777333091_540_000_560_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8513241054672631743_115_960_135_960_with_camera_labels.tfrecord\nsegment-8513241054672631743_115_960_135_960_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8543158371164842559_4131_530_4151_530_with_camera_labels.tfrecord\nsegment-8543158371164842559_4131_530_4151_530_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-857746300435138193_1869_000_1889_000_with_camera_labels.tfrecord\nsegment-857746300435138193_1869_000_1889_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8582923946352460474_2360_000_2380_000_with_camera_labels.tfrecord\nsegment-8582923946352460474_2360_000_2380_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0029/segment-8603916601243187272_540_000_560_000_with_camera_labels.tfrecord\nsegment-8603916601243187272_540_000_560_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8633296376655504176_514_000_534_000_with_camera_labels.tfrecord\nsegment-8633296376655504176_514_000_534_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8659567063494726263_2480_000_2500_000_with_camera_labels.tfrecord\nsegment-8659567063494726263_2480_000_2500_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8663006751916427679_1520_000_1540_000_with_camera_labels.tfrecord\nsegment-8663006751916427679_1520_000_1540_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0029/segment-8700094808505895018_7272_488_7292_488_with_camera_labels.tfrecord\nsegment-8700094808505895018_7272_488_7292_488_with_camera_labels.tfrecord\n........................................Folder name: training_0028\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0028/segment-759208896257112298_184_000_204_000_with_camera_labels.tfrecord\nsegment-759208896257112298_184_000_204_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7643597152739318064_3979_000_3999_000_with_camera_labels.tfrecord\nsegment-7643597152739318064_3979_000_3999_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7670103006580549715_360_000_380_000_with_camera_labels.tfrecord\nsegment-7670103006580549715_360_000_380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7727809428114700355_2960_000_2980_000_with_camera_labels.tfrecord\nsegment-7727809428114700355_2960_000_2980_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7741361323303179462_1230_310_1250_310_with_camera_labels.tfrecord\nsegment-7741361323303179462_1230_310_1250_310_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7761658966964621355_1000_000_1020_000_with_camera_labels.tfrecord\nsegment-7761658966964621355_1000_000_1020_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7768517933263896280_1120_000_1140_000_with_camera_labels.tfrecord\nsegment-7768517933263896280_1120_000_1140_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7799671367768576481_260_000_280_000_with_camera_labels.tfrecord\nsegment-7799671367768576481_260_000_280_000_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7837172662136597262_1140_000_1160_000_with_camera_labels.tfrecord\nsegment-7837172662136597262_1140_000_1160_000_with_camera_labels.tfrecord\n......................................extracting 
/data/cmpe295-liu/Waymo/training_0028/segment-7850521592343484282_4576_090_4596_090_with_camera_labels.tfrecord\nsegment-7850521592343484282_4576_090_4596_090_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7861168750216313148_1305_290_1325_290_with_camera_labels.tfrecord\nsegment-7861168750216313148_1305_290_1325_290_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-786582060300383668_2944_060_2964_060_with_camera_labels.tfrecord\nsegment-786582060300383668_2944_060_2964_060_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7885161619764516373_289_280_309_280_with_camera_labels.tfrecord\nsegment-7885161619764516373_289_280_309_280_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7890808800227629086_6162_700_6182_700_with_camera_labels.tfrecord\nsegment-7890808800227629086_6162_700_6182_700_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7912728502266478772_1202_200_1222_200_with_camera_labels.tfrecord\nsegment-7912728502266478772_1202_200_1222_200_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7920326980177504058_2454_310_2474_310_with_camera_labels.tfrecord\nsegment-7920326980177504058_2454_310_2474_310_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7921369793217703814_1060_000_1080_000_with_camera_labels.tfrecord\nsegment-7921369793217703814_1060_000_1080_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7934693355186591404_73_000_93_000_with_camera_labels.tfrecord\nsegment-7934693355186591404_73_000_93_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7940496892864900543_4783_540_4803_540_with_camera_labels.tfrecord\nsegment-7940496892864900543_4783_540_4803_540_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7950869827763684964_8685_000_8705_000_with_camera_labels.tfrecord\nsegment-7950869827763684964_8685_000_8705_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7996500550445322129_2333_304_2353_304_with_camera_labels.tfrecord\nsegment-7996500550445322129_2333_304_2353_304_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-7999729608823422351_1483_600_1503_600_with_camera_labels.tfrecord\nsegment-7999729608823422351_1483_600_1503_600_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-8031709558315183746_491_220_511_220_with_camera_labels.tfrecord\nsegment-8031709558315183746_491_220_511_220_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0028/segment-80599353855279550_2604_480_2624_480_with_camera_labels.tfrecord\nsegment-80599353855279550_2604_480_2624_480_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0028/segment-809159138284604331_3355_840_3375_840_with_camera_labels.tfrecord\nsegment-809159138284604331_3355_840_3375_840_with_camera_labels.tfrecord\n........................................Folder name: training_0027\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0027/segment-7000927478052605119_1052_330_1072_330_with_camera_labels.tfrecord\nsegment-7000927478052605119_1052_330_1072_330_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7007702792982559244_4400_000_4420_000_with_camera_labels.tfrecord\nsegment-7007702792982559244_4400_000_4420_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7019385869759035132_4270_850_4290_850_with_camera_labels.tfrecord\nsegment-7019385869759035132_4270_850_4290_850_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7038362761309539946_4207_130_4227_130_with_camera_labels.tfrecord\nsegment-7038362761309539946_4207_130_4227_130_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7089765864827567005_1020_000_1040_000_with_camera_labels.tfrecord\nsegment-7089765864827567005_1020_000_1040_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7101099554331311287_5320_000_5340_000_with_camera_labels.tfrecord\nsegment-7101099554331311287_5320_000_5340_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7120839653809570957_1060_000_1080_000_with_camera_labels.tfrecord\nsegment-7120839653809570957_1060_000_1080_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7187601925763611197_4384_300_4404_300_with_camera_labels.tfrecord\nsegment-7187601925763611197_4384_300_4404_300_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7189996641300362130_3360_000_3380_000_with_camera_labels.tfrecord\nsegment-7189996641300362130_3360_000_3380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7239123081683545077_4044_370_4064_370_with_camera_labels.tfrecord\nsegment-7239123081683545077_4044_370_4064_370_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7290499689576448085_3960_000_3980_000_with_camera_labels.tfrecord\nsegment-7290499689576448085_3960_000_3980_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7313718849795510302_280_000_300_000_with_camera_labels.tfrecord\nsegment-7313718849795510302_280_000_300_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0027/segment-7324192826315818756_620_000_640_000_with_camera_labels.tfrecord\nsegment-7324192826315818756_620_000_640_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7331965392247645851_1005_940_1025_940_with_camera_labels.tfrecord\nsegment-7331965392247645851_1005_940_1025_940_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7344536712079322768_1360_000_1380_000_with_camera_labels.tfrecord\nsegment-7344536712079322768_1360_000_1380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7373597180370847864_6020_000_6040_000_with_camera_labels.tfrecord\nsegment-7373597180370847864_6020_000_6040_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-744006317457557752_2080_000_2100_000_with_camera_labels.tfrecord\nsegment-744006317457557752_2080_000_2100_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7440437175443450101_94_000_114_000_with_camera_labels.tfrecord\nsegment-7440437175443450101_94_000_114_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7447927974619745860_820_000_840_000_with_camera_labels.tfrecord\nsegment-7447927974619745860_820_000_840_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7458568461947999548_700_000_720_000_with_camera_labels.tfrecord\nsegment-7458568461947999548_700_000_720_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7466751345307077932_585_000_605_000_with_camera_labels.tfrecord\nsegment-7466751345307077932_585_000_605_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7517545172000568481_2325_000_2345_000_with_camera_labels.tfrecord\nsegment-7517545172000568481_2325_000_2345_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7543690094688232666_4945_350_4965_350_with_camera_labels.tfrecord\nsegment-7543690094688232666_4945_350_4965_350_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7554208726220851641_380_000_400_000_with_camera_labels.tfrecord\nsegment-7554208726220851641_380_000_400_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0027/segment-7566697458525030390_1440_000_1460_000_with_camera_labels.tfrecord\nsegment-7566697458525030390_1440_000_1460_000_with_camera_labels.tfrecord\n........................................Folder name: training_0026\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0026/segment-6390847454531723238_6000_000_6020_000_with_camera_labels.tfrecord\nsegment-6390847454531723238_6000_000_6020_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0026/segment-6410495600874495447_5287_500_5307_500_with_camera_labels.tfrecord\nsegment-6410495600874495447_5287_500_5307_500_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6417523992887712896_1180_000_1200_000_with_camera_labels.tfrecord\nsegment-6417523992887712896_1180_000_1200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6433401807220119698_4560_000_4580_000_with_camera_labels.tfrecord\nsegment-6433401807220119698_4560_000_4580_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6456165750159303330_1770_080_1790_080_with_camera_labels.tfrecord\nsegment-6456165750159303330_1770_080_1790_080_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6559997992780479765_1039_000_1059_000_with_camera_labels.tfrecord\nsegment-6559997992780479765_1039_000_1059_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6561206763751799279_2348_600_2368_600_with_camera_labels.tfrecord\nsegment-6561206763751799279_2348_600_2368_600_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6606076833441976341_1340_000_1360_000_with_camera_labels.tfrecord\nsegment-6606076833441976341_1340_000_1360_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6625150143263637936_780_000_800_000_with_camera_labels.tfrecord\nsegment-6625150143263637936_780_000_800_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6638427309837298695_220_000_240_000_with_camera_labels.tfrecord\nsegment-6638427309837298695_220_000_240_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6674547510992884047_1560_000_1580_000_with_camera_labels.tfrecord\nsegment-6674547510992884047_1560_000_1580_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6694593639447385226_1040_000_1060_000_with_camera_labels.tfrecord\nsegment-6694593639447385226_1040_000_1060_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6722602826685649765_2280_000_2300_000_with_camera_labels.tfrecord\nsegment-6722602826685649765_2280_000_2300_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6740694556948402155_3040_000_3060_000_with_camera_labels.tfrecord\nsegment-6740694556948402155_3040_000_3060_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6742105013468660925_3645_000_3665_000_with_camera_labels.tfrecord\nsegment-6742105013468660925_3645_000_3665_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0026/segment-6763005717101083473_3880_000_3900_000_with_camera_labels.tfrecord\nsegment-6763005717101083473_3880_000_3900_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6771783338734577946_6105_840_6125_840_with_camera_labels.tfrecord\nsegment-6771783338734577946_6105_840_6125_840_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6771922013310347577_4249_290_4269_290_with_camera_labels.tfrecord\nsegment-6771922013310347577_4249_290_4269_290_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6791933003490312185_2607_000_2627_000_with_camera_labels.tfrecord\nsegment-6791933003490312185_2607_000_2627_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6792191642931213648_1522_000_1542_000_with_camera_labels.tfrecord\nsegment-6792191642931213648_1522_000_1542_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6799055159715949496_2503_000_2523_000_with_camera_labels.tfrecord\nsegment-6799055159715949496_2503_000_2523_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6813611334239274394_535_000_555_000_with_camera_labels.tfrecord\nsegment-6813611334239274394_535_000_555_000_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6814918034011049245_134_170_154_170_with_camera_labels.tfrecord\nsegment-6814918034011049245_134_170_154_170_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6904827860701329567_960_000_980_000_with_camera_labels.tfrecord\nsegment-6904827860701329567_960_000_980_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0026/segment-6935841224766931310_2770_310_2790_310_with_camera_labels.tfrecord\nsegment-6935841224766931310_2770_310_2790_310_with_camera_labels.tfrecord\n........................................Folder name: training_0025\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0025/segment-5870668058140631588_1180_000_1200_000_with_camera_labels.tfrecord\nsegment-5870668058140631588_1180_000_1200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-5871373218498789285_3360_000_3380_000_with_camera_labels.tfrecord\nsegment-5871373218498789285_3360_000_3380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-5973788713714489548_2179_770_2199_770_with_camera_labels.tfrecord\nsegment-5973788713714489548_2179_770_2199_770_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6037403592521973757_3260_000_3280_000_with_camera_labels.tfrecord\nsegment-6037403592521973757_3260_000_3280_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0025/segment-6038200663843287458_283_000_303_000_with_camera_labels.tfrecord\nsegment-6038200663843287458_283_000_303_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6104545334635651714_2780_000_2800_000_with_camera_labels.tfrecord\nsegment-6104545334635651714_2780_000_2800_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6128311556082453976_2520_000_2540_000_with_camera_labels.tfrecord\nsegment-6128311556082453976_2520_000_2540_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6142170920525844857_2080_000_2100_000_with_camera_labels.tfrecord\nsegment-6142170920525844857_2080_000_2100_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6148393791213790916_4960_000_4980_000_with_camera_labels.tfrecord\nsegment-6148393791213790916_4960_000_4980_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6150191934425217908_2747_800_2767_800_with_camera_labels.tfrecord\nsegment-6150191934425217908_2747_800_2767_800_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-616184888931414205_2020_000_2040_000_with_camera_labels.tfrecord\nsegment-616184888931414205_2020_000_2040_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6172160122069514875_6866_560_6886_560_with_camera_labels.tfrecord\nsegment-6172160122069514875_6866_560_6886_560_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6177474146670383260_4200_000_4220_000_with_camera_labels.tfrecord\nsegment-6177474146670383260_4200_000_4220_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6193696614129429757_2420_000_2440_000_with_camera_labels.tfrecord\nsegment-6193696614129429757_2420_000_2440_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6207195415812436731_805_000_825_000_with_camera_labels.tfrecord\nsegment-6207195415812436731_805_000_825_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6229371035421550389_2220_000_2240_000_with_camera_labels.tfrecord\nsegment-6229371035421550389_2220_000_2240_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6234738900256277070_320_000_340_000_with_camera_labels.tfrecord\nsegment-6234738900256277070_320_000_340_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6242822583398487496_73_000_93_000_with_camera_labels.tfrecord\nsegment-6242822583398487496_73_000_93_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0025/segment-6280779486809627179_760_000_780_000_with_camera_labels.tfrecord\nsegment-6280779486809627179_760_000_780_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6290334089075942139_1340_000_1360_000_with_camera_labels.tfrecord\nsegment-6290334089075942139_1340_000_1360_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6303332643743862144_5600_000_5620_000_with_camera_labels.tfrecord\nsegment-6303332643743862144_5600_000_5620_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-634378055350569306_280_000_300_000_with_camera_labels.tfrecord\nsegment-634378055350569306_280_000_300_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6350707596465488265_2393_900_2413_900_with_camera_labels.tfrecord\nsegment-6350707596465488265_2393_900_2413_900_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6378340771722906187_1120_000_1140_000_with_camera_labels.tfrecord\nsegment-6378340771722906187_1120_000_1140_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0025/segment-6386303598440879824_1520_000_1540_000_with_camera_labels.tfrecord\nsegment-6386303598440879824_1520_000_1540_000_with_camera_labels.tfrecord\n........................................Folder name: training_0024\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0024/segment-5446766520699850364_157_000_177_000_with_camera_labels.tfrecord\nsegment-5446766520699850364_157_000_177_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5451442719480728410_5660_000_5680_000_with_camera_labels.tfrecord\nsegment-5451442719480728410_5660_000_5680_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5458962501360340931_3140_000_3160_000_with_camera_labels.tfrecord\nsegment-5458962501360340931_3140_000_3160_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5459113827443493510_380_000_400_000_with_camera_labels.tfrecord\nsegment-5459113827443493510_380_000_400_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5468483805452515080_4540_000_4560_000_with_camera_labels.tfrecord\nsegment-5468483805452515080_4540_000_4560_000_with_camera_labels.tfrecord\n...................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5495302100265783181_80_000_100_000_with_camera_labels.tfrecord\nsegment-5495302100265783181_80_000_100_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-550171902340535682_2640_000_2660_000_with_camera_labels.tfrecord\nsegment-550171902340535682_2640_000_2660_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0024/segment-5525943706123287091_4100_000_4120_000_with_camera_labels.tfrecord\nsegment-5525943706123287091_4100_000_4120_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5526948896847934178_1039_000_1059_000_with_camera_labels.tfrecord\nsegment-5526948896847934178_1039_000_1059_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5572351910320677279_3980_000_4000_000_with_camera_labels.tfrecord\nsegment-5572351910320677279_3980_000_4000_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5576800480528461086_1000_000_1020_000_with_camera_labels.tfrecord\nsegment-5576800480528461086_1000_000_1020_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5592790652933523081_667_770_687_770_with_camera_labels.tfrecord\nsegment-5592790652933523081_667_770_687_770_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5602237689147924753_760_000_780_000_with_camera_labels.tfrecord\nsegment-5602237689147924753_760_000_780_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5614471637960666943_6955_675_6975_675_with_camera_labels.tfrecord\nsegment-5614471637960666943_6955_675_6975_675_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5691636094473163491_6889_470_6909_470_with_camera_labels.tfrecord\nsegment-5691636094473163491_6889_470_6909_470_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5707035891877485758_2573_000_2593_000_with_camera_labels.tfrecord\nsegment-5707035891877485758_2573_000_2593_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-57132587708734824_1020_000_1040_000_with_camera_labels.tfrecord\nsegment-57132587708734824_1020_000_1040_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5718418936283106890_1200_000_1220_000_with_camera_labels.tfrecord\nsegment-5718418936283106890_1200_000_1220_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5731414711882954246_1990_250_2010_250_with_camera_labels.tfrecord\nsegment-5731414711882954246_1990_250_2010_250_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-574762194520856849_1660_000_1680_000_with_camera_labels.tfrecord\nsegment-574762194520856849_1660_000_1680_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-575209926587730008_3880_000_3900_000_with_camera_labels.tfrecord\nsegment-575209926587730008_3880_000_3900_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0024/segment-580580436928611523_792_500_812_500_with_camera_labels.tfrecord\nsegment-580580436928611523_792_500_812_500_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5835049423600303130_180_000_200_000_with_camera_labels.tfrecord\nsegment-5835049423600303130_180_000_200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5846229052615948000_2120_000_2140_000_with_camera_labels.tfrecord\nsegment-5846229052615948000_2120_000_2140_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0024/segment-5861181219697109969_1732_000_1752_000_with_camera_labels.tfrecord\nsegment-5861181219697109969_1732_000_1752_000_with_camera_labels.tfrecord\n........................................Folder name: training_0023\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0023/segment-4960194482476803293_4575_960_4595_960_with_camera_labels.tfrecord\nsegment-4960194482476803293_4575_960_4595_960_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-4967385055468388261_720_000_740_000_with_camera_labels.tfrecord\nsegment-4967385055468388261_720_000_740_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-4971817041565280127_780_500_800_500_with_camera_labels.tfrecord\nsegment-4971817041565280127_780_500_800_500_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-4986495627634617319_2980_000_3000_000_with_camera_labels.tfrecord\nsegment-4986495627634617319_2980_000_3000_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5005815668926224220_2194_330_2214_330_with_camera_labels.tfrecord\nsegment-5005815668926224220_2194_330_2214_330_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5065468048522043429_2080_000_2100_000_with_camera_labels.tfrecord\nsegment-5065468048522043429_2080_000_2100_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5072733804607719382_5807_570_5827_570_with_camera_labels.tfrecord\nsegment-5072733804607719382_5807_570_5827_570_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5076950993715916459_3265_000_3285_000_with_camera_labels.tfrecord\nsegment-5076950993715916459_3265_000_3285_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5083516879091912247_3600_000_3620_000_with_camera_labels.tfrecord\nsegment-5083516879091912247_3600_000_3620_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5100136784230856773_2517_300_2537_300_with_camera_labels.tfrecord\nsegment-5100136784230856773_2517_300_2537_300_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0023/segment-5121298817582693383_4882_000_4902_000_with_camera_labels.tfrecord\nsegment-5121298817582693383_4882_000_4902_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5127440443725457056_2921_340_2941_340_with_camera_labels.tfrecord\nsegment-5127440443725457056_2921_340_2941_340_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5129792222840846899_2145_000_2165_000_with_camera_labels.tfrecord\nsegment-5129792222840846899_2145_000_2165_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5144634012371033641_920_000_940_000_with_camera_labels.tfrecord\nsegment-5144634012371033641_920_000_940_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-514687114615102902_6240_000_6260_000_with_camera_labels.tfrecord\nsegment-514687114615102902_6240_000_6260_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5189543236187113739_2929_000_2949_000_with_camera_labels.tfrecord\nsegment-5189543236187113739_2929_000_2949_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5200186706748209867_80_000_100_000_with_camera_labels.tfrecord\nsegment-5200186706748209867_80_000_100_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5214491533551928383_1918_780_1938_780_with_camera_labels.tfrecord\nsegment-5214491533551928383_1918_780_1938_780_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5215905243049326497_20_000_40_000_with_camera_labels.tfrecord\nsegment-5215905243049326497_20_000_40_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5222336716599194110_8940_000_8960_000_with_camera_labels.tfrecord\nsegment-5222336716599194110_8940_000_8960_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5268267801500934740_2160_000_2180_000_with_camera_labels.tfrecord\nsegment-5268267801500934740_2160_000_2180_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5328596138024684667_2180_000_2200_000_with_camera_labels.tfrecord\nsegment-5328596138024684667_2180_000_2200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5349843997395815699_1040_000_1060_000_with_camera_labels.tfrecord\nsegment-5349843997395815699_1040_000_1060_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0023/segment-5423607012724948145_3900_000_3920_000_with_camera_labels.tfrecord\nsegment-5423607012724948145_3900_000_3920_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0023/segment-54293441958058219_2335_200_2355_200_with_camera_labels.tfrecord\nsegment-54293441958058219_2335_200_2355_200_with_camera_labels.tfrecord\n........................................Folder name: training_0022\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0022/segment-4447423683538547117_536_022_556_022_with_camera_labels.tfrecord\nsegment-4447423683538547117_536_022_556_022_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4457475194088194008_3100_000_3120_000_with_camera_labels.tfrecord\nsegment-4457475194088194008_3100_000_3120_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4458730539804900192_535_000_555_000_with_camera_labels.tfrecord\nsegment-4458730539804900192_535_000_555_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4468278022208380281_455_820_475_820_with_camera_labels.tfrecord\nsegment-4468278022208380281_455_820_475_820_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4487677815262010875_4940_000_4960_000_with_camera_labels.tfrecord\nsegment-4487677815262010875_4940_000_4960_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4537254579383578009_3820_000_3840_000_with_camera_labels.tfrecord\nsegment-4537254579383578009_3820_000_3840_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4546515828974914709_922_040_942_040_with_camera_labels.tfrecord\nsegment-4546515828974914709_922_040_942_040_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-454855130179746819_4580_000_4600_000_with_camera_labels.tfrecord\nsegment-454855130179746819_4580_000_4600_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4575961016807404107_880_000_900_000_with_camera_labels.tfrecord\nsegment-4575961016807404107_880_000_900_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4604173119409817302_2820_000_2840_000_with_camera_labels.tfrecord\nsegment-4604173119409817302_2820_000_2840_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4641822195449131669_380_000_400_000_with_camera_labels.tfrecord\nsegment-4641822195449131669_380_000_400_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4655005625668154134_560_000_580_000_with_camera_labels.tfrecord\nsegment-4655005625668154134_560_000_580_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4672649953433758614_2700_000_2720_000_with_camera_labels.tfrecord\nsegment-4672649953433758614_2700_000_2720_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0022/segment-4702302448560822815_927_380_947_380_with_camera_labels.tfrecord\nsegment-4702302448560822815_927_380_947_380_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4723255145958809564_741_350_761_350_with_camera_labels.tfrecord\nsegment-4723255145958809564_741_350_761_350_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4733704239941053266_960_000_980_000_with_camera_labels.tfrecord\nsegment-4733704239941053266_960_000_980_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-473735159277431842_630_095_650_095_with_camera_labels.tfrecord\nsegment-473735159277431842_630_095_650_095_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4747171543583769736_425_544_445_544_with_camera_labels.tfrecord\nsegment-4747171543583769736_425_544_445_544_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4781039348168995891_280_000_300_000_with_camera_labels.tfrecord\nsegment-4781039348168995891_280_000_300_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4784689467343773295_1700_000_1720_000_with_camera_labels.tfrecord\nsegment-4784689467343773295_1700_000_1720_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4808842546020773462_2310_000_2330_000_with_camera_labels.tfrecord\nsegment-4808842546020773462_2310_000_2330_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4880464427217074989_4680_000_4700_000_with_camera_labels.tfrecord\nsegment-4880464427217074989_4680_000_4700_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4898453812993984151_199_000_219_000_with_camera_labels.tfrecord\nsegment-4898453812993984151_199_000_219_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4916527289027259239_5180_000_5200_000_with_camera_labels.tfrecord\nsegment-4916527289027259239_5180_000_5200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0022/segment-4931036732523207946_10755_600_10775_600_with_camera_labels.tfrecord\nsegment-4931036732523207946_10755_600_10775_600_with_camera_labels.tfrecord\n........................................Folder name: training_0021\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0021/segment-3966447614090524826_320_000_340_000_with_camera_labels.tfrecord\nsegment-3966447614090524826_320_000_340_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-3988957004231180266_5566_500_5586_500_with_camera_labels.tfrecord\nsegment-3988957004231180266_5566_500_5586_500_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0021/segment-4013698638848102906_7757_240_7777_240_with_camera_labels.tfrecord\nsegment-4013698638848102906_7757_240_7777_240_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4017824591066644473_3000_000_3020_000_with_camera_labels.tfrecord\nsegment-4017824591066644473_3000_000_3020_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4058410353286511411_3980_000_4000_000_with_camera_labels.tfrecord\nsegment-4058410353286511411_3980_000_4000_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4114454788208078028_660_000_680_000_with_camera_labels.tfrecord\nsegment-4114454788208078028_660_000_680_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4114548607314119333_2780_000_2800_000_with_camera_labels.tfrecord\nsegment-4114548607314119333_2780_000_2800_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4138614210962611770_2459_360_2479_360_with_camera_labels.tfrecord\nsegment-4138614210962611770_2459_360_2479_360_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4164064449185492261_400_000_420_000_with_camera_labels.tfrecord\nsegment-4164064449185492261_400_000_420_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4167304237516228486_5720_000_5740_000_with_camera_labels.tfrecord\nsegment-4167304237516228486_5720_000_5740_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4191035366928259953_1732_708_1752_708_with_camera_labels.tfrecord\nsegment-4191035366928259953_1732_708_1752_708_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4266984864799709257_720_000_740_000_with_camera_labels.tfrecord\nsegment-4266984864799709257_720_000_740_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4277109506993614243_1648_000_1668_000_with_camera_labels.tfrecord\nsegment-4277109506993614243_1648_000_1668_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4292360793125812833_3080_000_3100_000_with_camera_labels.tfrecord\nsegment-4292360793125812833_3080_000_3100_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4295449061847708198_3769_000_3789_000_with_camera_labels.tfrecord\nsegment-4295449061847708198_3769_000_3789_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4305539677513798673_2200_000_2220_000_with_camera_labels.tfrecord\nsegment-4305539677513798673_2200_000_2220_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0021/segment-4323857429732097807_1005_000_1025_000_with_camera_labels.tfrecord\nsegment-4323857429732097807_1005_000_1025_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4324227028219935045_1520_000_1540_000_with_camera_labels.tfrecord\nsegment-4324227028219935045_1520_000_1540_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4337887720320812223_1857_930_1877_930_with_camera_labels.tfrecord\nsegment-4337887720320812223_1857_930_1877_930_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4348478035380346090_1000_000_1020_000_with_camera_labels.tfrecord\nsegment-4348478035380346090_1000_000_1020_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4380865029019172232_480_000_500_000_with_camera_labels.tfrecord\nsegment-4380865029019172232_480_000_500_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4384676699661561426_1662_670_1682_670_with_camera_labels.tfrecord\nsegment-4384676699661561426_1662_670_1682_670_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4392459808686681511_5006_200_5026_200_with_camera_labels.tfrecord\nsegment-4392459808686681511_5006_200_5026_200_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4414235478445376689_2020_000_2040_000_with_camera_labels.tfrecord\nsegment-4414235478445376689_2020_000_2040_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0021/segment-4427374597960783085_4168_000_4188_000_with_camera_labels.tfrecord\nsegment-4427374597960783085_4168_000_4188_000_with_camera_labels.tfrecord\n........................................Folder name: training_0020\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0020/segment-3451017128488170637_5280_000_5300_000_with_camera_labels.tfrecord\nsegment-3451017128488170637_5280_000_5300_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3461228720457810721_4511_120_4531_120_with_camera_labels.tfrecord\nsegment-3461228720457810721_4511_120_4531_120_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3461811179177118163_1161_000_1181_000_with_camera_labels.tfrecord\nsegment-3461811179177118163_1161_000_1181_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3490810581309970603_11125_000_11145_000_with_camera_labels.tfrecord\nsegment-3490810581309970603_11125_000_11145_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3504776317009340435_6920_000_6940_000_with_camera_labels.tfrecord\nsegment-3504776317009340435_6920_000_6940_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0020/segment-3543045673995761051_460_000_480_000_with_camera_labels.tfrecord\nsegment-3543045673995761051_460_000_480_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3555170065073130842_451_000_471_000_with_camera_labels.tfrecord\nsegment-3555170065073130842_451_000_471_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3563349510410371738_7465_000_7485_000_with_camera_labels.tfrecord\nsegment-3563349510410371738_7465_000_7485_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3584210979358667442_2880_000_2900_000_with_camera_labels.tfrecord\nsegment-3584210979358667442_2880_000_2900_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3591015878717398163_1381_280_1401_280_with_camera_labels.tfrecord\nsegment-3591015878717398163_1381_280_1401_280_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3617043125954612277_240_000_260_000_with_camera_labels.tfrecord\nsegment-3617043125954612277_240_000_260_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3635081602482786801_900_000_920_000_with_camera_labels.tfrecord\nsegment-3635081602482786801_900_000_920_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3644145307034257093_3000_400_3020_400_with_camera_labels.tfrecord\nsegment-3644145307034257093_3000_400_3020_400_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3657581213864582252_340_000_360_000_with_camera_labels.tfrecord\nsegment-3657581213864582252_340_000_360_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3665329186611360820_2329_010_2349_010_with_camera_labels.tfrecord\nsegment-3665329186611360820_2329_010_2349_010_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3698685523057788592_4303_630_4323_630_with_camera_labels.tfrecord\nsegment-3698685523057788592_4303_630_4323_630_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3711598698808133144_2060_000_2080_000_with_camera_labels.tfrecord\nsegment-3711598698808133144_2060_000_2080_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-384975055665199088_4480_000_4500_000_with_camera_labels.tfrecord\nsegment-384975055665199088_4480_000_4500_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3872781118550194423_3654_670_3674_670_with_camera_labels.tfrecord\nsegment-3872781118550194423_3654_670_3674_670_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0020/segment-3894883757914505116_1840_000_1860_000_with_camera_labels.tfrecord\nsegment-3894883757914505116_1840_000_1860_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3908622028474148527_3480_000_3500_000_with_camera_labels.tfrecord\nsegment-3908622028474148527_3480_000_3500_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3911646355261329044_580_000_600_000_with_camera_labels.tfrecord\nsegment-3911646355261329044_580_000_600_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3919438171935923501_280_000_300_000_with_camera_labels.tfrecord\nsegment-3919438171935923501_280_000_300_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3927294516406132977_792_740_812_740_with_camera_labels.tfrecord\nsegment-3927294516406132977_792_740_812_740_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0020/segment-3928923269768424494_3060_000_3080_000_with_camera_labels.tfrecord\nsegment-3928923269768424494_3060_000_3080_000_with_camera_labels.tfrecord\n........................................Folder name: training_0019\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0019/segment-3078075798413050298_890_370_910_370_with_camera_labels.tfrecord\nsegment-3078075798413050298_890_370_910_370_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3112630089558008159_7280_000_7300_000_with_camera_labels.tfrecord\nsegment-3112630089558008159_7280_000_7300_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3132521568089292927_2220_000_2240_000_with_camera_labels.tfrecord\nsegment-3132521568089292927_2220_000_2240_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3132641021038352938_1937_160_1957_160_with_camera_labels.tfrecord\nsegment-3132641021038352938_1937_160_1957_160_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3154510051521049916_7000_000_7020_000_with_camera_labels.tfrecord\nsegment-3154510051521049916_7000_000_7020_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3156155872654629090_2474_780_2494_780_with_camera_labels.tfrecord\nsegment-3156155872654629090_2474_780_2494_780_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3194871563717679715_4980_000_5000_000_with_camera_labels.tfrecord\nsegment-3194871563717679715_4980_000_5000_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3195159706851203049_2763_790_2783_790_with_camera_labels.tfrecord\nsegment-3195159706851203049_2763_790_2783_790_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0019/segment-3220249619779692045_505_000_525_000_with_camera_labels.tfrecord\nsegment-3220249619779692045_505_000_525_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3224923476345749285_4480_000_4500_000_with_camera_labels.tfrecord\nsegment-3224923476345749285_4480_000_4500_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3247914894323111613_1820_000_1840_000_with_camera_labels.tfrecord\nsegment-3247914894323111613_1820_000_1840_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3270384983482134275_3220_000_3240_000_with_camera_labels.tfrecord\nsegment-3270384983482134275_3220_000_3240_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3276301746183196185_436_450_456_450_with_camera_labels.tfrecord\nsegment-3276301746183196185_436_450_456_450_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-33101359476901423_6720_910_6740_910_with_camera_labels.tfrecord\nsegment-33101359476901423_6720_910_6740_910_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3338044015505973232_1804_490_1824_490_with_camera_labels.tfrecord\nsegment-3338044015505973232_1804_490_1824_490_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3363533094480067586_1580_000_1600_000_with_camera_labels.tfrecord\nsegment-3363533094480067586_1580_000_1600_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3364861183015885008_1720_000_1740_000_with_camera_labels.tfrecord\nsegment-3364861183015885008_1720_000_1740_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3375636961848927657_1942_000_1962_000_with_camera_labels.tfrecord\nsegment-3375636961848927657_1942_000_1962_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3385534893506316900_4252_000_4272_000_with_camera_labels.tfrecord\nsegment-3385534893506316900_4252_000_4272_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3390120876390766963_2300_000_2320_000_with_camera_labels.tfrecord\nsegment-3390120876390766963_2300_000_2320_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3417928259332148981_7018_550_7038_550_with_camera_labels.tfrecord\nsegment-3417928259332148981_7018_550_7038_550_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3418007171190630157_3585_530_3605_530_with_camera_labels.tfrecord\nsegment-3418007171190630157_3585_530_3605_530_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0019/segment-3425716115468765803_977_756_997_756_with_camera_labels.tfrecord\nsegment-3425716115468765803_977_756_997_756_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3437741670889149170_1411_550_1431_550_with_camera_labels.tfrecord\nsegment-3437741670889149170_1411_550_1431_550_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0019/segment-3441838785578020259_1300_000_1320_000_with_camera_labels.tfrecord\nsegment-3441838785578020259_1300_000_1320_000_with_camera_labels.tfrecord\n........................................Folder name: training_0018\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0018/segment-2660301763960988190_3742_580_3762_580_with_camera_labels.tfrecord\nsegment-2660301763960988190_3742_580_3762_580_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2670674176367830809_180_000_200_000_with_camera_labels.tfrecord\nsegment-2670674176367830809_180_000_200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2681180680221317256_1144_000_1164_000_with_camera_labels.tfrecord\nsegment-2681180680221317256_1144_000_1164_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-268278198029493143_1400_000_1420_000_with_camera_labels.tfrecord\nsegment-268278198029493143_1400_000_1420_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2684088316387726629_180_000_200_000_with_camera_labels.tfrecord\nsegment-2684088316387726629_180_000_200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2692887320656885771_2480_000_2500_000_with_camera_labels.tfrecord\nsegment-2692887320656885771_2480_000_2500_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2698953791490960477_2660_000_2680_000_with_camera_labels.tfrecord\nsegment-2698953791490960477_2660_000_2680_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2711351338963414257_1360_000_1380_000_with_camera_labels.tfrecord\nsegment-2711351338963414257_1360_000_1380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2739239662326039445_5890_320_5910_320_with_camera_labels.tfrecord\nsegment-2739239662326039445_5890_320_5910_320_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2752216004511723012_260_000_280_000_with_camera_labels.tfrecord\nsegment-2752216004511723012_260_000_280_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2791302832590946720_1900_000_1920_000_with_camera_labels.tfrecord\nsegment-2791302832590946720_1900_000_1920_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0018/segment-2863984611797967753_3200_000_3220_000_with_camera_labels.tfrecord\nsegment-2863984611797967753_3200_000_3220_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2895681525868621979_480_000_500_000_with_camera_labels.tfrecord\nsegment-2895681525868621979_480_000_500_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2899357195020129288_3723_163_3743_163_with_camera_labels.tfrecord\nsegment-2899357195020129288_3723_163_3743_163_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2899997824484054994_320_000_340_000_with_camera_labels.tfrecord\nsegment-2899997824484054994_320_000_340_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2919021496271356282_2300_000_2320_000_with_camera_labels.tfrecord\nsegment-2919021496271356282_2300_000_2320_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2922309829144504838_1840_000_1860_000_with_camera_labels.tfrecord\nsegment-2922309829144504838_1840_000_1860_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2935377810101940676_300_000_320_000_with_camera_labels.tfrecord\nsegment-2935377810101940676_300_000_320_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2961247865039433386_920_000_940_000_with_camera_labels.tfrecord\nsegment-2961247865039433386_920_000_940_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2974991090366925955_4924_000_4944_000_with_camera_labels.tfrecord\nsegment-2974991090366925955_4924_000_4944_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-2975249314261309142_6540_000_6560_000_with_camera_labels.tfrecord\nsegment-2975249314261309142_6540_000_6560_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-3002379261592154728_2256_691_2276_691_with_camera_labels.tfrecord\nsegment-3002379261592154728_2256_691_2276_691_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-3031519073799366723_1140_000_1160_000_with_camera_labels.tfrecord\nsegment-3031519073799366723_1140_000_1160_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-3060057659029579482_420_000_440_000_with_camera_labels.tfrecord\nsegment-3060057659029579482_420_000_440_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0018/segment-3068522656378006650_540_000_560_000_with_camera_labels.tfrecord\nsegment-3068522656378006650_540_000_560_000_with_camera_labels.tfrecord\n........................................Folder name: training_0017\nNum of tfrecord file: 25\nextracting 
/data/cmpe295-liu/Waymo/training_0017/segment-2206505463279484253_476_189_496_189_with_camera_labels.tfrecord\nsegment-2206505463279484253_476_189_496_189_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2209007584159204953_2200_000_2220_000_with_camera_labels.tfrecord\nsegment-2209007584159204953_2200_000_2220_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2217043033232259972_2720_000_2740_000_with_camera_labels.tfrecord\nsegment-2217043033232259972_2720_000_2740_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2224716024428969146_1420_000_1440_000_with_camera_labels.tfrecord\nsegment-2224716024428969146_1420_000_1440_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2259324582958830057_3767_030_3787_030_with_camera_labels.tfrecord\nsegment-2259324582958830057_3767_030_3787_030_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2265177645248606981_2340_000_2360_000_with_camera_labels.tfrecord\nsegment-2265177645248606981_2340_000_2360_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2273990870973289942_4009_680_4029_680_with_camera_labels.tfrecord\nsegment-2273990870973289942_4009_680_4029_680_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2323851946122476774_7240_000_7260_000_with_camera_labels.tfrecord\nsegment-2323851946122476774_7240_000_7260_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2330686858362435307_603_210_623_210_with_camera_labels.tfrecord\nsegment-2330686858362435307_603_210_623_210_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2336233899565126347_1180_000_1200_000_with_camera_labels.tfrecord\nsegment-2336233899565126347_1180_000_1200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2342300897175196823_1179_360_1199_360_with_camera_labels.tfrecord\nsegment-2342300897175196823_1179_360_1199_360_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2400780041057579262_660_000_680_000_with_camera_labels.tfrecord\nsegment-2400780041057579262_660_000_680_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2415873247906962761_5460_000_5480_000_with_camera_labels.tfrecord\nsegment-2415873247906962761_5460_000_5480_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2475623575993725245_400_000_420_000_with_camera_labels.tfrecord\nsegment-2475623575993725245_400_000_420_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0017/segment-2508530288521370100_3385_660_3405_660_with_camera_labels.tfrecord\nsegment-2508530288521370100_3385_660_3405_660_with_camera_labels.tfrecord\n......................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2547899409721197155_1380_000_1400_000_with_camera_labels.tfrecord\nsegment-2547899409721197155_1380_000_1400_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2555987917096562599_1620_000_1640_000_with_camera_labels.tfrecord\nsegment-2555987917096562599_1620_000_1640_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2570264768774616538_860_000_880_000_with_camera_labels.tfrecord\nsegment-2570264768774616538_860_000_880_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2577669988012459365_1640_000_1660_000_with_camera_labels.tfrecord\nsegment-2577669988012459365_1640_000_1660_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2581599794006798586_2440_000_2460_000_with_camera_labels.tfrecord\nsegment-2581599794006798586_2440_000_2460_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2590213596097851051_460_000_480_000_with_camera_labels.tfrecord\nsegment-2590213596097851051_460_000_480_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2598465433001774398_740_670_760_670_with_camera_labels.tfrecord\nsegment-2598465433001774398_740_670_760_670_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2607999228439188545_2960_000_2980_000_with_camera_labels.tfrecord\nsegment-2607999228439188545_2960_000_2980_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2618605158242502527_1860_000_1880_000_with_camera_labels.tfrecord\nsegment-2618605158242502527_1860_000_1880_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0017/segment-2656110181316327570_940_000_960_000_with_camera_labels.tfrecord\nsegment-2656110181316327570_940_000_960_000_with_camera_labels.tfrecord\n........................................Folder name: training_0016\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0016/segment-1863454917318776530_1040_000_1060_000_with_camera_labels.tfrecord\nsegment-1863454917318776530_1040_000_1060_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0016/segment-1887497421568128425_94_000_114_000_with_camera_labels.tfrecord\nsegment-1887497421568128425_94_000_114_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0016/segment-1891390218766838725_4980_000_5000_000_with_camera_labels.tfrecord\nsegment-1891390218766838725_4980_000_5000_000_with_camera_labels.tfrecord\n........................................extracting 
[tfrecord extraction log, condensed. The cell walks the Waymo dataset root /data/cmpe295-liu/Waymo folder by folder; for each segment-*_with_camera_labels.tfrecord it prints "extracting <full path>", then the file name on its own line, then a run of progress dots before moving to the next file. Folders covered in this portion of the log, with their reported tfrecord counts:
training_0016 (continued), training_0015: 25, training_0014: 25, training_0013: 25, training_0012: 25, training_0011: 25, training_0010: 25, training_0009: 25, training_0008: 25, training_0007: 13, training_0006: 23, training_0005: 25, training_0004: 25 (log continues).]
/data/cmpe295-liu/Waymo/training_0004/segment-12321865437129862911_3480_000_3500_000_with_camera_labels.tfrecord\nsegment-12321865437129862911_3480_000_3500_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12337317986514501583_5346_260_5366_260_with_camera_labels.tfrecord\nsegment-12337317986514501583_5346_260_5366_260_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12339284075576056695_1920_000_1940_000_with_camera_labels.tfrecord\nsegment-12339284075576056695_1920_000_1940_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12365808668068790137_2920_000_2940_000_with_camera_labels.tfrecord\nsegment-12365808668068790137_2920_000_2940_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12473470522729755785_4000_000_4020_000_with_camera_labels.tfrecord\nsegment-12473470522729755785_4000_000_4020_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12505030131868863688_1740_000_1760_000_with_camera_labels.tfrecord\nsegment-12505030131868863688_1740_000_1760_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12511696717465549299_4209_630_4229_630_with_camera_labels.tfrecord\nsegment-12511696717465549299_4209_630_4229_630_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12551320916264703416_1420_000_1440_000_with_camera_labels.tfrecord\nsegment-12551320916264703416_1420_000_1440_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-1255991971750044803_1700_000_1720_000_with_camera_labels.tfrecord\nsegment-1255991971750044803_1700_000_1720_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12566399510596872945_2078_320_2098_320_with_camera_labels.tfrecord\nsegment-12566399510596872945_2078_320_2098_320_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12581809607914381746_1219_547_1239_547_with_camera_labels.tfrecord\nsegment-12581809607914381746_1219_547_1239_547_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-1265122081809781363_2879_530_2899_530_with_camera_labels.tfrecord\nsegment-1265122081809781363_2879_530_2899_530_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12681651284932598380_3585_280_3605_280_with_camera_labels.tfrecord\nsegment-12681651284932598380_3585_280_3605_280_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0004/segment-12844373518178303651_2140_000_2160_000_with_camera_labels.tfrecord\nsegment-12844373518178303651_2140_000_2160_000_with_camera_labels.tfrecord\n........................................Folder name: training_0003\nNum of tfrecord file: 25\nextracting 
/data/cmpe295-liu/Waymo/training_0003/segment-1172406780360799916_1660_000_1680_000_with_camera_labels.tfrecord\nsegment-1172406780360799916_1660_000_1680_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-1146261869236413282_1680_000_1700_000_with_camera_labels.tfrecord\nsegment-1146261869236413282_1680_000_1700_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11486225968269855324_92_000_112_000_with_camera_labels.tfrecord\nsegment-11486225968269855324_92_000_112_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11489533038039664633_4820_000_4840_000_with_camera_labels.tfrecord\nsegment-11489533038039664633_4820_000_4840_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11566385337103696871_5740_000_5760_000_with_camera_labels.tfrecord\nsegment-11566385337103696871_5740_000_5760_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11588853832866011756_2184_462_2204_462_with_camera_labels.tfrecord\nsegment-11588853832866011756_2184_462_2204_462_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11623618970700582562_2840_367_2860_367_with_camera_labels.tfrecord\nsegment-11623618970700582562_2840_367_2860_367_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11674150664140226235_680_000_700_000_with_camera_labels.tfrecord\nsegment-11674150664140226235_680_000_700_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11718898130355901268_2300_000_2320_000_with_camera_labels.tfrecord\nsegment-11718898130355901268_2300_000_2320_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11799592541704458019_9828_750_9848_750_with_camera_labels.tfrecord\nsegment-11799592541704458019_9828_750_9848_750_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11839652018869852123_2565_000_2585_000_with_camera_labels.tfrecord\nsegment-11839652018869852123_2565_000_2585_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11846396154240966170_3540_000_3560_000_with_camera_labels.tfrecord\nsegment-11846396154240966170_3540_000_3560_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11847506886204460250_1640_000_1660_000_with_camera_labels.tfrecord\nsegment-11847506886204460250_1640_000_1660_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-1191788760630624072_3880_000_3900_000_with_camera_labels.tfrecord\nsegment-1191788760630624072_3880_000_3900_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0003/segment-11918003324473417938_1400_000_1420_000_with_camera_labels.tfrecord\nsegment-11918003324473417938_1400_000_1420_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11925224148023145510_1040_000_1060_000_with_camera_labels.tfrecord\nsegment-11925224148023145510_1040_000_1060_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11928449532664718059_1200_000_1220_000_with_camera_labels.tfrecord\nsegment-11928449532664718059_1200_000_1220_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11940460932056521663_1760_000_1780_000_with_camera_labels.tfrecord\nsegment-11940460932056521663_1760_000_1780_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11967272535264406807_580_000_600_000_with_camera_labels.tfrecord\nsegment-11967272535264406807_580_000_600_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-11971497357570544465_1200_000_1220_000_with_camera_labels.tfrecord\nsegment-11971497357570544465_1200_000_1220_000_with_camera_labels.tfrecord\n...................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-12012663867578114640_820_000_840_000_with_camera_labels.tfrecord\nsegment-12012663867578114640_820_000_840_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-12027892938363296829_4086_280_4106_280_with_camera_labels.tfrecord\nsegment-12027892938363296829_4086_280_4106_280_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-1208303279778032257_1360_000_1380_000_with_camera_labels.tfrecord\nsegment-1208303279778032257_1360_000_1380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-12161824480686739258_1813_380_1833_380_with_camera_labels.tfrecord\nsegment-12161824480686739258_1813_380_1833_380_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0003/segment-12174529769287588121_3848_440_3868_440_with_camera_labels.tfrecord\nsegment-12174529769287588121_3848_440_3868_440_with_camera_labels.tfrecord\n........................................Folder name: training_0002\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/training_0002/segment-11076364019363412893_1711_000_1731_000_with_camera_labels.tfrecord\nsegment-11076364019363412893_1711_000_1731_000_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-10940952441434390507_1888_710_1908_710_with_camera_labels.tfrecord\nsegment-10940952441434390507_1888_710_1908_710_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-10963653239323173269_1924_000_1944_000_with_camera_labels.tfrecord\nsegment-10963653239323173269_1924_000_1944_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0002/segment-10964956617027590844_1584_680_1604_680_with_camera_labels.tfrecord\nsegment-10964956617027590844_1584_680_1604_680_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-10975280749486260148_940_000_960_000_with_camera_labels.tfrecord\nsegment-10975280749486260148_940_000_960_000_with_camera_labels.tfrecord\n......................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11004685739714500220_2300_000_2320_000_with_camera_labels.tfrecord\nsegment-11004685739714500220_2300_000_2320_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11017034898130016754_697_830_717_830_with_camera_labels.tfrecord\nsegment-11017034898130016754_697_830_717_830_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11060291335850384275_3761_210_3781_210_with_camera_labels.tfrecord\nsegment-11060291335850384275_3761_210_3781_210_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11070802577416161387_740_000_760_000_with_camera_labels.tfrecord\nsegment-11070802577416161387_740_000_760_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11113047206980595400_2560_000_2580_000_with_camera_labels.tfrecord\nsegment-11113047206980595400_2560_000_2580_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11119453952284076633_1369_940_1389_940_with_camera_labels.tfrecord\nsegment-11119453952284076633_1369_940_1389_940_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11126313430116606120_1439_990_1459_990_with_camera_labels.tfrecord\nsegment-11126313430116606120_1439_990_1459_990_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11139647661584646830_5470_000_5490_000_with_camera_labels.tfrecord\nsegment-11139647661584646830_5470_000_5490_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11183906854663518829_2294_000_2314_000_with_camera_labels.tfrecord\nsegment-11183906854663518829_2294_000_2314_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11199484219241918646_2810_030_2830_030_with_camera_labels.tfrecord\nsegment-11199484219241918646_2810_030_2830_030_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11219370372259322863_5320_000_5340_000_with_camera_labels.tfrecord\nsegment-11219370372259322863_5320_000_5340_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11236550977973464715_3620_000_3640_000_with_camera_labels.tfrecord\nsegment-11236550977973464715_3620_000_3640_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0002/segment-11252086830380107152_1540_000_1560_000_with_camera_labels.tfrecord\nsegment-11252086830380107152_1540_000_1560_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11318901554551149504_520_000_540_000_with_camera_labels.tfrecord\nsegment-11318901554551149504_520_000_540_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11343624116265195592_5910_530_5930_530_with_camera_labels.tfrecord\nsegment-11343624116265195592_5910_530_5930_530_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11355519273066561009_5323_000_5343_000_with_camera_labels.tfrecord\nsegment-11355519273066561009_5323_000_5343_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11379226583756500423_6230_810_6250_810_with_camera_labels.tfrecord\nsegment-11379226583756500423_6230_810_6250_810_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11388947676680954806_5427_320_5447_320_with_camera_labels.tfrecord\nsegment-11388947676680954806_5427_320_5447_320_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11392401368700458296_1086_429_1106_429_with_camera_labels.tfrecord\nsegment-11392401368700458296_1086_429_1106_429_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/training_0002/segment-11454085070345530663_1905_000_1925_000_with_camera_labels.tfrecord\nsegment-11454085070345530663_1905_000_1925_000_with_camera_labels.tfrecord\n........................................Folder name: training_0001\nNum of tfrecord file: 24\nextracting /data/cmpe295-liu/Waymo/training_0001/segment-10596949720463106554_1933_530_1953_530_with_camera_labels.tfrecord\nsegment-10596949720463106554_1933_530_1953_530_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10485926982439064520_4980_000_5000_000_with_camera_labels.tfrecord\nsegment-10485926982439064520_4980_000_5000_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10498013744573185290_1240_000_1260_000_with_camera_labels.tfrecord\nsegment-10498013744573185290_1240_000_1260_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10500357041547037089_1474_800_1494_800_with_camera_labels.tfrecord\nsegment-10500357041547037089_1474_800_1494_800_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10517728057304349900_3360_000_3380_000_with_camera_labels.tfrecord\nsegment-10517728057304349900_3360_000_3380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-1051897962568538022_238_170_258_170_with_camera_labels.tfrecord\nsegment-1051897962568538022_238_170_258_170_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0001/segment-10526338824408452410_5714_660_5734_660_with_camera_labels.tfrecord\nsegment-10526338824408452410_5714_660_5734_660_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10584247114982259878_490_000_510_000_with_camera_labels.tfrecord\nsegment-10584247114982259878_490_000_510_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10588771936253546636_2300_000_2320_000_with_camera_labels.tfrecord\nsegment-10588771936253546636_2300_000_2320_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10599748131695282446_1380_000_1400_000_with_camera_labels.tfrecord\nsegment-10599748131695282446_1380_000_1400_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10625026498155904401_200_000_220_000_with_camera_labels.tfrecord\nsegment-10625026498155904401_200_000_220_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10664823084372323928_4360_000_4380_000_with_camera_labels.tfrecord\nsegment-10664823084372323928_4360_000_4380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10676267326664322837_311_180_331_180_with_camera_labels.tfrecord\nsegment-10676267326664322837_311_180_331_180_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10723911392655396041_860_000_880_000_with_camera_labels.tfrecord\nsegment-10723911392655396041_860_000_880_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10724020115992582208_7660_400_7680_400_with_camera_labels.tfrecord\nsegment-10724020115992582208_7660_400_7680_400_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10734565072045778791_440_000_460_000_with_camera_labels.tfrecord\nsegment-10734565072045778791_440_000_460_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10750135302241325253_180_000_200_000_with_camera_labels.tfrecord\nsegment-10750135302241325253_180_000_200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10770759614217273359_1465_000_1485_000_with_camera_labels.tfrecord\nsegment-10770759614217273359_1465_000_1485_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10786629299947667143_3440_000_3460_000_with_camera_labels.tfrecord\nsegment-10786629299947667143_3440_000_3460_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10793018113277660068_2714_540_2734_540_with_camera_labels.tfrecord\nsegment-10793018113277660068_2714_540_2734_540_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0001/segment-1083056852838271990_4080_000_4100_000_with_camera_labels.tfrecord\nsegment-1083056852838271990_4080_000_4100_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10876852935525353526_1640_000_1660_000_with_camera_labels.tfrecord\nsegment-10876852935525353526_1640_000_1660_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10923963890428322967_1445_000_1465_000_with_camera_labels.tfrecord\nsegment-10923963890428322967_1445_000_1465_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0001/segment-10927752430968246422_4940_000_4960_000_with_camera_labels.tfrecord\nsegment-10927752430968246422_4940_000_4960_000_with_camera_labels.tfrecord\n........................................Folder name: training_0000\nNum of tfrecord file: 24\nextracting /data/cmpe295-liu/Waymo/training_0000/segment-10094743350625019937_3420_000_3440_000_with_camera_labels.tfrecord\nsegment-10094743350625019937_3420_000_3440_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10017090168044687777_6380_000_6400_000_with_camera_labels.tfrecord\nsegment-10017090168044687777_6380_000_6400_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10023947602400723454_1120_000_1140_000_with_camera_labels.tfrecord\nsegment-10023947602400723454_1120_000_1140_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-1005081002024129653_5313_150_5333_150_with_camera_labels.tfrecord\nsegment-1005081002024129653_5313_150_5333_150_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10061305430875486848_1080_000_1100_000_with_camera_labels.tfrecord\nsegment-10061305430875486848_1080_000_1100_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10072140764565668044_4060_000_4080_000_with_camera_labels.tfrecord\nsegment-10072140764565668044_4060_000_4080_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10072231702153043603_5725_000_5745_000_with_camera_labels.tfrecord\nsegment-10072231702153043603_5725_000_5745_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10075870402459732738_1060_000_1080_000_with_camera_labels.tfrecord\nsegment-10075870402459732738_1060_000_1080_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10082223140073588526_6140_000_6160_000_with_camera_labels.tfrecord\nsegment-10082223140073588526_6140_000_6160_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10096619443888687526_2820_000_2840_000_with_camera_labels.tfrecord\nsegment-10096619443888687526_2820_000_2840_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/training_0000/segment-10107710434105775874_760_000_780_000_with_camera_labels.tfrecord\nsegment-10107710434105775874_760_000_780_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10153695247769592104_787_000_807_000_with_camera_labels.tfrecord\nsegment-10153695247769592104_787_000_807_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10206293520369375008_2796_800_2816_800_with_camera_labels.tfrecord\nsegment-10206293520369375008_2796_800_2816_800_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10212406498497081993_5300_000_5320_000_with_camera_labels.tfrecord\nsegment-10212406498497081993_5300_000_5320_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-1022527355599519580_4866_960_4886_960_with_camera_labels.tfrecord\nsegment-1022527355599519580_4866_960_4886_960_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10226164909075980558_180_000_200_000_with_camera_labels.tfrecord\nsegment-10226164909075980558_180_000_200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10231929575853664160_1160_000_1180_000_with_camera_labels.tfrecord\nsegment-10231929575853664160_1160_000_1180_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10235335145367115211_5420_000_5440_000_with_camera_labels.tfrecord\nsegment-10235335145367115211_5420_000_5440_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10241508783381919015_2889_360_2909_360_with_camera_labels.tfrecord\nsegment-10241508783381919015_2889_360_2909_360_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10275144660749673822_5755_561_5775_561_with_camera_labels.tfrecord\nsegment-10275144660749673822_5755_561_5775_561_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10327752107000040525_1120_000_1140_000_with_camera_labels.tfrecord\nsegment-10327752107000040525_1120_000_1140_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10391312872392849784_4099_400_4119_400_with_camera_labels.tfrecord\nsegment-10391312872392849784_4099_400_4119_400_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10444454289801298640_4360_000_4380_000_with_camera_labels.tfrecord\nsegment-10444454289801298640_4360_000_4380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/training_0000/segment-10455472356147194054_1560_000_1580_000_with_camera_labels.tfrecord\nsegment-10455472356147194054_1560_000_1580_000_with_camera_labels.tfrecord\n........................................202\nFolder name: validation_0000\nNum of tfrecord file: 25\nextracting 
/data/cmpe295-liu/Waymo/validation_0000/segment-10203656353524179475_7625_000_7645_000_with_camera_labels.tfrecord\nsegment-10203656353524179475_7625_000_7645_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-1024360143612057520_3580_000_3600_000_with_camera_labels.tfrecord\nsegment-1024360143612057520_3580_000_3600_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-10247954040621004675_2180_000_2200_000_with_camera_labels.tfrecord\nsegment-10247954040621004675_2180_000_2200_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-10289507859301986274_4200_000_4220_000_with_camera_labels.tfrecord\nsegment-10289507859301986274_4200_000_4220_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-10335539493577748957_1372_870_1392_870_with_camera_labels.tfrecord\nsegment-10335539493577748957_1372_870_1392_870_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-10359308928573410754_720_000_740_000_with_camera_labels.tfrecord\nsegment-10359308928573410754_720_000_740_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-10448102132863604198_472_000_492_000_with_camera_labels.tfrecord\nsegment-10448102132863604198_472_000_492_000_with_camera_labels.tfrecord\n.....................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-10689101165701914459_2072_300_2092_300_with_camera_labels.tfrecord\nsegment-10689101165701914459_2072_300_2092_300_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-1071392229495085036_1844_790_1864_790_with_camera_labels.tfrecord\nsegment-1071392229495085036_1844_790_1864_790_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-10837554759555844344_6525_000_6545_000_with_camera_labels.tfrecord\nsegment-10837554759555844344_6525_000_6545_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-10868756386479184868_3000_000_3020_000_with_camera_labels.tfrecord\nsegment-10868756386479184868_3000_000_3020_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-11037651371539287009_77_670_97_670_with_camera_labels.tfrecord\nsegment-11037651371539287009_77_670_97_670_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-11048712972908676520_545_000_565_000_with_camera_labels.tfrecord\nsegment-11048712972908676520_545_000_565_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-1105338229944737854_1280_000_1300_000_with_camera_labels.tfrecord\nsegment-1105338229944737854_1280_000_1300_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/validation_0000/segment-11356601648124485814_409_000_429_000_with_camera_labels.tfrecord\nsegment-11356601648124485814_409_000_429_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-11387395026864348975_3820_000_3840_000_with_camera_labels.tfrecord\nsegment-11387395026864348975_3820_000_3840_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-11406166561185637285_1753_750_1773_750_with_camera_labels.tfrecord\nsegment-11406166561185637285_1753_750_1773_750_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-11434627589960744626_4829_660_4849_660_with_camera_labels.tfrecord\nsegment-11434627589960744626_4829_660_4849_660_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-11450298750351730790_1431_750_1451_750_with_camera_labels.tfrecord\nsegment-11450298750351730790_1431_750_1451_750_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-11616035176233595745_3548_820_3568_820_with_camera_labels.tfrecord\nsegment-11616035176233595745_3548_820_3568_820_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-11660186733224028707_420_000_440_000_with_camera_labels.tfrecord\nsegment-11660186733224028707_420_000_440_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-11901761444769610243_556_000_576_000_with_camera_labels.tfrecord\nsegment-11901761444769610243_556_000_576_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-12102100359426069856_3931_470_3951_470_with_camera_labels.tfrecord\nsegment-12102100359426069856_3931_470_3951_470_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-12134738431513647889_3118_000_3138_000_with_camera_labels.tfrecord\nsegment-12134738431513647889_3118_000_3138_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0000/segment-12306251798468767010_560_000_580_000_with_camera_labels.tfrecord\nsegment-12306251798468767010_560_000_580_000_with_camera_labels.tfrecord\n........................................Folder name: validation_0001\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/validation_0001/segment-12358364923781697038_2232_990_2252_990_with_camera_labels.tfrecord\nsegment-12358364923781697038_2232_990_2252_990_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-12374656037744638388_1412_711_1432_711_with_camera_labels.tfrecord\nsegment-12374656037744638388_1412_711_1432_711_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-12496433400137459534_120_000_140_000_with_camera_labels.tfrecord\nsegment-12496433400137459534_120_000_140_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/validation_0001/segment-12657584952502228282_3940_000_3960_000_with_camera_labels.tfrecord\nsegment-12657584952502228282_3940_000_3960_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-12820461091157089924_5202_916_5222_916_with_camera_labels.tfrecord\nsegment-12820461091157089924_5202_916_5222_916_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-12831741023324393102_2673_230_2693_230_with_camera_labels.tfrecord\nsegment-12831741023324393102_2673_230_2693_230_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-12866817684252793621_480_000_500_000_with_camera_labels.tfrecord\nsegment-12866817684252793621_480_000_500_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-12940710315541930162_2660_000_2680_000_with_camera_labels.tfrecord\nsegment-12940710315541930162_2660_000_2680_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13178092897340078601_5118_604_5138_604_with_camera_labels.tfrecord\nsegment-13178092897340078601_5118_604_5138_604_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13184115878756336167_1354_000_1374_000_with_camera_labels.tfrecord\nsegment-13184115878756336167_1354_000_1374_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13299463771883949918_4240_000_4260_000_with_camera_labels.tfrecord\nsegment-13299463771883949918_4240_000_4260_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-1331771191699435763_440_000_460_000_with_camera_labels.tfrecord\nsegment-1331771191699435763_440_000_460_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13336883034283882790_7100_000_7120_000_with_camera_labels.tfrecord\nsegment-13336883034283882790_7100_000_7120_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13356997604177841771_3360_000_3380_000_with_camera_labels.tfrecord\nsegment-13356997604177841771_3360_000_3380_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13415985003725220451_6163_000_6183_000_with_camera_labels.tfrecord\nsegment-13415985003725220451_6163_000_6183_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13469905891836363794_4429_660_4449_660_with_camera_labels.tfrecord\nsegment-13469905891836363794_4429_660_4449_660_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13573359675885893802_1985_970_2005_970_with_camera_labels.tfrecord\nsegment-13573359675885893802_1985_970_2005_970_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/validation_0001/segment-13694146168933185611_800_000_820_000_with_camera_labels.tfrecord\nsegment-13694146168933185611_800_000_820_000_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13941626351027979229_3363_930_3383_930_with_camera_labels.tfrecord\nsegment-13941626351027979229_3363_930_3383_930_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-13982731384839979987_1680_000_1700_000_with_camera_labels.tfrecord\nsegment-13982731384839979987_1680_000_1700_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-1405149198253600237_160_000_180_000_with_camera_labels.tfrecord\nsegment-1405149198253600237_160_000_180_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-14081240615915270380_4399_000_4419_000_with_camera_labels.tfrecord\nsegment-14081240615915270380_4399_000_4419_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-14107757919671295130_3546_370_3566_370_with_camera_labels.tfrecord\nsegment-14107757919671295130_3546_370_3566_370_with_camera_labels.tfrecord\n.......................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-14127943473592757944_2068_000_2088_000_with_camera_labels.tfrecord\nsegment-14127943473592757944_2068_000_2088_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0001/segment-14165166478774180053_1786_000_1806_000_with_camera_labels.tfrecord\nsegment-14165166478774180053_1786_000_1806_000_with_camera_labels.tfrecord\n........................................Folder name: validation_0002\nNum of tfrecord file: 25\nextracting /data/cmpe295-liu/Waymo/validation_0002/segment-14244512075981557183_1226_840_1246_840_with_camera_labels.tfrecord\nsegment-14244512075981557183_1226_840_1246_840_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14262448332225315249_1280_000_1300_000_with_camera_labels.tfrecord\nsegment-14262448332225315249_1280_000_1300_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14300007604205869133_1160_000_1180_000_with_camera_labels.tfrecord\nsegment-14300007604205869133_1160_000_1180_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14333744981238305769_5658_260_5678_260_with_camera_labels.tfrecord\nsegment-14333744981238305769_5658_260_5678_260_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14383152291533557785_240_000_260_000_with_camera_labels.tfrecord\nsegment-14383152291533557785_240_000_260_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14486517341017504003_3406_349_3426_349_with_camera_labels.tfrecord\nsegment-14486517341017504003_3406_349_3426_349_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/validation_0002/segment-1457696187335927618_595_027_615_027_with_camera_labels.tfrecord\nsegment-1457696187335927618_595_027_615_027_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14624061243736004421_1840_000_1860_000_with_camera_labels.tfrecord\nsegment-14624061243736004421_1840_000_1860_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-1464917900451858484_1960_000_1980_000_with_camera_labels.tfrecord\nsegment-1464917900451858484_1960_000_1980_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14663356589561275673_935_195_955_195_with_camera_labels.tfrecord\nsegment-14663356589561275673_935_195_955_195_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14687328292438466674_892_000_912_000_with_camera_labels.tfrecord\nsegment-14687328292438466674_892_000_912_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14739149465358076158_4740_000_4760_000_with_camera_labels.tfrecord\nsegment-14739149465358076158_4740_000_4760_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14811410906788672189_373_113_393_113_with_camera_labels.tfrecord\nsegment-14811410906788672189_373_113_393_113_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14931160836268555821_5778_870_5798_870_with_camera_labels.tfrecord\nsegment-14931160836268555821_5778_870_5798_870_with_camera_labels.tfrecord\n.....................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-14956919859981065721_1759_980_1779_980_with_camera_labels.tfrecord\nsegment-14956919859981065721_1759_980_1779_980_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-15021599536622641101_556_150_576_150_with_camera_labels.tfrecord\nsegment-15021599536622641101_556_150_576_150_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-15028688279822984888_1560_000_1580_000_with_camera_labels.tfrecord\nsegment-15028688279822984888_1560_000_1580_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-1505698981571943321_1186_773_1206_773_with_camera_labels.tfrecord\nsegment-1505698981571943321_1186_773_1206_773_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-15096340672898807711_3765_000_3785_000_with_camera_labels.tfrecord\nsegment-15096340672898807711_3765_000_3785_000_with_camera_labels.tfrecord\n........................................extracting /data/cmpe295-liu/Waymo/validation_0002/segment-15224741240438106736_960_000_980_000_with_camera_labels.tfrecord\nsegment-15224741240438106736_960_000_980_000_with_camera_labels.tfrecord\n........................................extracting 
/data/cmpe295-liu/Waymo/validation_0002/segment-15396462829361334065_4265_000_4285_000_with_camera_labels.tfrecord\n[... per-segment extraction progress lines omitted; remaining folders processed: validation_0003 (25 tfrecord files), validation_0004 (25), validation_0005 (25), validation_0006 (26), validation_0007 (26) ...]"
],
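[
"# Illustrative sketch (not part of the original notebook run): recount the tfrecord files per\n# validation folder to cross-check the per-folder counts printed by the extraction step above.\n# Assumes the same dataset root path that appears in the extraction log.\nimport os\n\nWAYMO_ROOT = '/data/cmpe295-liu/Waymo'  # root path taken from the extraction log\nfor folder in sorted(os.listdir(WAYMO_ROOT)):\n    if not folder.startswith('validation_'):\n        continue\n    folder_path = os.path.join(WAYMO_ROOT, folder)\n    tfrecords = [f for f in os.listdir(folder_path) if f.endswith('.tfrecord')]\n    print('Folder name:', folder)\n    print('Num of tfrecord file:', len(tfrecords))",
"_____no_output_____"
],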
[
"!ls /data/cmpe295-liu/Waymo/WaymoCOCOsmall/Validation",
"13238419657658219864_4630_850_4650_850_1509148365103003_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148365602667_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148366102271_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148366602037_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148367101884_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148367601838_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148368101725_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148368601567_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148369101199_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148369600674_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148370099981_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148370599783_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148371100084_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148371600081_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148372099672_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148372599251_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148373098760_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148373598291_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148374097910_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148374597610_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148375097286_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148375597079_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148376096925_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148376596731_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148377096207_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148377595872_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148378095680_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148378595078_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148379094584_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148379594319_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148380093941_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148380593936_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148381093825_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148381593529_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148382093162_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148382592743_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148383092734_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148383592617_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148384092347_FRONT.jpg\n13238419657658219864_4630_850_4650_850_1509148384591997_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886652547617_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886653048009_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886653548349_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886654048234_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886654548315_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886655048161_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886655547745_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886656047333_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886656547206_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886657047390_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886657547475_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886658047354_FRONT.jpg\n13254498462985394788_980_000_1000_000_1557886658547342_FRONT.jpg\n1325449846298539
4788_980_000_1000_000_1557886659047498_FRONT.jpg\n[... remaining FRONT-camera JPEG frames for the intermediate validation segments omitted; filenames follow the pattern <segment_name>_<frame_timestamp_micros>_FRONT.jpg ...]\n13519445614718437933_4060_000_4080_000_1542047243048912_FRON
T.jpg\n13519445614718437933_4060_000_4080_000_1542047243548768_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047244048590_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047244548556_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047245048467_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047245548656_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047246048857_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047246548979_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047247048834_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047247548936_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047248049117_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047248549159_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047249049073_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047249549031_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047250049108_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047250549174_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047251049107_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047251548941_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047252048729_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047252548553_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047253048503_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047253548466_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047254048570_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047254548853_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047255048985_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047255548930_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047256048875_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047256548881_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047257049036_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047257549134_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047258049214_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047258549174_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047259048251_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047259545800_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047260041953_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047260537839_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047261033862_FRONT.jpg\n13519445614718437933_4060_000_4080_000_1542047261530139_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069373679156_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069374179193_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069374679294_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069375179270_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069375679380_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069376179451_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069376679482_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069377179435_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069377679384_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069378179202_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069378679056_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069379179131_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069379679203_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069380179135_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069380679125_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069381179116_FRONT.jpg\n1352150727715
827110_3710_250_3730_250_1507069381679043_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069382178896_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069382678804_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069383178867_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069383679042_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069384179183_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069384679175_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069385179140_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069385679080_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069386178892_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069386678635_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069387178558_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069387678706_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069388178887_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069388679093_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069389179166_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069389679079_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069390178956_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069390678938_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069391178954_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069391679142_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069392179187_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069392679070_FRONT.jpg\n1352150727715827110_3710_250_3730_250_1507069393178680_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784786582249_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784787082502_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784787582597_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784788082432_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784788582137_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784789082239_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784789582315_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784790082113_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784790581989_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784791081992_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784791582102_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784792082011_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784792581860_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784793082016_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784793582307_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784794082437_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784794582430_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784795082448_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784795582306_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784796082265_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784796582068_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784797082121_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784797582367_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784798082500_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784798582474_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784799082385_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784799582282_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784800082347_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784800582382_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784801082270_FRONT.jpg\n1357
883579772440606_2365_000_2385_000_1522784801582327_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784802082497_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784802582684_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784803082789_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784803582612_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784804082382_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784804582298_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784805082228_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784805582074_FRONT.jpg\n1357883579772440606_2365_000_2385_000_1522784806081999_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940782492135_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940782992191_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940783491546_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940783990099_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940784488057_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940784985339_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940785482089_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940785978537_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940786475280_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940786972244_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940787470051_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940787968807_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940788468529_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940788968861_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940789469138_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940789969090_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940790468991_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940790968987_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940791468937_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940791968981_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940792469203_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940792969413_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940793469653_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940793969802_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940794469945_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940794970167_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940795470410_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940795970666_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940796470984_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940796971134_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940797471337_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940797971732_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940798472237_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940798972565_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940799472883_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940799973259_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940800473578_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940800973717_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940801473855_FRONT.jpg\n13585809231635721258_1910_770_1930_770_1507940801973994_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028862937246_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028863437090_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028863937061_FRONT.jpg\n136190636872713910
84_1519_680_1539_680_1509028864437078_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028864937134_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028865437186_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028865937189_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028866437404_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028866937522_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028867437219_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028867936870_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028868436827_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028868936763_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028869436876_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028869936981_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028870437108_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028870936974_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028871436857_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028871936920_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028872436940_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028872936990_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028873436953_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028873936632_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028874436324_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028874936325_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028875436686_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028875937050_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028876437015_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028876936672_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028877436478_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028877936523_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028878436804_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028878937083_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028879437165_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028879937149_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028880436949_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028880937088_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028881437202_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028881937085_FRONT.jpg\n13619063687271391084_1519_680_1539_680_1509028882436887_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212334686044_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212335186235_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212335686090_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212336185969_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212336685811_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212337185758_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212337685682_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212338185676_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212338685705_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212339185786_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212339685769_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212340185921_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212340686000_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212341185885_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212341685718_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212342185573_FRONT.jpg\n13622747960068272448_1
678_930_1698_930_1509212342685692_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212343185869_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212343685918_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212344185979_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212344685906_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212345185888_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212345685815_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212346185936_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212346686169_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212347186323_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212347686377_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212348186527_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212348686620_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212349186602_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212349686425_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212350186288_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212350686262_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212351186195_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212351686083_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212352186127_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212352686062_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212353185950_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212353685883_FRONT.jpg\n13622747960068272448_1678_930_1698_930_1509212354185982_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474378554332_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474379054212_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474379554136_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474380054267_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474380554491_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474381054597_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474381554589_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474382054505_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474382554341_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474383054071_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474383553836_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474384053828_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474384553986_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474385054201_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474385554276_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474386054209_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474386554137_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474387054082_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474387554054_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474388054068_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474388554112_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474389054215_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474389554284_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474390054330_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474390554312_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474391054263_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474391554187_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474392054232_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474392554167_FRONT.jpg\n13629997314951696814_1207_
000_1227_000_1515474393054157_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474393554039_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474394054068_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474394554173_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474395054244_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474395554224_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474396054140_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474396554173_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474397054201_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474397554141_FRONT.jpg\n13629997314951696814_1207_000_1227_000_1515474398054133_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605585286155_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605585786216_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605586286270_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605586786325_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605587286301_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605587786308_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605588286300_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605588786228_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605589286184_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605589786300_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605590286396_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605590786432_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605591286469_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605591786448_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605592286335_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605592786380_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605593286474_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605593786582_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605594286545_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605594786595_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605595286662_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605595786637_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605596286566_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605596786499_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605597286588_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605597786577_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605598286577_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605598786508_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605599286468_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605599786459_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605600286531_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605600786525_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605601286486_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605601786474_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605602286665_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605602786623_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605603286637_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605603786645_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605604286535_FRONT.jpg\n13667377240304615855_500_000_520_000_1553605604786525_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155316855095_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155317355123_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155317855293_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507
155318355475_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155318855429_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155319355469_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155319855342_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155320355442_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155320855646_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155321355822_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155321855723_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155322355332_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155322855143_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155323355078_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155323854803_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155324354351_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155324853853_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155325353485_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155325853021_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155326352385_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155326851703_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155327351062_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155327850609_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155328350261_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155328849865_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155329349556_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155329849107_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155330348622_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155330848137_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155331347696_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155331847593_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155332347735_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155332847875_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155333347855_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155333847582_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155334347415_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155334847570_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155335347639_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155335847668_FRONT.jpg\n13679757109245957439_4167_170_4187_170_1507155336347593_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857410637692_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857411137745_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857411637663_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857412137667_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857412637731_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857413137727_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857413637687_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857414137566_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857414637413_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857415137407_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857415637497_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857416137661_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857416637640_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857417137496_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857417637367_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857418137519_FRONT.jpg\n13731697468004921673_4920_000_4940_000_15578574
18637457_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857419137385_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857419637272_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857420137200_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857420637205_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857421137147_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857421637334_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857422137471_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857422637470_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857423137428_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857423637440_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857424137405_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857424637517_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857425137553_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857425637522_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857426137563_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857426637574_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857427137498_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857427637437_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857428137457_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857428637583_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857429137671_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857429637686_FRONT.jpg\n13731697468004921673_4920_000_4940_000_1557857430137684_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658109379460_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658109882916_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658110384683_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658110885767_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658111386488_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658111886603_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658112386716_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658112886648_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658113386464_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658113886230_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658114386257_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658114886376_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658115386424_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658115886380_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658116386179_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658116886079_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658117386102_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658117886231_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658118386271_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658118886322_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658119386340_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658119886261_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658120386106_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658120885958_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658121386118_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658121886298_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658122386364_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658122886257_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658123386040_FRONT.jpg\n13807633218762107566_6625_000_6645_000_151865812388
5919_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658124385836_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658124885950_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658125386173_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658125886363_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658126386522_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658126886384_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658127386088_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658127885932_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658128386055_FRONT.jpg\n13807633218762107566_6625_000_6645_000_1518658128886258_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265257062451_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265257562425_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265258062460_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265258562470_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265259062480_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265259562389_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265260062407_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265260562428_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265261062507_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265261562612_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265262062550_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265262562449_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265263062345_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265263562350_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265264062476_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265264562594_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265265062630_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265265562585_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265266062671_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265266562711_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265267062742_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265267562837_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265268062811_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265268562623_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265269062324_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265269562289_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265270062666_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265270562789_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265271062627_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265271562430_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265272062436_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265272562549_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265273062504_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265273562390_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265274062429_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265274562479_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265275062412_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265275562222_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265276062160_FRONT.jpg\n8079607115087394458_1240_000_1260_000_1557265276562183_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392535299850_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392535797764_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392536295653_FRONT.jpg\n8133434654699693993_1162_020_11
82_020_1511392536793421_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392537291331_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392537789621_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392538288125_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392538786902_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392539286276_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392539786103_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392540286640_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392540787762_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392541289449_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392541791038_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392542292103_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392542792589_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392543292773_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392543792870_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392544293031_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392544793249_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392545293485_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392545793592_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392546293489_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392546793336_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392547293235_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392547793228_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392548293291_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392548793258_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392549293400_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392549793485_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392550293515_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392550793621_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392551293795_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392551793899_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392552293806_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392552793736_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392553393807_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392553893831_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392554393772_FRONT.jpg\n8133434654699693993_1162_020_1182_020_1511392554893845_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962416312988_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962416812971_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962417312957_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962417812993_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962418313052_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962418813011_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962419312939_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962419813015_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962420313105_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962420813116_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962421313028_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962421813024_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962422313025_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962422813023_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962423313018_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962423812984_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962424313003_FRONT.jpg\n8137195482049459160_31
00_000_3120_000_1557962424812994_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962425312954_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962425813021_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962426313133_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962426813146_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962427313176_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962427813141_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962428313085_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962428813039_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962429313112_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962429813112_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962430313036_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962430812964_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962431312900_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962431812796_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962432312836_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962432812728_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962433312623_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962433812475_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962434312363_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962434812386_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962435312496_FRONT.jpg\n8137195482049459160_3100_000_3120_000_1557962435812570_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327086562222_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327087062152_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327087562025_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327088061944_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327088561912_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327089061915_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327089561921_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327090061955_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327090561883_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327091061920_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327091561952_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327092062010_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327092562057_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327093062057_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327093561959_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327094061885_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327094561962_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327095061966_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327095561885_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327096061956_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327096561973_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327097061943_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327097561925_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327098062069_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327098562097_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327099062135_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327099562059_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327100062037_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327100562062_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327101062049_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327101562120_FRONT.jpg\n8302000153252
334863_6020_000_6040_000_1557327102062272_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327102562480_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327103062593_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327103562619_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327104062491_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327104562471_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327105062422_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327105562285_FRONT.jpg\n8302000153252334863_6020_000_6040_000_1557327106062341_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939794765751_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939795266064_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939795766295_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939796266457_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939796766950_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939797267818_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939797768450_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939798268894_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939798768813_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939799268633_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939799768539_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939800268627_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939800768880_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939801269099_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939801769125_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939802268953_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939802768858_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939803268660_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939803768669_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939804268850_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939804769262_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939805269415_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939805769188_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939806268869_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939806768637_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939807268701_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939807768844_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939808269053_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939808769009_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939809268816_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939809768623_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939810268429_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939810768531_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939811268721_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939811768936_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939812269084_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939812769133_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939813269172_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939813769218_FRONT.jpg\n8331804655557290264_4351_740_4371_740_1507939814269131_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524948919649_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524949419903_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524949919788_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524950419564_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524950919330_FRONT.jpg\n8398
516118967750070_3958_000_3978_000_1515524951419194_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524951919052_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524952418994_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524952919174_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524953419338_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524953919428_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524954419432_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524954919387_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524955419380_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524955919388_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524956419447_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524956919601_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524957419720_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524957919610_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524958419476_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524958919366_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524959419443_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524959919472_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524960419600_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524960919659_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524961419624_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524961919592_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524962419616_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524962919886_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524963420235_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524963920193_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524964420088_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524964919951_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524965419915_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524965920120_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524966420300_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524966920532_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524967420760_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524967920919_FRONT.jpg\n8398516118967750070_3958_000_3978_000_1515524968421148_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857350637092_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857351137187_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857351637267_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857352137355_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857352637309_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857353136782_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857353636054_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857354135532_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857354635133_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857355134877_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857355634815_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857356134808_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857356634947_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857357135138_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857357635506_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857358135995_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857358636513_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857359136948_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857359637183_FRONT.
jpg\n8506432817378693815_4860_000_4880_000_1557857360137294_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857360637369_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857361137300_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857361637343_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857362137378_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857362637395_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857363137470_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857363637447_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857364137450_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857364637449_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857365137457_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857365637432_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857366137460_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857366637459_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857367137450_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857367637448_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857368137463_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857368637470_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857369137487_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857369637432_FRONT.jpg\n8506432817378693815_4860_000_4880_000_1557857370137196_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816058898881_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816059398880_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816059898872_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816060398842_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816060898879_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816061398873_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816061898859_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816062398872_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816062898937_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816063398943_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816063898909_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816064398870_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816064898889_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816065398891_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816065898902_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816066398894_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816066898908_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816067398885_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816067898879_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816068398899_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816068898971_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816069399108_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816069899089_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816070398965_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816070898763_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816071398596_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816071898424_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816072398463_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816072898499_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816073398463_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816073898483_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816074398574_FRONT.jpg\n8679184381783013073_7740_000_7760_000_15418160748986
68_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816075398659_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816075898538_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816076398472_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816076898511_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816077398587_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816077898596_FRONT.jpg\n8679184381783013073_7740_000_7760_000_1541816078398599_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976763875635_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976764377039_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976764878557_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976765480071_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976765981324_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976766482769_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976766984204_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976767485274_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976767986070_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976768486680_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976768987200_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976769487700_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976769988245_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976770488632_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976770988865_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976771489028_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976771988957_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976772488752_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976772988708_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976773488877_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976773989093_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976774489170_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976774988967_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976775488853_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976775988913_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976776488958_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976776989089_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976777489269_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976777989291_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976778489141_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976778988996_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976779488906_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976779989021_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976780489130_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976780989085_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976781489034_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976781989005_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976782488893_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976782988810_FRONT.jpg\n8845277173853189216_3828_530_3848_530_1508976783488887_FRONT.jpg\n8888517708810165484_1549_770_1569_770_1508882803402822_FRONT.jpg\n8888517708810165484_1549_770_1569_770_1508882803902875_FRONT.jpg\n8888517708810165484_1549_770_1569_770_1508882804402834_FRONT.jpg\n8888517708810165484_1549_770_1569_770_1508882804902949_FRONT.jpg\n8888517708810165484_1549_770_1569_770_1508882805403013_FRONT.jpg\n8888517708810165484_1549_770_1569_770_1508882805903072_FRONT.jpg\n8888517708810165484_1549_770_1569_770_15088
82806403356_FRONT.jpg\n[... several hundred additional front-camera frames from the remaining Waymo segments, each named <segment_id>_<interval>_<timestamp>_FRONT.jpg, omitted ...]\nannotations.json\n"
],
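[
"# Added sketch (not part of the original run): the listing above is the image folder of the Waymo-to-COCO export,\n# i.e. per-frame front-camera JPEGs named <segment_id>_<interval>_<timestamp>_FRONT.jpg plus a COCO-style\n# annotations.json. A quick sanity check is to load that file with the json module and count its contents; the\n# path below is an assumption taken from the training log further down in this notebook.\nimport json\n\nann_path = \"/data/cmpe295-liu/Waymo/WaymoCOCOtest/Training/annotations.json\"  # assumed location\nwith open(ann_path) as f:\n    coco = json.load(f)\n\nprint(\"categories:\", [(c[\"id\"], c[\"name\"]) for c in coco[\"categories\"]])\nprint(\"images:\", len(coco[\"images\"]), \"| annotations:\", len(coco[\"annotations\"]))",
"_____no_output_____"
],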
[
"!pwd",
"/home/010796032/Waymo\n"
],
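[
"# Added sketch: WaymoDetectron2Train.py itself is not included in this notebook, so this cell is only a minimal\n# Detectron2 training recipe consistent with the log that follows (COCO-format Waymo annotations, an X-101 FPN\n# Faster R-CNN model-zoo config, and a smaller ROI-head class count). Paths, solver settings and the class count\n# are assumptions, not values read from the actual script.\nimport os\nfrom detectron2 import model_zoo\nfrom detectron2.config import get_cfg\nfrom detectron2.data.datasets import register_coco_instances\nfrom detectron2.engine import DefaultTrainer\n\n# Register the converted Waymo training set (paths assumed from the log output below).\nregister_coco_instances(\"waymo_train\", {},\n                        \"/data/cmpe295-liu/Waymo/WaymoCOCOtest/Training/annotations.json\",\n                        \"/data/cmpe295-liu/Waymo/WaymoCOCOtest/Training\")\n\ncfg = get_cfg()\n# The printed backbone (grouped 3x3 convolutions, groups=32) matches this ResNeXt-101 FPN model-zoo config.\ncfg.merge_from_file(model_zoo.get_config_file(\"COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml\"))\ncfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(\"COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml\")\ncfg.DATASETS.TRAIN = (\"waymo_train\",)\ncfg.DATASETS.TEST = ()\ncfg.DATALOADER.NUM_WORKERS = 2   # assumed\ncfg.SOLVER.IMS_PER_BATCH = 2     # assumed\ncfg.SOLVER.BASE_LR = 0.00025     # assumed; the log's lr 0.000005 at iter 19 is consistent with linear warmup\ncfg.SOLVER.MAX_ITER = 80000      # assumed\n# The 13-way cls_score / 48-way bbox_pred in the printed model imply 12 foreground classes plus background;\n# set this to however many categories your annotations.json defines.\ncfg.MODEL.ROI_HEADS.NUM_CLASSES = 12\nos.makedirs(cfg.OUTPUT_DIR, exist_ok=True)\n\n# The \"Unable to load ... incompatible shapes\" warnings in the log are expected: the COCO checkpoint's 81-class\n# box-predictor weights cannot be copied into the smaller head, so those layers are re-initialized.\ntrainer = DefaultTrainer(cfg)\ntrainer.resume_or_load(resume=False)\ntrainer.train()",
"_____no_output_____"
],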
[
"!python /home/010796032/PytorchWork/WaymoDetectron2Train.py",
"1.5.1+cu101\n0.6.1+cu101\nCUDA is available! Training on GPU ...\n\u001b[32m[09/15 16:13:27 d2.engine.defaults]: \u001b[0mModel:\nGeneralizedRCNN(\n (backbone): FPN(\n (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))\n (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))\n (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))\n (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))\n (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (top_block): LastLevelMaxPool()\n (bottom_up): ResNet(\n (stem): BasicStem(\n (conv1): Conv2d(\n 3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False\n (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)\n )\n )\n (res2): Sequential(\n (0): BottleneckBlock(\n (shortcut): Conv2d(\n 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n (conv1): Conv2d(\n 64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n (conv2): Conv2d(\n 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n (conv3): Conv2d(\n 256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n )\n (1): BottleneckBlock(\n (conv1): Conv2d(\n 256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n (conv2): Conv2d(\n 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n (conv3): Conv2d(\n 256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n )\n (2): BottleneckBlock(\n (conv1): Conv2d(\n 256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n (conv2): Conv2d(\n 256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n (conv3): Conv2d(\n 256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)\n )\n )\n )\n (res3): Sequential(\n (0): BottleneckBlock(\n (shortcut): Conv2d(\n 256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n (conv1): Conv2d(\n 256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n (conv2): Conv2d(\n 512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n (conv3): Conv2d(\n 512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n )\n (1): BottleneckBlock(\n (conv1): Conv2d(\n 512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n (conv2): Conv2d(\n 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n 
(conv3): Conv2d(\n 512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n )\n (2): BottleneckBlock(\n (conv1): Conv2d(\n 512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n (conv2): Conv2d(\n 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n (conv3): Conv2d(\n 512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n )\n (3): BottleneckBlock(\n (conv1): Conv2d(\n 512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n (conv2): Conv2d(\n 512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n (conv3): Conv2d(\n 512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)\n )\n )\n )\n (res4): Sequential(\n (0): BottleneckBlock(\n (shortcut): Conv2d(\n 512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv1): Conv2d(\n 512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (1): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (2): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (3): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (4): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 
1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (5): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (6): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (7): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (8): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (9): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (10): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (11): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (12): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): 
FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (13): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (14): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (15): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (16): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (17): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (18): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (19): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): 
FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (20): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (21): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n (22): BottleneckBlock(\n (conv1): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv2): Conv2d(\n 1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n (conv3): Conv2d(\n 1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)\n )\n )\n )\n (res5): Sequential(\n (0): BottleneckBlock(\n (shortcut): Conv2d(\n 1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n (conv1): Conv2d(\n 1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n (conv2): Conv2d(\n 2048, 2048, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n (conv3): Conv2d(\n 2048, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n )\n (1): BottleneckBlock(\n (conv1): Conv2d(\n 2048, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n (conv2): Conv2d(\n 2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n (conv3): Conv2d(\n 2048, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n )\n (2): BottleneckBlock(\n (conv1): Conv2d(\n 2048, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n (conv2): Conv2d(\n 2048, 2048, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n (conv3): Conv2d(\n 2048, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False\n (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)\n )\n )\n )\n )\n )\n (proposal_generator): RPN(\n (anchor_generator): DefaultAnchorGenerator(\n (cell_anchors): BufferList()\n )\n (rpn_head): StandardRPNHead(\n (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))\n 
(anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))\n )\n )\n (roi_heads): StandardROIHeads(\n (box_pooler): ROIPooler(\n (level_poolers): ModuleList(\n (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)\n (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)\n (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)\n (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)\n )\n )\n (box_head): FastRCNNConvFCHead(\n (fc1): Linear(in_features=12544, out_features=1024, bias=True)\n (fc2): Linear(in_features=1024, out_features=1024, bias=True)\n )\n (box_predictor): FastRCNNOutputLayers(\n (cls_score): Linear(in_features=1024, out_features=13, bias=True)\n (bbox_pred): Linear(in_features=1024, out_features=48, bias=True)\n )\n )\n)\n\u001b[32m[09/15 16:13:27 d2.data.datasets.coco]: \u001b[0mLoaded 1998 images in COCO format from /data/cmpe295-liu/Waymo/WaymoCOCOtest/Training/annotations.json\n\u001b[32m[09/15 16:13:28 d2.data.build]: \u001b[0mRemoved 38 images with no usable annotations. 1960 images left.\n\u001b[32m[09/15 16:13:28 d2.data.build]: \u001b[0mDistribution of instances among all 5 categories:\n\u001b[36m| category | #instances | category | #instances | category | #instances |\n|:----------:|:-------------|:----------:|:-------------|:----------:|:-------------|\n| unknown | 0 | vehicle | 36834 | pedestrian | 8297 |\n| sign | 0 | cyclist | 200 | | |\n| total | 45331 | | | | |\u001b[0m\n\u001b[32m[09/15 16:13:28 d2.data.common]: \u001b[0mSerializing 1960 elements to byte tensors and concatenating them all ...\n\u001b[32m[09/15 16:13:28 d2.data.common]: \u001b[0mSerialized dataset takes 1.74 MiB\n\u001b[32m[09/15 16:13:28 d2.data.build]: \u001b[0mUsing training sampler TrainingSampler\nUnable to load 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (81, 1024) in the checkpoint but (13, 1024) in the model!\nUnable to load 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (81,) in the checkpoint but (13,) in the model!\nUnable to load 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (320, 1024) in the checkpoint but (48, 1024) in the model!\nUnable to load 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (320,) in the checkpoint but (48,) in the model!\n\u001b[32m[09/15 16:13:32 d2.engine.train_loop]: \u001b[0mStarting training from iteration 0\n\u001b[32m[09/15 16:14:02 d2.utils.events]: \u001b[0m eta: 1 day, 7:35:01 iter: 19 total_loss: 3.951 loss_cls: 2.491 loss_box_reg: 0.773 loss_rpn_cls: 0.220 loss_rpn_loc: 0.476 time: 1.4279 data_time: 0.4799 lr: 0.000005 max_mem: 7244M\n^C\n\u001b[32m[09/15 16:14:11 d2.engine.hooks]: \u001b[0mOverall training speed: 24 iterations in 0:00:35 (1.4844 s / it)\n\u001b[32m[09/15 16:14:11 d2.engine.hooks]: \u001b[0mTotal training time: 0:00:35 (0:00:00 on hooks)\nTraceback (most recent call last):\n File \"/home/010796032/PytorchWork/WaymoDetectron2Train.py\", line 450, in <module>\n trainer.train()\n File \"/home/010796032/.local/lib/python3.6/site-packages/detectron2/engine/defaults.py\", line 401, in train\n super().train(self.start_iter, self.max_iter)\n File \"/home/010796032/.local/lib/python3.6/site-packages/detectron2/engine/train_loop.py\", line 132, in train\n self.run_step()\n File 
\"/home/010796032/.local/lib/python3.6/site-packages/detectron2/engine/train_loop.py\", line 228, in run_step\n losses.backward()\n File \"/home/010796032/.local/lib/python3.6/site-packages/torch/tensor.py\", line 198, in backward\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\n File \"/home/010796032/.local/lib/python3.6/site-packages/torch/autograd/__init__.py\", line 100, in backward\n"
],
[
"FULL_LABEL_CLASSES = ['unknown', 'vehicle', 'pedestrian', 'sign', 'cyclist']\nlen(FULL_LABEL_CLASSES)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
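The "Unable to load 'roi_heads.box_predictor.cls_score.weight' ... incompatible shapes" messages in the training log above are expected when a COCO-pretrained checkpoint (81-way classifier, 320-dim box regressor) is loaded into a model whose ROI head was rebuilt for a custom label set: the backbone, FPN, and RPN weights load normally, while the mismatched box-predictor layers are re-initialized and learned during fine-tuning. The exact configuration used by WaymoDetectron2Train.py is not shown in this excerpt; the sketch below is a minimal, hypothetical config using detectron2's standard config system, with the base model guessed from the printed ResNeXt (groups=32) + FPN backbone, and NUM_CLASSES inferred from the 13-way cls_score / 48-dim bbox_pred shapes in the log (12 foreground classes plus background).

```python
# Hypothetical fine-tuning config sketch; only the option names are taken from detectron2's
# documented config system, everything else is an assumption about this particular run.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
# Assumed base config: a Faster R-CNN X101-32x8d FPN model from the detectron2 model zoo.
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml")
# 13-way cls_score and 48-dim bbox_pred in the log imply 12 foreground classes were configured;
# the head layers with these shapes are re-initialized, which is what the warnings report.
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 12
```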
d055eb812fc1a7c5b0418995088dff9675efe8a1 | 333,240 | ipynb | Jupyter Notebook | 001-000-general-overview/run.ipynb | devlabmexico/reporte-covid | 7756875d14e1ff67d2f821e3ef0a15aa8ad3dd87 | [
"MIT"
] | null | null | null | 001-000-general-overview/run.ipynb | devlabmexico/reporte-covid | 7756875d14e1ff67d2f821e3ef0a15aa8ad3dd87 | [
"MIT"
] | 1 | 2021-07-16T15:06:48.000Z | 2021-07-16T15:06:48.000Z | 001-000-general-overview/run.ipynb | devlabmexico/reporte-covid | 7756875d14e1ff67d2f821e3ef0a15aa8ad3dd87 | [
"MIT"
] | null | null | null | 504.909091 | 91,776 | 0.93409 | [
[
[
"### from datetime import datetime\nfrom os import environ\nfrom os.path import join\n\nimport json\n\n# YES/NO data dictionary\n\nYES = 1\nNO = 2\nNOT_APPLY = 97\nIGNORED = 98\nNOT_SPECIFIED = 99\n\n\n# Laboratory result dictionary\nLAB_POSITIVE = 1\nLAB_NO_POSITIVE = 2\nLAB_PENDING_RESULT = 3\nLAB_WRONG_RESULT = 4\nLAB_NOT_APPLY = 97 # CASO SIN MUESTRA\n\n\nmonths = [\"\",\n \"Enero\",\n \"Febrero\",\n \"Marzo\",\n \"Abril\",\n \"Mayo\",\n \"Junio\",\n \"Julio\",\n \"Agosto\",\n \"Septiembre\",\n \"Octubre\",\n \"Noviembre\",\n \"Diciembre\"]\n",
"_____no_output_____"
],
[
"input_folder = environ.get('CROSSCOMPUTE_INPUT_FOLDER', 'tests/standard/input')\noutput_folder = environ.get('CROSSCOMPUTE_OUTPUT_FOLDER', 'tests/standard/output')\nsettings_path = join(input_folder, 'settings.json')\nd = json.load(open(settings_path, 'rt'))\nd",
"_____no_output_____"
],
[
"from datetime import datetime\nnow = datetime.now()\nreport_day = f'{now.day} de {months[now.month]} del {now.year}'\n\nwith open(join(output_folder, 'report_date.txt'), 'wt') as report_date_file:\n report_date_file.write(report_day)",
"_____no_output_____"
],
[
"import pandas as pd\n\npd.options.display.float_format = '{:,.2f}'.format\n\ncovid_zip_data = 'data/datos_abiertos_covid19.zip'\n\ncovid_pd = pd.read_csv(covid_zip_data, compression='zip', header=0, )\ncovid_pd.set_index('ID_REGISTRO')\n\n# covid_pd.groupby('RESULTADO_LAB').size()\ncovid_pd.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 6852082 entries, 0 to 6852081\nData columns (total 40 columns):\n # Column Dtype \n--- ------ ----- \n 0 FECHA_ACTUALIZACION object\n 1 ID_REGISTRO object\n 2 ORIGEN int64 \n 3 SECTOR int64 \n 4 ENTIDAD_UM int64 \n 5 SEXO int64 \n 6 ENTIDAD_NAC int64 \n 7 ENTIDAD_RES int64 \n 8 MUNICIPIO_RES int64 \n 9 TIPO_PACIENTE int64 \n 10 FECHA_INGRESO object\n 11 FECHA_SINTOMAS object\n 12 FECHA_DEF object\n 13 INTUBADO int64 \n 14 NEUMONIA int64 \n 15 EDAD int64 \n 16 NACIONALIDAD int64 \n 17 EMBARAZO int64 \n 18 HABLA_LENGUA_INDIG int64 \n 19 INDIGENA int64 \n 20 DIABETES int64 \n 21 EPOC int64 \n 22 ASMA int64 \n 23 INMUSUPR int64 \n 24 HIPERTENSION int64 \n 25 OTRA_COM int64 \n 26 CARDIOVASCULAR int64 \n 27 OBESIDAD int64 \n 28 RENAL_CRONICA int64 \n 29 TABAQUISMO int64 \n 30 OTRO_CASO int64 \n 31 TOMA_MUESTRA_LAB int64 \n 32 RESULTADO_LAB int64 \n 33 TOMA_MUESTRA_ANTIGENO int64 \n 34 RESULTADO_ANTIGENO int64 \n 35 CLASIFICACION_FINAL int64 \n 36 MIGRANTE int64 \n 37 PAIS_NACIONALIDAD object\n 38 PAIS_ORIGEN object\n 39 UCI int64 \ndtypes: int64(33), object(7)\nmemory usage: 2.0+ GB\n"
]
],
[
[
"# Total de Casos y Mortalidad padecimiento",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\ncv19_confirmed_cases = covid_pd[covid_pd['RESULTADO_LAB'] == YES]\n\npneumonia_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['NEUMONIA'] == YES]\ndiabetes_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['DIABETES'] == YES]\nepoc_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['EPOC'] == YES]\nasma_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['ASMA'] == YES]\ninmusupr_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['INMUSUPR'] == YES]\nhyper_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['HIPERTENSION'] == YES]\n# others_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['OTRAS_COM'] == YES]\ncardio_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['CARDIOVASCULAR'] == YES]\nobesity_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['OBESIDAD'] == YES]\nrenal_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['RENAL_CRONICA'] == YES]\n\n# \nsmoking_confirmed_cases = cv19_confirmed_cases[cv19_confirmed_cases['TABAQUISMO'] == YES]\n\n\n\nTOTAL_POSITIVE_COV19_CASES = cv19_confirmed_cases.shape[0] # len(list(filter(lambda x: x, covid_pd['RESULTADO_LAB'] == YES)))\nTOTAL_PNEUMONIA_CASES = pneumonia_confirmed_cases.shape[0]\n\nprint(TOTAL_POSITIVE_COV19_CASES)\n\ndef percentage_died(df):\n part = who_died(df).shape[0]\n whole = df.shape[0]\n percentage = 100 * float(part)/float(whole)\n return f'{int(percentage)}%'\n\ndef who_died(df):\n return df[df['FECHA_DEF'] != '9999-99-99']\n\ndiseases_dfs = [\n diabetes_confirmed_cases,\n # pneumonia_confirmed_cases,\n epoc_confirmed_cases, \n asma_confirmed_cases,\n inmusupr_confirmed_cases,\n hyper_confirmed_cases,\n cardio_confirmed_cases,\n obesity_confirmed_cases,\n renal_confirmed_cases,\n smoking_confirmed_cases,\n]\n\n\n_ = lambda value: '{:,.2f}'.format(value).split('.')[0] if type(value) != str else value\n\ncases_by_disease = pd.DataFrame.from_dict({\n 'Padecimiento': ['Diabetes', \n # 'Neumonía', \n 'EPOC', 'Asma', 'Inmunosupresión', 'Hipertensión', 'Cardiovascular', \n 'Obesidad', 'Renal Crónica', 'Tabaquismo'],\n 'Positivos': [\n diabetes_confirmed_cases.shape[0],\n # pneumonia_confirmed_cases.shape[0],\n epoc_confirmed_cases.shape[0], \n asma_confirmed_cases.shape[0],\n inmusupr_confirmed_cases.shape[0],\n hyper_confirmed_cases.shape[0],\n cardio_confirmed_cases.shape[0],\n obesity_confirmed_cases.shape[0],\n renal_confirmed_cases.shape[0],\n smoking_confirmed_cases.shape[0],\n ],\n 'Muertes': [\n who_died(diabetes_confirmed_cases).shape[0],\n # who_died(pneumonia_confirmed_cases).shape[0],\n who_died(epoc_confirmed_cases).shape[0], \n who_died(asma_confirmed_cases).shape[0],\n who_died(inmusupr_confirmed_cases).shape[0],\n who_died(hyper_confirmed_cases).shape[0],\n who_died(cardio_confirmed_cases).shape[0],\n who_died(obesity_confirmed_cases).shape[0],\n who_died(renal_confirmed_cases).shape[0],\n who_died(smoking_confirmed_cases).shape[0],\n ],\n 'Porcentaje de Muerte': [\n percentage_died(diabetes_confirmed_cases),\n # percentage_died(pneumonia_confirmed_cases),\n percentage_died(epoc_confirmed_cases), \n percentage_died(asma_confirmed_cases),\n percentage_died(inmusupr_confirmed_cases),\n percentage_died(hyper_confirmed_cases),\n percentage_died(cardio_confirmed_cases),\n percentage_died(obesity_confirmed_cases),\n percentage_died(renal_confirmed_cases),\n percentage_died(smoking_confirmed_cases),\n ],\n})\n\ncases_by_disease = cases_by_disease.set_index('Padecimiento')\n# 
cases_by_disease = cases_by_disease.astype({'Positivos': float, 'Muertes' : float})\ncases_by_disease.applymap(_).to_csv(join(output_folder, 'table1.csv'))\n\n\ncases_by_disease.applymap(_)",
"1673168\n"
],
[
"import matplotlib.pyplot as plt\nfrom matplotlib.ticker import FormatStrFormatter, StrMethodFormatter\n\n\ncases_by_disease\n\nax = cases_by_disease.plot.bar(rot=0, figsize=(15,5))\n\nplt.yticks(fontsize = 13)\nplt.xlabel('Casos positivos y defunciones por padecimiento', fontsize = 18)\n\n\n\n# add value label to each bar, displayng its height\nfor p in ax.patches:\n ax.annotate(p.get_height(),\n (p.get_x() + p.get_width()/2., p.get_height()),\n ha = 'center', va = 'center', xytext = (0,7), textcoords = 'offset points', size=9)\n \nax.yaxis.set_major_formatter(StrMethodFormatter('{x:,}'))\n\nplt.tight_layout()\n\n# save Figure 7 as an image\nplt.savefig(join(output_folder, 'figure1.png'))",
"_____no_output_____"
],
[
"from matplotlib_venn import venn3, venn3_circles\nfrom matplotlib.pyplot import gca\n\nmajor_diseases = [set(diabetes_confirmed_cases['ID_REGISTRO']), \n set(hyper_confirmed_cases['ID_REGISTRO']), \n set(obesity_confirmed_cases['ID_REGISTRO'])]\n\nmajor_diseases_deaths = [set(who_died(diabetes_confirmed_cases)['ID_REGISTRO']), \n set(who_died(hyper_confirmed_cases)['ID_REGISTRO']), \n set(who_died(obesity_confirmed_cases)['ID_REGISTRO'])]\nfig, axes = plt.subplots(1, 2, figsize=(15, 15))\n\n\n\nvenn3(major_diseases,\n set_colors=('#3E64AF', '#3EAF5D', '#D74E3B'), \n set_labels = ('Diabetes', \n 'Hipertensión',\n 'Obesidad',\n ),\n alpha=0.75,\n )\n\n\nvenn3_circles(major_diseases, lw=0.7)\n\n\nplt.subplot(1, 2, 1)\n\nvenn3(major_diseases_deaths,\n set_colors=('#3E64AF', '#3EAF5D', '#D74E3B'), \n set_labels = ('Fallecimientos por \\nDiabetes', \n 'Fallecimientos por \\nHipertensión',\n 'Fallecimientos por \\nObesidad'),\n alpha=0.75)\n\n\nvenn3_circles(major_diseases_deaths, lw=0.7)\n\nplt.show()\n\nplt.tight_layout()\n\nplt.savefig(join(output_folder, 'figure2.png'), bbox_inches='tight')\n\naxes",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(3, 3, figsize=(10, 10), dpi=100)\n\ncolors = ['tab:red', 'tab:blue', 'tab:green', 'tab:pink', 'tab:olive']\n\ndisease_title = [\n 'Diabetes',\n 'EPOC',\n 'Asma', \n 'Inmunosuprecion',\n 'Hipertension',\n 'Cardiovascular',\n 'Obesidad',\n 'Insuficiencia renal',\n 'Tabaquismo'\n \n]\n\nfor i, (ax, df) in enumerate(zip(axes.flatten(), diseases_dfs)):\n ax.hist(df['EDAD'], alpha=0.5, bins=100, density=True, stacked=True, label=disease_title[i], color=colors[ i % 4])\n ax.set_xlabel(\"Edad\")\n ax.set_ylabel(\"Frecuencia\")\n ax.legend(loc='upper left', frameon=False)\n\n # ax.set_title(disease_title[i])\n ax.set_xlim(0, 90);\n\n \nplt.suptitle('Afectacion de pacientes con enfermadad preexistente por edad ', y=1.05, size=16)\n\n\nplt.tight_layout();\n\nplt.savefig(join(output_folder, 'figure3.png'), bbox_inches='tight')\n\n#diabetes_confirmed_cases",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(3, 3, figsize=(10, 10), dpi=100)\n\ndiseases_dfs = [\n who_died(diabetes_confirmed_cases),\n who_died(pneumonia_confirmed_cases),\n who_died(epoc_confirmed_cases), \n who_died(asma_confirmed_cases),\n who_died(inmusupr_confirmed_cases),\n who_died(hyper_confirmed_cases),\n who_died(cardio_confirmed_cases),\n who_died(obesity_confirmed_cases),\n who_died(renal_confirmed_cases),\n who_died(smoking_confirmed_cases),\n]\n\n\nfor i, (ax, df) in enumerate(zip(axes.flatten(), diseases_dfs)):\n ax.hist(df['EDAD'], alpha=0.5, bins=100, density=True, stacked=True, label=disease_title[i], color=colors[ i % 4])\n # ax.set_title(disease_title[i])\n ax.set_xlabel(\"Edad\")\n ax.set_ylabel(\"Frecuencia\")\n ax.legend(loc='upper left', frameon=False)\n ax.set_xlim(0, 90);\n\n \nplt.suptitle('Afectacion de fallecidos con enfermadad preexistente por edad ', y=1.05, size=16)\n \nplt.tight_layout();\n\nplt.savefig(join(output_folder, 'figure4.png'), bbox_inches='tight')\n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
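The per-condition table in the notebook above is assembled by listing each condition's dataframe, death count, and death percentage by hand. A loop over a label-to-column mapping produces the same table with less repetition; the sketch below is a hypothetical alternative, assuming the same positive-case dataframe (`cv19_confirmed_cases`), the open-data convention that a value of 1 means "yes", and the `FECHA_DEF == '9999-99-99'` sentinel for patients who did not die, all of which appear in the cells above.

```python
# Hypothetical compact variant of the per-condition summary built in the notebook above.
import pandas as pd

CONDITIONS = {
    'Diabetes': 'DIABETES', 'EPOC': 'EPOC', 'Asma': 'ASMA',
    'Inmunosupresión': 'INMUSUPR', 'Hipertensión': 'HIPERTENSION',
    'Cardiovascular': 'CARDIOVASCULAR', 'Obesidad': 'OBESIDAD',
    'Renal Crónica': 'RENAL_CRONICA', 'Tabaquismo': 'TABAQUISMO',
}

def condition_summary(df, yes=1, no_death='9999-99-99'):
    rows = []
    for label, col in CONDITIONS.items():
        subset = df[df[col] == yes]                       # positive cases with this condition
        deaths = subset[subset['FECHA_DEF'] != no_death]  # rows with a real date of death
        rows.append({
            'Padecimiento': label,
            'Positivos': len(subset),
            'Muertes': len(deaths),
            'Porcentaje de Muerte': f"{100 * len(deaths) / max(len(subset), 1):.0f}%",
        })
    return pd.DataFrame(rows).set_index('Padecimiento')

# Usage (assuming the dataframe from the notebook): condition_summary(cv19_confirmed_cases)
```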
d055f34ae76d63aa102f5281a70ec003da3860aa | 61,936 | ipynb | Jupyter Notebook | 1_Preliminaries.ipynb | zhulingchen/CVND---Image-Captioning-Project | 30ae8693d833f20837e380f1fe2f43fb1bf53a8e | [
"MIT"
] | null | null | null | 1_Preliminaries.ipynb | zhulingchen/CVND---Image-Captioning-Project | 30ae8693d833f20837e380f1fe2f43fb1bf53a8e | [
"MIT"
] | null | null | null | 1_Preliminaries.ipynb | zhulingchen/CVND---Image-Captioning-Project | 30ae8693d833f20837e380f1fe2f43fb1bf53a8e | [
"MIT"
] | null | null | null | 48.274357 | 704 | 0.544449 | [
[
[
"# Computer Vision Nanodegree\n\n## Project: Image Captioning\n\n---\n\nIn this notebook, you will learn how to load and pre-process data from the [COCO dataset](http://cocodataset.org/#home). You will also design a CNN-RNN model for automatically generating image captions.\n\nNote that **any amendments that you make to this notebook will not be graded**. However, you will use the instructions provided in **Step 3** and **Step 4** to implement your own CNN encoder and RNN decoder by making amendments to the **models.py** file provided as part of this project. Your **models.py** file **will be graded**. \n\nFeel free to use the links below to navigate the notebook:\n- [Step 1](#step1): Explore the Data Loader\n- [Step 2](#step2): Use the Data Loader to Obtain Batches\n- [Step 3](#step3): Experiment with the CNN Encoder\n- [Step 4](#step4): Implement the RNN Decoder",
"_____no_output_____"
],
[
"<a id='step1'></a>\n## Step 1: Explore the Data Loader\n\nWe have already written a [data loader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) that you can use to load the COCO dataset in batches. \n\nIn the code cell below, you will initialize the data loader by using the `get_loader` function in **data_loader.py**. \n\n> For this project, you are not permitted to change the **data_loader.py** file, which must be used as-is.\n\nThe `get_loader` function takes as input a number of arguments that can be explored in **data_loader.py**. Take the time to explore these arguments now by opening **data_loader.py** in a new window. Most of the arguments must be left at their default values, and you are only allowed to amend the values of the arguments below:\n1. **`transform`** - an [image transform](http://pytorch.org/docs/master/torchvision/transforms.html) specifying how to pre-process the images and convert them to PyTorch tensors before using them as input to the CNN encoder. For now, you are encouraged to keep the transform as provided in `transform_train`. You will have the opportunity later to choose your own image transform to pre-process the COCO images.\n2. **`mode`** - one of `'train'` (loads the training data in batches) or `'test'` (for the test data). We will say that the data loader is in training or test mode, respectively. While following the instructions in this notebook, please keep the data loader in training mode by setting `mode='train'`.\n3. **`batch_size`** - determines the batch size. When training the model, this is number of image-caption pairs used to amend the model weights in each training step.\n4. **`vocab_threshold`** - the total number of times that a word must appear in the in the training captions before it is used as part of the vocabulary. Words that have fewer than `vocab_threshold` occurrences in the training captions are considered unknown words. \n5. **`vocab_from_file`** - a Boolean that decides whether to load the vocabulary from file. \n\nWe will describe the `vocab_threshold` and `vocab_from_file` arguments in more detail soon. For now, run the code cell below. Be patient - it may take a couple of minutes to run!",
"_____no_output_____"
]
],
[
[
"# install PixieDebugger - A Visual Python Debugger for Jupyter Notebooks\n# https://medium.com/codait/the-visual-python-debugger-for-jupyter-notebooks-youve-always-wanted-761713babc62\n# https://www.analyticsvidhya.com/blog/2018/07/pixie-debugger-python-debugging-tool-jupyter-notebooks-data-scientist-must-use/\n!pip install pixiedust\n\n# install other toolboxes\n!pip install tqdm==4.14 # https://stackoverflow.com/questions/59109313/tqdm-tqdm-tqdmkeyerror-unknown-arguments-unit-divisor-1024\n!pip install nltk\n!pip install torch==1.2.0 torchvision==0.4.0\n!pip install torchsummary",
"Requirement already satisfied: pixiedust in /opt/conda/lib/python3.6/site-packages (1.1.18)\nRequirement already satisfied: markdown in /opt/conda/lib/python3.6/site-packages (from pixiedust) (2.6.9)\nRequirement already satisfied: mpld3 in /opt/conda/lib/python3.6/site-packages (from pixiedust) (0.5.1)\nRequirement already satisfied: colour in /opt/conda/lib/python3.6/site-packages (from pixiedust) (0.1.5)\nRequirement already satisfied: requests in /opt/conda/lib/python3.6/site-packages (from pixiedust) (2.18.4)\nRequirement already satisfied: astunparse in /opt/conda/lib/python3.6/site-packages (from pixiedust) (1.6.3)\nRequirement already satisfied: lxml in /opt/conda/lib/python3.6/site-packages (from pixiedust) (4.1.1)\nRequirement already satisfied: geojson in /opt/conda/lib/python3.6/site-packages (from pixiedust) (2.5.0)\nRequirement already satisfied: matplotlib in /opt/conda/lib/python3.6/site-packages (from mpld3->pixiedust) (2.1.0)\nRequirement already satisfied: jinja2 in /opt/conda/lib/python3.6/site-packages (from mpld3->pixiedust) (2.10)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests->pixiedust) (3.0.4)\nRequirement already satisfied: idna<2.7,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests->pixiedust) (2.6)\nRequirement already satisfied: urllib3<1.23,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests->pixiedust) (1.22)\nRequirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests->pixiedust) (2019.11.28)\nRequirement already satisfied: wheel<1.0,>=0.23.0 in /opt/conda/lib/python3.6/site-packages (from astunparse->pixiedust) (0.30.0)\nRequirement already satisfied: six<2.0,>=1.6.1 in /opt/conda/lib/python3.6/site-packages (from astunparse->pixiedust) (1.11.0)\nRequirement already satisfied: numpy>=1.7.1 in /opt/conda/lib/python3.6/site-packages (from matplotlib->mpld3->pixiedust) (1.12.1)\nRequirement already satisfied: python-dateutil>=2.0 in /opt/conda/lib/python3.6/site-packages (from matplotlib->mpld3->pixiedust) (2.6.1)\nRequirement already satisfied: pytz in /opt/conda/lib/python3.6/site-packages (from matplotlib->mpld3->pixiedust) (2017.3)\nRequirement already satisfied: cycler>=0.10 in /opt/conda/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib->mpld3->pixiedust) (0.10.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /opt/conda/lib/python3.6/site-packages (from matplotlib->mpld3->pixiedust) (2.2.0)\nRequirement already satisfied: MarkupSafe>=0.23 in /opt/conda/lib/python3.6/site-packages (from jinja2->mpld3->pixiedust) (1.0)\nRequirement already satisfied: tqdm==4.14 in /opt/conda/lib/python3.6/site-packages (4.14.0)\nRequirement already satisfied: nltk in /opt/conda/lib/python3.6/site-packages (3.2.5)\nRequirement already satisfied: six in /opt/conda/lib/python3.6/site-packages (from nltk) (1.11.0)\nRequirement already satisfied: torch==1.2.0 in /opt/conda/lib/python3.6/site-packages (1.2.0)\nCollecting torchvision==0.4.0\n Using cached https://files.pythonhosted.org/packages/06/e6/a564eba563f7ff53aa7318ff6aaa5bd8385cbda39ed55ba471e95af27d19/torchvision-0.4.0-cp36-cp36m-manylinux1_x86_64.whl\nRequirement already satisfied: numpy in /opt/conda/lib/python3.6/site-packages (from torch==1.2.0) (1.12.1)\nRequirement already satisfied: pillow>=4.1.1 in /opt/conda/lib/python3.6/site-packages (from torchvision==0.4.0) (5.2.0)\nRequirement already satisfied: six in 
/opt/conda/lib/python3.6/site-packages (from torchvision==0.4.0) (1.11.0)\nInstalling collected packages: torchvision\n Found existing installation: torchvision 0.2.1\n\u001b[31mCannot remove entries from nonexistent file /opt/conda/lib/python3.6/site-packages/easy-install.pth\u001b[0m\nRequirement already satisfied: torchsummary in /opt/conda/lib/python3.6/site-packages (1.5.1)\n"
],
[
"import sys\nsys.path.append('/opt/cocoapi/PythonAPI')\nfrom pycocotools.coco import COCO\nimport nltk\nnltk.download('punkt')\nfrom data_loader import get_loader\nimport torch\nprint('PyTorch Version:', torch.__version__)\nprint('CUDA available:', torch.cuda.is_available())\nfrom torchvision import transforms\nfrom torchsummary import summary\nimport pixiedust\n\n# Define a transform to pre-process the training images.\ntransform_train = transforms.Compose([ \n transforms.Resize(256), # smaller edge of image resized to 256\n transforms.RandomCrop(224), # get 224x224 crop from random location\n transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5\n transforms.ToTensor(), # convert the PIL Image to a tensor\n transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model\n (0.229, 0.224, 0.225))])\n\n# Set the minimum word count threshold.\nvocab_threshold = 5\n\n# Specify the batch size.\nbatch_size = 64\n\n# Obtain the data loader.\ndata_loader = get_loader(transform=transform_train,\n mode='train',\n batch_size=batch_size,\n vocab_threshold=vocab_threshold,\n vocab_from_file=False)",
"[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\nPyTorch Version: 1.2.0\nCUDA available: True\nPixiedust database opened successfully\n"
]
],
[
[
"When you ran the code cell above, the data loader was stored in the variable `data_loader`. \n\nYou can access the corresponding dataset as `data_loader.dataset`. This dataset is an instance of the `CoCoDataset` class in **data_loader.py**. If you are unfamiliar with data loaders and datasets, you are encouraged to review [this PyTorch tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).\n\n### Exploring the `__getitem__` Method\n\nThe `__getitem__` method in the `CoCoDataset` class determines how an image-caption pair is pre-processed before being incorporated into a batch. This is true for all `Dataset` classes in PyTorch; if this is unfamiliar to you, please review [the tutorial linked above](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html). \n\nWhen the data loader is in training mode, this method begins by first obtaining the filename (`path`) of a training image and its corresponding caption (`caption`).\n\n#### Image Pre-Processing \n\nImage pre-processing is relatively straightforward (from the `__getitem__` method in the `CoCoDataset` class):\n```python\n# Convert image to tensor and pre-process using transform\nimage = Image.open(os.path.join(self.img_folder, path)).convert('RGB')\nimage = self.transform(image)\n```\nAfter loading the image in the training folder with name `path`, the image is pre-processed using the same transform (`transform_train`) that was supplied when instantiating the data loader. \n\n#### Caption Pre-Processing \n\nThe captions also need to be pre-processed and prepped for training. In this example, for generating captions, we are aiming to create a model that predicts the next token of a sentence from previous tokens, so we turn the caption associated with any image into a list of tokenized words, before casting it to a PyTorch tensor that we can use to train the network.\n\nTo understand in more detail how COCO captions are pre-processed, we'll first need to take a look at the `vocab` instance variable of the `CoCoDataset` class. The code snippet below is pulled from the `__init__` method of the `CoCoDataset` class:\n```python\ndef __init__(self, transform, mode, batch_size, vocab_threshold, vocab_file, start_word, \n end_word, unk_word, annotations_file, vocab_from_file, img_folder):\n ...\n self.vocab = Vocabulary(vocab_threshold, vocab_file, start_word,\n end_word, unk_word, annotations_file, vocab_from_file)\n ...\n```\nFrom the code snippet above, you can see that `data_loader.dataset.vocab` is an instance of the `Vocabulary` class from **vocabulary.py**. Take the time now to verify this for yourself by looking at the full code in **data_loader.py**. \n\nWe use this instance to pre-process the COCO captions (from the `__getitem__` method in the `CoCoDataset` class):\n\n```python\n# Convert caption to tensor of word ids.\ntokens = nltk.tokenize.word_tokenize(str(caption).lower()) # line 1\ncaption = [] # line 2\ncaption.append(self.vocab(self.vocab.start_word)) # line 3\ncaption.extend([self.vocab(token) for token in tokens]) # line 4\ncaption.append(self.vocab(self.vocab.end_word)) # line 5\ncaption = torch.Tensor(caption).long() # line 6\n```\n\nAs you will see soon, this code converts any string-valued caption to a list of integers, before casting it to a PyTorch tensor. To see how this code works, we'll apply it to the sample caption in the next code cell.",
"_____no_output_____"
]
],
[
[
"sample_caption = 'A person doing a trick on a rail while riding a skateboard.'",
"_____no_output_____"
]
],
[
[
"In **`line 1`** of the code snippet, every letter in the caption is converted to lowercase, and the [`nltk.tokenize.word_tokenize`](http://www.nltk.org/) function is used to obtain a list of string-valued tokens. Run the next code cell to visualize the effect on `sample_caption`.",
"_____no_output_____"
]
],
[
[
"sample_tokens = nltk.tokenize.word_tokenize(str(sample_caption).lower())\nprint(sample_tokens)",
"['a', 'person', 'doing', 'a', 'trick', 'on', 'a', 'rail', 'while', 'riding', 'a', 'skateboard', '.']\n"
]
],
[
[
"In **`line 2`** and **`line 3`** we initialize an empty list and append an integer to mark the start of a caption. The [paper](https://arxiv.org/pdf/1411.4555.pdf) that you are encouraged to implement uses a special start word (and a special end word, which we'll examine below) to mark the beginning (and end) of a caption.\n\nThis special start word (`\"<start>\"`) is decided when instantiating the data loader and is passed as a parameter (`start_word`). You are **required** to keep this parameter at its default value (`start_word=\"<start>\"`).\n\nAs you will see below, the integer `0` is always used to mark the start of a caption.",
"_____no_output_____"
]
],
[
[
"sample_caption = []\n\nstart_word = data_loader.dataset.vocab.start_word\nprint('Special start word:', start_word)\nsample_caption.append(data_loader.dataset.vocab(start_word))\nprint(sample_caption)",
"Special start word: <start>\n[0]\n"
]
],
[
[
"In **`line 4`**, we continue the list by adding integers that correspond to each of the tokens in the caption.",
"_____no_output_____"
]
],
[
[
"sample_caption.extend([data_loader.dataset.vocab(token) for token in sample_tokens])\nprint(sample_caption)",
"[0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3, 753, 18]\n"
]
],
[
[
"In **`line 5`**, we append a final integer to mark the end of the caption. \n\nIdentical to the case of the special start word (above), the special end word (`\"<end>\"`) is decided when instantiating the data loader and is passed as a parameter (`end_word`). You are **required** to keep this parameter at its default value (`end_word=\"<end>\"`).\n\nAs you will see below, the integer `1` is always used to mark the end of a caption.",
"_____no_output_____"
]
],
[
[
"end_word = data_loader.dataset.vocab.end_word\nprint('Special end word:', end_word)\n\nsample_caption.append(data_loader.dataset.vocab(end_word))\nprint(sample_caption)",
"Special end word: <end>\n[0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3, 753, 18, 1]\n"
]
],
[
[
"Finally, in **`line 6`**, we convert the list of integers to a PyTorch tensor and cast it to [long type](http://pytorch.org/docs/master/tensors.html#torch.Tensor.long). You can read more about the different types of PyTorch tensors on the [website](http://pytorch.org/docs/master/tensors.html).",
"_____no_output_____"
]
],
[
[
"sample_caption = torch.Tensor(sample_caption).long()\nprint(sample_caption)",
"tensor([ 0, 3, 98, 754, 3, 396, 39, 3, 1009, 207, 139, 3,\n 753, 18, 1])\n"
]
],
[
[
"And that's it! In summary, any caption is converted to a list of tokens, with _special_ start and end tokens marking the beginning and end of the sentence:\n```\n[<start>, 'a', 'person', 'doing', 'a', 'trick', 'while', 'riding', 'a', 'skateboard', '.', <end>]\n```\nThis list of tokens is then turned into a list of integers, where every distinct word in the vocabulary has an associated integer value:\n```\n[0, 3, 98, 754, 3, 396, 207, 139, 3, 753, 18, 1]\n```\nFinally, this list is converted to a PyTorch tensor. All of the captions in the COCO dataset are pre-processed using this same procedure from **`lines 1-6`** described above. \n\nAs you saw, in order to convert a token to its corresponding integer, we call `data_loader.dataset.vocab` as a function. The details of how this call works can be explored in the `__call__` method in the `Vocabulary` class in **vocabulary.py**. \n\n```python\ndef __call__(self, word):\n if not word in self.word2idx:\n return self.word2idx[self.unk_word]\n return self.word2idx[word]\n```\n\nThe `word2idx` instance variable is a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) that is indexed by string-valued keys (mostly tokens obtained from training captions). For each key, the corresponding value is the integer that the token is mapped to in the pre-processing step.\n\nUse the code cell below to view a subset of this dictionary.",
"_____no_output_____"
]
],
[
[
"# Preview the word2idx dictionary.\ndict(list(data_loader.dataset.vocab.word2idx.items())[:10])",
"_____no_output_____"
]
],
[
[
"We also print the total number of keys.",
"_____no_output_____"
]
],
[
[
"# Print the total number of keys in the word2idx dictionary.\nprint('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab))",
"Total number of tokens in vocabulary: 8855\n"
]
],
[
[
"As you will see if you examine the code in **vocabulary.py**, the `word2idx` dictionary is created by looping over the captions in the training dataset. If a token appears no less than `vocab_threshold` times in the training set, then it is added as a key to the dictionary and assigned a corresponding unique integer. You will have the option later to amend the `vocab_threshold` argument when instantiating your data loader. Note that in general, **smaller** values for `vocab_threshold` yield a **larger** number of tokens in the vocabulary. You are encouraged to check this for yourself in the next code cell by decreasing the value of `vocab_threshold` before creating a new data loader. ",
"_____no_output_____"
]
],
[
[
"# Modify the minimum word count threshold.\nvocab_threshold = 4\n\n# Obtain the data loader.\ndata_loader = get_loader(transform=transform_train,\n mode='train',\n batch_size=batch_size,\n vocab_threshold=vocab_threshold,\n vocab_from_file=False)",
"loading annotations into memory...\nDone (t=0.90s)\ncreating index...\nindex created!\n[0/414113] Tokenizing captions...\n[100000/414113] Tokenizing captions...\n[200000/414113] Tokenizing captions...\n[300000/414113] Tokenizing captions...\n[400000/414113] Tokenizing captions...\nloading annotations into memory...\n"
],
[
"# Print the total number of keys in the word2idx dictionary.\nprint('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab))",
"Total number of tokens in vocabulary: 9955\n"
]
],
[
[
"There are also a few special keys in the `word2idx` dictionary. You are already familiar with the special start word (`\"<start>\"`) and special end word (`\"<end>\"`). There is one more special token, corresponding to unknown words (`\"<unk>\"`). All tokens that don't appear anywhere in the `word2idx` dictionary are considered unknown words. In the pre-processing step, any unknown tokens are mapped to the integer `2`.",
"_____no_output_____"
]
],
[
[
"unk_word = data_loader.dataset.vocab.unk_word\nprint('Special unknown word:', unk_word)\n\nprint('All unknown words are mapped to this integer:', data_loader.dataset.vocab(unk_word))",
"Special unknown word: <unk>\nAll unknown words are mapped to this integer: 2\n"
]
],
[
[
"Check this for yourself below, by pre-processing the provided nonsense words that never appear in the training captions. ",
"_____no_output_____"
]
],
[
[
"print(data_loader.dataset.vocab('jfkafejw'))\nprint(data_loader.dataset.vocab('ieowoqjf'))",
"2\n2\n"
]
],
[
[
"The final thing to mention is the `vocab_from_file` argument that is supplied when creating a data loader. To understand this argument, note that when you create a new data loader, the vocabulary (`data_loader.dataset.vocab`) is saved as a [pickle](https://docs.python.org/3/library/pickle.html) file in the project folder, with filename `vocab.pkl`.\n\nIf you are still tweaking the value of the `vocab_threshold` argument, you **must** set `vocab_from_file=False` to have your changes take effect. \n\nBut once you are happy with the value that you have chosen for the `vocab_threshold` argument, you need only run the data loader *one more time* with your chosen `vocab_threshold` to save the new vocabulary to file. Then, you can henceforth set `vocab_from_file=True` to load the vocabulary from file and speed the instantiation of the data loader. Note that building the vocabulary from scratch is the most time-consuming part of instantiating the data loader, and so you are strongly encouraged to set `vocab_from_file=True` as soon as you are able.\n\nNote that if `vocab_from_file=True`, then any supplied argument for `vocab_threshold` when instantiating the data loader is completely ignored.",
"_____no_output_____"
]
],
[
[
"# Obtain the data loader (from file). Note that it runs much faster than before!\ndata_loader = get_loader(transform=transform_train,\n mode='train',\n batch_size=batch_size,\n vocab_from_file=True)",
"Vocabulary successfully loaded from vocab.pkl file!\nloading annotations into memory...\n"
]
],
[
[
"In the next section, you will learn how to use the data loader to obtain batches of training data.",
"_____no_output_____"
],
[
"<a id='step2'></a>\n## Step 2: Use the Data Loader to Obtain Batches\n\nThe captions in the dataset vary greatly in length. You can see this by examining `data_loader.dataset.caption_lengths`, a Python list with one entry for each training caption (where the value stores the length of the corresponding caption). \n\nIn the code cell below, we use this list to print the total number of captions in the training data with each length. As you will see below, the majority of captions have length 10. Likewise, very short and very long captions are quite rare. ",
"_____no_output_____"
]
],
[
[
"from collections import Counter\n\n# Tally the total number of training captions with each length.\ncounter = Counter(data_loader.dataset.caption_lengths)\nlengths = sorted(counter.items(), key=lambda pair: pair[1], reverse=True)\nfor value, count in lengths:\n print('value: %2d --- count: %5d' % (value, count))",
"value: 10 --- count: 86334\nvalue: 11 --- count: 79948\nvalue: 9 --- count: 71934\nvalue: 12 --- count: 57637\nvalue: 13 --- count: 37645\nvalue: 14 --- count: 22335\nvalue: 8 --- count: 20771\nvalue: 15 --- count: 12841\nvalue: 16 --- count: 7729\nvalue: 17 --- count: 4842\nvalue: 18 --- count: 3104\nvalue: 19 --- count: 2014\nvalue: 7 --- count: 1597\nvalue: 20 --- count: 1451\nvalue: 21 --- count: 999\nvalue: 22 --- count: 683\nvalue: 23 --- count: 534\nvalue: 24 --- count: 383\nvalue: 25 --- count: 277\nvalue: 26 --- count: 215\nvalue: 27 --- count: 159\nvalue: 28 --- count: 115\nvalue: 29 --- count: 86\nvalue: 30 --- count: 58\nvalue: 31 --- count: 49\nvalue: 32 --- count: 44\nvalue: 34 --- count: 39\nvalue: 37 --- count: 32\nvalue: 33 --- count: 31\nvalue: 35 --- count: 31\nvalue: 36 --- count: 26\nvalue: 38 --- count: 18\nvalue: 39 --- count: 18\nvalue: 43 --- count: 16\nvalue: 44 --- count: 16\nvalue: 48 --- count: 12\nvalue: 45 --- count: 11\nvalue: 42 --- count: 10\nvalue: 40 --- count: 9\nvalue: 49 --- count: 9\nvalue: 46 --- count: 9\nvalue: 47 --- count: 7\nvalue: 50 --- count: 6\nvalue: 51 --- count: 6\nvalue: 41 --- count: 6\nvalue: 52 --- count: 5\nvalue: 54 --- count: 3\nvalue: 56 --- count: 2\nvalue: 6 --- count: 2\nvalue: 53 --- count: 2\nvalue: 55 --- count: 2\nvalue: 57 --- count: 1\n"
]
],
[
[
"To generate batches of training data, we begin by first sampling a caption length (where the probability that any length is drawn is proportional to the number of captions with that length in the dataset). Then, we retrieve a batch of size `batch_size` of image-caption pairs, where all captions have the sampled length. This approach for assembling batches matches the procedure in [this paper](https://arxiv.org/pdf/1502.03044.pdf) and has been shown to be computationally efficient without degrading performance.\n\nRun the code cell below to generate a batch. The `get_train_indices` method in the `CoCoDataset` class first samples a caption length, and then samples `batch_size` indices corresponding to training data points with captions of that length. These indices are stored below in `indices`.\n\nThese indices are supplied to the data loader, which then is used to retrieve the corresponding data points. The pre-processed images and captions in the batch are stored in `images` and `captions`.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport torch.utils.data as data\n\n# Randomly sample a caption length, and sample indices with that length.\nindices = data_loader.dataset.get_train_indices()\nprint('selected caption length:', set(data_loader.dataset.caption_lengths[i] for i in indices))\nprint('batch size:', data_loader.dataset.batch_size)\nprint('sampled indices:', indices)\n\n# Create and assign a batch sampler to retrieve a batch with the sampled indices.\nnew_sampler = data.sampler.SubsetRandomSampler(indices=indices)\ndata_loader.batch_sampler.sampler = new_sampler\n \n# Obtain the batch.\nimages, captions = next(iter(data_loader))\n \nprint('images.shape:', images.shape)\nprint('captions.shape:', captions.shape)",
"selected caption length: {11}\nbatch size: 64\nsampled indices: [163258, 37144, 380255, 317957, 192582, 360740, 10195, 2809, 162865, 309252, 293693, 333283, 35401, 403582, 103488, 93114, 234377, 135463, 281449, 85137, 73144, 43331, 279550, 9538, 215758, 166348, 288499, 375568, 226201, 77114, 139807, 66138, 349567, 316866, 200844, 302747, 78815, 342849, 273002, 58477, 229691, 22617, 172296, 86417, 241012, 201450, 404151, 231331, 202059, 347401, 374039, 220502, 32122, 246526, 157367, 186080, 139093, 410879, 240537, 296696, 208667, 360735, 224908, 87710]\nimages.shape: torch.Size([64, 3, 224, 224])\ncaptions.shape: torch.Size([64, 13])\n"
]
],
[
[
"Each time you run the code cell above, a different caption length is sampled, and a different batch of training data is returned. Run the code cell multiple times to check this out!\n\nYou will train your model in the next notebook in this sequence (**2_Training.ipynb**). This code for generating training batches will be provided to you.\n\n> Before moving to the next notebook in the sequence (**2_Training.ipynb**), you are strongly encouraged to take the time to become very familiar with the code in **data_loader.py** and **vocabulary.py**. **Step 1** and **Step 2** of this notebook are designed to help facilitate a basic introduction and guide your understanding. However, our description is not exhaustive, and it is up to you (as part of the project) to learn how to best utilize these files to complete the project. __You should NOT amend any of the code in either *data_loader.py* or *vocabulary.py*.__\n\nIn the next steps, we focus on learning how to specify a CNN-RNN architecture in PyTorch, towards the goal of image captioning.",
"_____no_output_____"
],
[
"<a id='step3'></a>\n## Step 3: Experiment with the CNN Encoder\n\nRun the code cell below to import `EncoderCNN` and `DecoderRNN` from **model.py**. ",
"_____no_output_____"
]
],
[
[
"# Watch for any changes in model.py, and re-load it automatically.\n% load_ext autoreload\n% autoreload 2",
"_____no_output_____"
]
],
[
[
"In the next code cell we define a `device` that you will use move PyTorch tensors to GPU (if CUDA is available). Run this code cell before continuing.",
"_____no_output_____"
]
],
[
[
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")",
"_____no_output_____"
]
],
[
[
"Run the code cell below to instantiate the CNN encoder in `encoder`. \n\nThe pre-processed images from the batch in **Step 2** of this notebook are then passed through the encoder, and the output is stored in `features`.",
"_____no_output_____"
]
],
[
[
"from model import EncoderCNN\n\n# Specify the dimensionality of the image embedding.\nembed_size = 256\n\n#-#-#-# Do NOT modify the code below this line. #-#-#-#\n\n# Initialize the encoder. (Optional: Add additional arguments if necessary.)\nencoder = EncoderCNN(embed_size)\n\n# Move the encoder to GPU if CUDA is available.\nencoder.to(device)\n \n# Move last batch of images (from Step 2) to GPU if CUDA is available. \nimages = images.to(device)\n\n# Print encoder summary\nsummary(encoder, images.cpu().data.numpy().shape[1:])\n\n# Pass the images through the encoder.\nfeatures = encoder(images)\n\nprint('type(features):', type(features))\nprint('features.shape:', features.shape)\n\n# Check that your encoder satisfies some requirements of the project! :D\nassert type(features)==torch.Tensor, \"Encoder output needs to be a PyTorch Tensor.\" \nassert (features.shape[0]==batch_size) & (features.shape[1]==embed_size), \"The shape of the encoder output is incorrect.\"",
"----------------------------------------------------------------\n Layer (type) Output Shape Param #\n================================================================\n Conv2d-1 [-1, 64, 112, 112] 9,408\n BatchNorm2d-2 [-1, 64, 112, 112] 128\n ReLU-3 [-1, 64, 112, 112] 0\n MaxPool2d-4 [-1, 64, 56, 56] 0\n Conv2d-5 [-1, 64, 56, 56] 4,096\n BatchNorm2d-6 [-1, 64, 56, 56] 128\n ReLU-7 [-1, 64, 56, 56] 0\n Conv2d-8 [-1, 64, 56, 56] 36,864\n BatchNorm2d-9 [-1, 64, 56, 56] 128\n ReLU-10 [-1, 64, 56, 56] 0\n Conv2d-11 [-1, 256, 56, 56] 16,384\n BatchNorm2d-12 [-1, 256, 56, 56] 512\n Conv2d-13 [-1, 256, 56, 56] 16,384\n BatchNorm2d-14 [-1, 256, 56, 56] 512\n ReLU-15 [-1, 256, 56, 56] 0\n Bottleneck-16 [-1, 256, 56, 56] 0\n Conv2d-17 [-1, 64, 56, 56] 16,384\n BatchNorm2d-18 [-1, 64, 56, 56] 128\n ReLU-19 [-1, 64, 56, 56] 0\n Conv2d-20 [-1, 64, 56, 56] 36,864\n BatchNorm2d-21 [-1, 64, 56, 56] 128\n ReLU-22 [-1, 64, 56, 56] 0\n Conv2d-23 [-1, 256, 56, 56] 16,384\n BatchNorm2d-24 [-1, 256, 56, 56] 512\n ReLU-25 [-1, 256, 56, 56] 0\n Bottleneck-26 [-1, 256, 56, 56] 0\n Conv2d-27 [-1, 64, 56, 56] 16,384\n BatchNorm2d-28 [-1, 64, 56, 56] 128\n ReLU-29 [-1, 64, 56, 56] 0\n Conv2d-30 [-1, 64, 56, 56] 36,864\n BatchNorm2d-31 [-1, 64, 56, 56] 128\n ReLU-32 [-1, 64, 56, 56] 0\n Conv2d-33 [-1, 256, 56, 56] 16,384\n BatchNorm2d-34 [-1, 256, 56, 56] 512\n ReLU-35 [-1, 256, 56, 56] 0\n Bottleneck-36 [-1, 256, 56, 56] 0\n Conv2d-37 [-1, 128, 56, 56] 32,768\n BatchNorm2d-38 [-1, 128, 56, 56] 256\n ReLU-39 [-1, 128, 56, 56] 0\n Conv2d-40 [-1, 128, 28, 28] 147,456\n BatchNorm2d-41 [-1, 128, 28, 28] 256\n ReLU-42 [-1, 128, 28, 28] 0\n Conv2d-43 [-1, 512, 28, 28] 65,536\n BatchNorm2d-44 [-1, 512, 28, 28] 1,024\n Conv2d-45 [-1, 512, 28, 28] 131,072\n BatchNorm2d-46 [-1, 512, 28, 28] 1,024\n ReLU-47 [-1, 512, 28, 28] 0\n Bottleneck-48 [-1, 512, 28, 28] 0\n Conv2d-49 [-1, 128, 28, 28] 65,536\n BatchNorm2d-50 [-1, 128, 28, 28] 256\n ReLU-51 [-1, 128, 28, 28] 0\n Conv2d-52 [-1, 128, 28, 28] 147,456\n BatchNorm2d-53 [-1, 128, 28, 28] 256\n ReLU-54 [-1, 128, 28, 28] 0\n Conv2d-55 [-1, 512, 28, 28] 65,536\n BatchNorm2d-56 [-1, 512, 28, 28] 1,024\n ReLU-57 [-1, 512, 28, 28] 0\n Bottleneck-58 [-1, 512, 28, 28] 0\n Conv2d-59 [-1, 128, 28, 28] 65,536\n BatchNorm2d-60 [-1, 128, 28, 28] 256\n ReLU-61 [-1, 128, 28, 28] 0\n Conv2d-62 [-1, 128, 28, 28] 147,456\n BatchNorm2d-63 [-1, 128, 28, 28] 256\n ReLU-64 [-1, 128, 28, 28] 0\n Conv2d-65 [-1, 512, 28, 28] 65,536\n BatchNorm2d-66 [-1, 512, 28, 28] 1,024\n ReLU-67 [-1, 512, 28, 28] 0\n Bottleneck-68 [-1, 512, 28, 28] 0\n Conv2d-69 [-1, 128, 28, 28] 65,536\n BatchNorm2d-70 [-1, 128, 28, 28] 256\n ReLU-71 [-1, 128, 28, 28] 0\n Conv2d-72 [-1, 128, 28, 28] 147,456\n BatchNorm2d-73 [-1, 128, 28, 28] 256\n ReLU-74 [-1, 128, 28, 28] 0\n Conv2d-75 [-1, 512, 28, 28] 65,536\n BatchNorm2d-76 [-1, 512, 28, 28] 1,024\n ReLU-77 [-1, 512, 28, 28] 0\n Bottleneck-78 [-1, 512, 28, 28] 0\n Conv2d-79 [-1, 256, 28, 28] 131,072\n BatchNorm2d-80 [-1, 256, 28, 28] 512\n ReLU-81 [-1, 256, 28, 28] 0\n Conv2d-82 [-1, 256, 14, 14] 589,824\n BatchNorm2d-83 [-1, 256, 14, 14] 512\n ReLU-84 [-1, 256, 14, 14] 0\n Conv2d-85 [-1, 1024, 14, 14] 262,144\n BatchNorm2d-86 [-1, 1024, 14, 14] 2,048\n Conv2d-87 [-1, 1024, 14, 14] 524,288\n BatchNorm2d-88 [-1, 1024, 14, 14] 2,048\n ReLU-89 [-1, 1024, 14, 14] 0\n Bottleneck-90 [-1, 1024, 14, 14] 0\n Conv2d-91 [-1, 256, 14, 14] 262,144\n BatchNorm2d-92 [-1, 256, 14, 14] 512\n ReLU-93 [-1, 256, 14, 14] 0\n Conv2d-94 [-1, 256, 14, 14] 589,824\n BatchNorm2d-95 [-1, 256, 14, 14] 
512\n ReLU-96 [-1, 256, 14, 14] 0\n Conv2d-97 [-1, 1024, 14, 14] 262,144\n BatchNorm2d-98 [-1, 1024, 14, 14] 2,048\n ReLU-99 [-1, 1024, 14, 14] 0\n Bottleneck-100 [-1, 1024, 14, 14] 0\n Conv2d-101 [-1, 256, 14, 14] 262,144\n BatchNorm2d-102 [-1, 256, 14, 14] 512\n ReLU-103 [-1, 256, 14, 14] 0\n Conv2d-104 [-1, 256, 14, 14] 589,824\n BatchNorm2d-105 [-1, 256, 14, 14] 512\n ReLU-106 [-1, 256, 14, 14] 0\n Conv2d-107 [-1, 1024, 14, 14] 262,144\n BatchNorm2d-108 [-1, 1024, 14, 14] 2,048\n ReLU-109 [-1, 1024, 14, 14] 0\n Bottleneck-110 [-1, 1024, 14, 14] 0\n Conv2d-111 [-1, 256, 14, 14] 262,144\n BatchNorm2d-112 [-1, 256, 14, 14] 512\n ReLU-113 [-1, 256, 14, 14] 0\n Conv2d-114 [-1, 256, 14, 14] 589,824\n BatchNorm2d-115 [-1, 256, 14, 14] 512\n ReLU-116 [-1, 256, 14, 14] 0\n Conv2d-117 [-1, 1024, 14, 14] 262,144\n BatchNorm2d-118 [-1, 1024, 14, 14] 2,048\n ReLU-119 [-1, 1024, 14, 14] 0\n Bottleneck-120 [-1, 1024, 14, 14] 0\n Conv2d-121 [-1, 256, 14, 14] 262,144\n BatchNorm2d-122 [-1, 256, 14, 14] 512\n ReLU-123 [-1, 256, 14, 14] 0\n Conv2d-124 [-1, 256, 14, 14] 589,824\n BatchNorm2d-125 [-1, 256, 14, 14] 512\n ReLU-126 [-1, 256, 14, 14] 0\n Conv2d-127 [-1, 1024, 14, 14] 262,144\n BatchNorm2d-128 [-1, 1024, 14, 14] 2,048\n ReLU-129 [-1, 1024, 14, 14] 0\n Bottleneck-130 [-1, 1024, 14, 14] 0\n Conv2d-131 [-1, 256, 14, 14] 262,144\n BatchNorm2d-132 [-1, 256, 14, 14] 512\n ReLU-133 [-1, 256, 14, 14] 0\n Conv2d-134 [-1, 256, 14, 14] 589,824\n BatchNorm2d-135 [-1, 256, 14, 14] 512\n ReLU-136 [-1, 256, 14, 14] 0\n Conv2d-137 [-1, 1024, 14, 14] 262,144\n BatchNorm2d-138 [-1, 1024, 14, 14] 2,048\n ReLU-139 [-1, 1024, 14, 14] 0\n Bottleneck-140 [-1, 1024, 14, 14] 0\n Conv2d-141 [-1, 512, 14, 14] 524,288\n BatchNorm2d-142 [-1, 512, 14, 14] 1,024\n ReLU-143 [-1, 512, 14, 14] 0\n Conv2d-144 [-1, 512, 7, 7] 2,359,296\n BatchNorm2d-145 [-1, 512, 7, 7] 1,024\n ReLU-146 [-1, 512, 7, 7] 0\n Conv2d-147 [-1, 2048, 7, 7] 1,048,576\n BatchNorm2d-148 [-1, 2048, 7, 7] 4,096\n Conv2d-149 [-1, 2048, 7, 7] 2,097,152\n BatchNorm2d-150 [-1, 2048, 7, 7] 4,096\n ReLU-151 [-1, 2048, 7, 7] 0\n Bottleneck-152 [-1, 2048, 7, 7] 0\n Conv2d-153 [-1, 512, 7, 7] 1,048,576\n BatchNorm2d-154 [-1, 512, 7, 7] 1,024\n ReLU-155 [-1, 512, 7, 7] 0\n Conv2d-156 [-1, 512, 7, 7] 2,359,296\n BatchNorm2d-157 [-1, 512, 7, 7] 1,024\n ReLU-158 [-1, 512, 7, 7] 0\n Conv2d-159 [-1, 2048, 7, 7] 1,048,576\n BatchNorm2d-160 [-1, 2048, 7, 7] 4,096\n ReLU-161 [-1, 2048, 7, 7] 0\n Bottleneck-162 [-1, 2048, 7, 7] 0\n Conv2d-163 [-1, 512, 7, 7] 1,048,576\n BatchNorm2d-164 [-1, 512, 7, 7] 1,024\n ReLU-165 [-1, 512, 7, 7] 0\n Conv2d-166 [-1, 512, 7, 7] 2,359,296\n BatchNorm2d-167 [-1, 512, 7, 7] 1,024\n ReLU-168 [-1, 512, 7, 7] 0\n Conv2d-169 [-1, 2048, 7, 7] 1,048,576\n BatchNorm2d-170 [-1, 2048, 7, 7] 4,096\n ReLU-171 [-1, 2048, 7, 7] 0\n Bottleneck-172 [-1, 2048, 7, 7] 0\n AvgPool2d-173 [-1, 2048, 1, 1] 0\n Linear-174 [-1, 256] 524,544\n================================================================\nTotal params: 24,032,576\nTrainable params: 524,544\nNon-trainable params: 23,508,032\n----------------------------------------------------------------\nInput size (MB): 0.57\nForward/backward pass size (MB): 286.55\nParams size (MB): 91.68\nEstimated Total Size (MB): 378.80\n----------------------------------------------------------------\ntype(features): <class 'torch.Tensor'>\nfeatures.shape: torch.Size([64, 256])\n"
]
],
[
[
"The encoder that we provide to you uses the pre-trained ResNet-50 architecture (with the final fully-connected layer removed) to extract features from a batch of pre-processed images. The output is then flattened to a vector, before being passed through a `Linear` layer to transform the feature vector to have the same size as the word embedding.\n\n\n\nYou are welcome (and encouraged) to amend the encoder in **model.py**, to experiment with other architectures. In particular, consider using a [different pre-trained model architecture](http://pytorch.org/docs/master/torchvision/models.html). You may also like to [add batch normalization](http://pytorch.org/docs/master/nn.html#normalization-layers). \n\n> You are **not** required to change anything about the encoder.\n\nFor this project, you **must** incorporate a pre-trained CNN into your encoder. Your `EncoderCNN` class must take `embed_size` as an input argument, which will also correspond to the dimensionality of the input to the RNN decoder that you will implement in Step 4. When you train your model in the next notebook in this sequence (**2_Training.ipynb**), you are welcome to tweak the value of `embed_size`.\n\nIf you decide to modify the `EncoderCNN` class, save **model.py** and re-execute the code cell above. If the code cell returns an assertion error, then please follow the instructions to modify your code before proceeding. The assert statements ensure that `features` is a PyTorch tensor with shape `[batch_size, embed_size]`.",
"_____no_output_____"
],
[
"<a id='step4'></a>\n## Step 4: Implement the RNN Decoder\n\nBefore executing the next code cell, you must write `__init__` and `forward` methods in the `DecoderRNN` class in **model.py**. (Do **not** write the `sample` method yet - you will work with this method when you reach **3_Inference.ipynb**.)\n\n> The `__init__` and `forward` methods in the `DecoderRNN` class are the only things that you **need** to modify as part of this notebook. You will write more implementations in the notebooks that appear later in the sequence.\n\nYour decoder will be an instance of the `DecoderRNN` class and must accept as input:\n- the PyTorch tensor `features` containing the embedded image features (outputted in Step 3, when the last batch of images from Step 2 was passed through `encoder`), along with\n- a PyTorch tensor corresponding to the last batch of captions (`captions`) from Step 2.\n\nNote that the way we have written the data loader should simplify your code a bit. In particular, every training batch will contain pre-processed captions where all have the same length (`captions.shape[1]`), so **you do not need to worry about padding**. \n> While you are encouraged to implement the decoder described in [this paper](https://arxiv.org/pdf/1411.4555.pdf), you are welcome to implement any architecture of your choosing, as long as it uses at least one RNN layer, with hidden dimension `hidden_size`. \n\nAlthough you will test the decoder using the last batch that is currently stored in the notebook, your decoder should be written to accept an arbitrary batch (of embedded image features and pre-processed captions [where all captions have the same length]) as input. \n\n\n\nIn the code cell below, `outputs` should be a PyTorch tensor with size `[batch_size, captions.shape[1], vocab_size]`. Your output should be designed such that `outputs[i,j,k]` contains the model's predicted score, indicating how likely the `j`-th token in the `i`-th caption in the batch is the `k`-th token in the vocabulary. In the next notebook of the sequence (**2_Training.ipynb**), we provide code to supply these scores to the [`torch.nn.CrossEntropyLoss`](http://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss) optimizer in PyTorch.",
"_____no_output_____"
]
],
[
[
"from model import DecoderRNN\n\n# Specify the number of features in the hidden state of the RNN decoder.\nhidden_size = 512\n\n#-#-#-# Do NOT modify the code below this line. #-#-#-#\n\n# Store the size of the vocabulary.\nvocab_size = len(data_loader.dataset.vocab)\n\n# Initialize the decoder.\ndecoder = DecoderRNN(embed_size, hidden_size, vocab_size)\n\n# Move the decoder to GPU if CUDA is available.\ndecoder.to(device)\n \n# Move last batch of captions (from Step 1) to GPU if CUDA is available \ncaptions = captions.to(device)\n\n# Pass the encoder output and captions through the decoder.\nprint('features.shape:', features.shape)\nprint('captions.shape:', captions.shape)\nprint(decoder)\n\noutputs = decoder(features, captions)\n\nprint('type(outputs):', type(outputs))\nprint('outputs.shape:', outputs.shape)\n\n# Check that your decoder satisfies some requirements of the project! :D\nassert type(outputs)==torch.Tensor, \"Decoder output needs to be a PyTorch Tensor.\"\nassert (outputs.shape[0]==batch_size) & (outputs.shape[1]==captions.shape[1]) & (outputs.shape[2]==vocab_size), \"The shape of the decoder output is incorrect.\"",
"features.shape: torch.Size([64, 256])\ncaptions.shape: torch.Size([64, 13])\nDecoderRNN(\n (embedding): Embedding(9955, 256)\n (lstm): LSTM(256, 512, batch_first=True)\n (linear): Linear(in_features=512, out_features=9955, bias=True)\n)\ntype(outputs): <class 'torch.Tensor'>\noutputs.shape: torch.Size([64, 13, 9955])\n"
]
],
[
[
"When you train your model in the next notebook in this sequence (**2_Training.ipynb**), you are welcome to tweak the value of `hidden_size`.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d055fa242152744141920f2540358d6810e70e52 | 1,192 | ipynb | Jupyter Notebook | unsupervised-learning-in-python/4. Discovering interpretable features/notebook_section_04.ipynb | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2021-05-08T11:09:27.000Z | 2021-05-08T11:09:27.000Z | unsupervised-learning-in-python/4. Discovering interpretable features/notebook_section_04.ipynb | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2022-03-12T15:42:14.000Z | 2022-03-12T15:42:14.000Z | unsupervised-learning-in-python/4. Discovering interpretable features/notebook_section_04.ipynb | nhutnamhcmus/datacamp-playground | 25457e813b1145e1d335562286715eeddd1c1a7b | [
"MIT"
] | 1 | 2021-04-30T18:24:19.000Z | 2021-04-30T18:24:19.000Z | 17.028571 | 53 | 0.514262 | [
[
[
"# Section 4: Discovering interpretable features",
"_____no_output_____"
],
[
"## Non-negative matrix factorization (NMF)",
"_____no_output_____"
],
[
"## NMF learns interpretable parts",
"_____no_output_____"
],
[
"## Building recommender system using NMF",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d05611b8c9b239cfadcf83a8d19c75d425108270 | 60,245 | ipynb | Jupyter Notebook | tests/bac_analysis_preprocess.ipynb | dodonut/cnn_tf | e027ecbe3235373613efecb9dbffafb3bb3fde42 | [
"MIT"
] | null | null | null | tests/bac_analysis_preprocess.ipynb | dodonut/cnn_tf | e027ecbe3235373613efecb9dbffafb3bb3fde42 | [
"MIT"
] | null | null | null | tests/bac_analysis_preprocess.ipynb | dodonut/cnn_tf | e027ecbe3235373613efecb9dbffafb3bb3fde42 | [
"MIT"
] | null | null | null | 125.510417 | 42,766 | 0.826176 | [
[
[
"from scipy.signal import savgol_filter\nfrom math import factorial\nfrom sklearn.cluster import KMeans\nimport os\nimport numpy as np\nfrom spectral import *\nimport matplotlib.pyplot as plt\nimport math\nfrom scipy.io import loadmat\nfrom sklearn.decomposition import PCA\nfrom sklearn import preprocessing\nimport pickle\nimport pandas as pd\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nDATASTORE = 'D:\\\\TCC\\\\Datasets\\\\bacterias_new'\nSAVESTORE = 'D:\\\\TCC\\\\Datasets\\\\preprocess_bac_new'\nspectral.settings.envi_support_nonlowercase_params = True\n\nJoin = os.path.join\n\n",
"_____no_output_____"
],
[
"# PLOT_COLORS = ['b','g','r','c','m','y','k']\n# PLOT_SHAPES = ['-',':','--','-.','+']\nLABELS = ['Bacillusscereus', 'Bacillussubtilis', 'Coryniumbacteriumlutaminum',\n 'Enterobactearerogenes', 'Enterobactercloacal', 'Enterococcusfaecalis', 'Escheriachiacoli',\n 'Klesbsialapneumonial', 'Micrococcusluteus', 'Proteusmirabilis', 'Pseudomonasaeoruginosas', 'Salmonellaspp',\n 'Serratiamarcences', 'Staphylococcusaureus_6538', 'Staphylococcusaureus_25923', 'Staphylococcusepidemides']\n\nCOLORS = {\n 'Bacillusscereus': '#ff1900',\n 'Bacillussubtilis': '#c27c51',\n 'Coryniumbacteriumlutaminum': '#7d5e20',\n 'Enterobactearerogenes': '#dbcf5c',\n 'Enterobactercloacal': '#9db031',\n 'Enterococcusfaecalis': '#9dff00',\n 'Escheriachiacoli': '#b58ad4',\n 'Klesbsialapneumonial': '#f200ff',\n 'Micrococcusluteus': '#6e9669',\n 'Proteusmirabilis': '#11521d',\n 'Pseudomonasaeoruginosas': '#85868c',\n 'Salmonellaspp': '#17e68f',\n 'Serratiamarcences': '#4ad9d9',\n 'Staphylococcusaureus_6538': '#1aaeb0',\n 'Staphylococcusaureus_25923': '#9117cf',\n 'Staphylococcusepidemides': '#bf324b',\n}\n\ndef get_fewer_lines(mat, ammount):\n n_mat = []\n r, _, _ = mat.shape\n for i in range(0, r, int(r/ammount)):\n n_mat.append(mat[i, :, :])\n return np.array(n_mat)\n\ndef calibration(I, W, D):\n row,column,wave = I.shape\n arr = np.copy(I)\n\n meanw = np.mean(W, axis=0)\n meand = np.mean(D, axis=0)\n\n for z in range(wave):\n if (z % 30 == 0):\n print('CAMADAS {}-{}'.format(z, 256 if z+30>256 else z+30))\n for x in range(row):\n for y in range(column):\n w = meanw[0,y,z]\n d = meand[0,y,z]\n s = I[x,y,z]\n\n den = w-d\n num = s-d\n if den and num/den > 0:\n arr[x,y,z] = -math.log10(num / den)\n else:\n arr[x,y,z] = 0\n return arr\n\ndef hsi2matrix(arr):\n if len(arr.shape) != 3:\n raise BaseException('A entrada deve possuir 3 dimensões')\n\n r, c, w = arr.shape\n return np.reshape(arr, (r*c, w))\n\ndef mat2hsi(mat, shape):\n return np.reshape(mat, (-1, shape[1], shape[2]))\n\ndef pca_95(x):\n scaled_data = preprocessing.scale(x)\n\n return PCA(n_components=0.95).fit_transform(scaled_data)\n\ndef get_clusters(x):\n pca_data = pca_95(x)\n km = KMeans(n_clusters=2).fit(pca_data)\n return km\n\ndef get_layer(hsi, layer):\n return hsi[:,:,layer]\n\n\ndef savitzky_golay_filter(y, window_size, order, deriv=0, rate=1):\n order_range = range(order+1)\n half_window = (window_size - 1) // 2\n b = np.mat([[k**i for i in order_range]\n for k in range(-half_window, half_window+1)])\n m = np.linalg.pinv(b).A[deriv] * rate**deriv * factorial(deriv)\n firstvals = y[0] - np.abs(y[1:half_window+1][::-1] - y[0])\n lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])\n y = np.concatenate((firstvals, y, lastvals))\n return np.convolve(m[::-1], y, mode='valid')\n\ndef snv_filter(mat):\n nmat = np.copy(mat)\n mean = np.mean(mat, axis=1)\n std = np.std(mat, axis=1)\n for i in range(mat.shape[0]):\n nmat[i] = (nmat[i] - mean[i])/std[i]\n return nmat\n\ndef remove_pixels(cube, side, amount):\n cpy_cube = np.copy(cube)\n if side == 'top':\n cpy_cube[0:amount,:,:]=0\n elif side == 'left':\n cpy_cube[:, 0:amount, :] = 0\n elif side == 'right':\n cpy_cube[:,-amount:,:]=0\n else:\n cpy_cube[-amount:, :, :] = 0\n return cpy_cube\n\n\ndef remove_pixels_from_all_dir(cube, ammount_top, ammount_left, ammount_right, ammount_down):\n cpy_cube = np.copy(cube)\n if ammount_top != 0:\n cpy_cube = remove_pixels(cpy_cube, 'top', ammount_top)\n if ammount_left != 0:\n cpy_cube = remove_pixels(cpy_cube, 'left', ammount_left)\n if ammount_right != 0:\n 
cpy_cube = remove_pixels(cpy_cube, 'right', ammount_right)\n if ammount_down != 0:\n cpy_cube = remove_pixels(cpy_cube, 'down', ammount_down)\n return cpy_cube\n\ndef apply_mask(km,mat):\n mask1 = np.copy(mat)\n mask2 = np.copy(mat)\n lab = km.labels_\n for i in range(mat.shape[0]):\n if lab[i] == 0:\n mask1[i,:] = 0\n else:\n mask2[i,:] = 0\n \n return (mat2hsi(mask1, mat.shape) ,mat2hsi(mask2, mat.shape))\n\n\ndef hsi_remove_background(mat):\n mat_cpy = apply_filters(mat)\n km = get_clusters(mat_cpy)\n m1, m2 = apply_mask(km, mat)\n return (m1,m2)\n \ndef which_cluster_to_mantain(mask1, mask2):\n plt.figure()\n plt.title(\"FIGURE 1\")\n plt.imshow(get_layer(mask1,10), cmap='gray')\n plt.figure()\n plt.title(\"FIGURE 2\")\n plt.imshow(get_layer(mask2, 10), cmap='gray')\n plt.show()\n \n resp = int(input('Qual cluster deseja manter? (1/2)'))\n if resp != 1 and resp != 2:\n raise BaseException(\"Selected option not available.\")\n \n return resp - 1\n \ndef get_hsi_data(path):\n orig_name = [a for a in os.listdir(path) if '.hdr' in a and 'DARK' not in a and 'WHITE' not in a]\n dark_name = [a for a in os.listdir(path) if '.hdr' in a and 'DARK' in a]\n white_name = [a for a in os.listdir(path) if '.hdr' in a and 'WHITE' in a]\n\n I = open_image(os.path.join(path, orig_name[0]))\n W = open_image(os.path.join(path, white_name[0]))\n D = open_image(os.path.join(path, dark_name[0]))\n\n return (I.load(), W.load(), D.load())\n\ndef get_no_background_pixels(mat):\n return np.where(mat != 0)\n\ndef apply_filters(mat):\n mat_cpy = np.copy(mat)\n for i in range(mat.shape[0]):\n mat_cpy[i] = savgol_filter(mat_cpy[i], 21, 2, 1)\n # mat_cpy[i] = savgol_filter(mat_cpy[i], 25, 3, 2)\n\n return snv_filter(mat_cpy)\n\n\ndef preprocess_training_data_full(choose_bac: int, semipath: str):\n \"\"\"\n choose_bac is the bacteria to process (since takes forever to do all at once)\n returns a calibrated array based on dark and white hdr's, the pixels containing the bacteria (with no background) and the label for that bacteria\n \"\"\"\n\n bac_dirs = os.listdir(DATASTORE)\n\n for ind, bac in enumerate(bac_dirs):\n if (choose_bac == ind):\n\n individual_bac_dir = os.path.join(os.path.join(DATASTORE, bac), semipath)\n\n I, W, D = get_hsi_data(individual_bac_dir)\n\n W = get_fewer_lines(W, 25)\n D = get_fewer_lines(D, 25)\n\n arr_calib = calibration(I, W, D)\n\n cube = preprocess_training_data_from_calibration(arr_calib)\n return [arr_calib, cube]\n\ndef get_file_cube_from_folder_to_train(folder, bac_index, filename = 'calib.pickle'):\n bacs = os.path.join(SAVESTORE, folder)\n for i, bac in enumerate(os.listdir(bacs)):\n if i == bac_index:\n ind_bac_dir = os.path.join(bacs, bac)\n calib = load_pickle(filename, ind_bac_dir)\n return calib\n\ndef preprocess_training_data_from_calibration(arr_calib):\n cube = replace_median(arr_calib)\n\n mat = hsi2matrix(cube)\n\n mask1, mask2 = hsi_remove_background(mat)\n mask1 = mat2hsi(mask1, arr_calib.shape)\n mask2 = mat2hsi(mask2, arr_calib.shape)\n\n cluster = which_cluster_to_mantain(mask1, mask2)\n retCube = mask1\n if cluster == 1:\n retCube = mask2\n\n return retCube[:, :, 1:256-14]\n\ndef replace_zero_in_background(originalCube, maskedCube):\n cubecpy = np.copy(originalCube)\n for i in range(cubecpy.shape[0]):\n for j in range(cubecpy.shape[1]):\n if maskedCube[i,j,0] == 0:\n cubecpy[i,j,:] = 0\n return cubecpy\n\ndef preprocess_training_data_from_calibration_no_filters(arr_calib):\n cube = replace_median(arr_calib)\n\n mat = hsi2matrix(cube)\n\n mask1, mask2 = 
hsi_remove_background(mat)\n mask1 = mat2hsi(mask1, arr_calib.shape)\n mask2 = mat2hsi(mask2, arr_calib.shape)\n\n cluster = which_cluster_to_mantain(mask1, mask2)\n retCube = cube\n if cluster == 0:\n retCube = replace_zero_in_background(retCube, mask1)\n else:\n retCube = replace_zero_in_background(retCube, mask2)\n\n return retCube[:, :, 1:256-14]\n\ndef replace_median(cube):\n x,y,z = cube.shape\n for i in range(z):\n rows, cols = np.where(cube[:,:,i] == 0)\n for j in range(len(rows)):\n if rows[j] > 1 and cols[j] > 1 and rows[j] < x - 1 and cols[j] < y - 1:\n wdn = cube[rows[j]-1:rows[j]+2, cols[j]-1: cols[j]+2, i]\n r, _ = np.where(wdn == 0)\n if len(r) == 1:\n wdn = np.where(wdn != 0)\n cube[rows[j], cols[j], i] = np.median(wdn)\n return cube\n\ndef remove_mean_of_spectre(mat):\n return mat - np.mean(mat)\n\n################################ HELPERS #######################################\n\ndef save_pickle(path, filename, p):\n pickle_out = open(os.path.join(path, filename), \"wb\")\n pickle.dump(p, pickle_out)\n pickle_out.close()\n\ndef save_all(path, calib, masked):\n try:\n os.makedirs(path)\n except:\n print(\"Skipped - Directory already created!\")\n\n save_pickle(path, 'calib.pickle', calib)\n save_pickle(path, 'masked.pickle', masked)\n\ndef load_pickle(filename, dirpath):\n path = os.path.join(dirpath, filename)\n pickle_in = open(path, \"rb\")\n return pickle.load(pickle_in)\n\ndef plot_dif_spectrum_refs(refs: list,labels:list, ismat=False, plotTest=True,onlyCurves=False, saveDir = None):\n mats = refs\n if not ismat:\n for i in refs:\n mats.append(hsi2matrix(i))\n\n xmin = mats[0].shape[0]\n for i in mats:\n xmin = min(xmin, i.shape[0])\n\n means = []\n for i in range(len(mats)):\n mats[i] = mats[i][:xmin,:]\n # mats[i] = mats[i] - np.mean(mats[i])\n means.append(np.mean(mats[i], axis=0))\n\n s = \"\"\n if not onlyCurves:\n for i in range(0,len(mats),2):\n s += \"BAC: {}\\n\".format(labels[i//2])\n s += \"RMSE: {}\\nMean: {}\\n\\n\".format(\n math.sqrt(np.mean(np.square(mats[i] - mats[i+1]))), np.mean(mats[i]) - np.mean(mats[i+1]))\n\n plt.figure(figsize=(10,10))\n x = np.linspace(0, mats[0].shape[1], mats[0].shape[1])\n for i in range(len(means)):\n # line, name = \"-\", \"Spt\"\n if plotTest:\n line = '--' if i % 2 == 0 else '-'\n name = 'Train' if i % 2 == 0 else 'Test'\n plt.plot(x, means[i], line, color=COLORS[labels[i//2]],linewidth=2,\n label='{}-{}'.format(name,labels[i//2]))\n\n plt.figlegend(bbox_to_anchor=(1.05, 1), loc='upper left',\n borderaxespad=0., fontsize=12)\n plt.text(175, -0.25, s, size=12)\n \n # s = \"{}\".format(labels[0])\n # for i in range(1,len(labels)):\n # s += \"-x-{}\".format(labels[i])\n\n plt.title(s)\n plt.show()\n if saveDir is not None:\n plt.savefig(saveDir)\n\ndef get_cube_by_index(path, index, filename):\n bac = get_dir_name(path, index)\n return load_pickle(filename, Join(path, bac))\n \n\ndef get_dir_name(path, index):\n return os.listdir(path)[index]\n\ndef show_img_on_wave(cube, layer):\n mat = get_layer(cube, layer)\n plt.imshow(mat, cmap='gray')\n plt.show()\n\ndef plot_spectre(cube, isCube=True):\n mat = cube\n if isCube:\n mat = hsi2matrix(cube)\n nn = np.mean(mat, axis=0)\n x = np.linspace(0, mat.shape[1], mat.shape[1])\n plt.xlabel(\"Comprimento de onda (nm)\")\n plt.ylabel(\"Pseudo-absortância\")\n plt.plot(x,nn)\n\n\ndef remove_blank_lines(mat):\n return mat[~np.all(mat == 0, axis=1)]\n\ndef remove_spectrum(x, s=-1,f=-1):\n ss, ff = 50,210\n if s != -1:\n ss = s\n if f != -1:\n ff = f\n return x[:,ss:ff]\n ",
"_____no_output_____"
],
[
"testpath = Join(SAVESTORE, 'Test')\ntrainpath = Join(SAVESTORE, 'Train')\nindx = [7]\n\nbac_names = []\nmats = []\nfor i in indx:\n tr = get_cube_by_index(trainpath, i, 'mat_nobg.pickle')\n tt = get_cube_by_index(testpath, i, 'mat_nobg.pickle')\n\n mats.append((tr))\n mats.append((tt))\n\nfor i in indx:\n bac_names.append(LABELS[i])\n\n\nplot_dif_spectrum_refs(mats, bac_names, ismat=True, plotTest=True, onlyCurves=True)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d0561aa093980fc02a245c99d26674ff87daf682 | 578,624 | ipynb | Jupyter Notebook | Optim_Project.ipynb | iladan0/Abalone_Age_Prediction | 7012d78d60a673d8031a740ce99442ba2f8b1512 | [
"MIT"
] | null | null | null | Optim_Project.ipynb | iladan0/Abalone_Age_Prediction | 7012d78d60a673d8031a740ce99442ba2f8b1512 | [
"MIT"
] | null | null | null | Optim_Project.ipynb | iladan0/Abalone_Age_Prediction | 7012d78d60a673d8031a740ce99442ba2f8b1512 | [
"MIT"
] | null | null | null | 176.517389 | 61,734 | 0.845411 | [
[
[
"**Student BENREKIA Mohamed Ali (IASD 2021-2022)**",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nfrom scipy.linalg import norm \n\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%load_ext autoreload\n%autoreload 2",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
]
],
[
[
"# Loading data",
"_____no_output_____"
]
],
[
[
"!wget https://raw.githubusercontent.com/nishitpatel01/predicting-age-of-abalone-using-regression/master/Abalone_data.csv",
"_____no_output_____"
],
[
"# Use this code to read from a CSV file.\nimport pandas as pd\nU = pd.read_csv('/content/Abalone_data.csv')",
"_____no_output_____"
],
[
"U.shape",
"_____no_output_____"
],
[
"U.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 4176 entries, 0 to 4175\nData columns (total 9 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Sex 4176 non-null object \n 1 Length 4176 non-null int64 \n 2 Diameter 4176 non-null int64 \n 3 Height 4176 non-null int64 \n 4 Whole_weight 4176 non-null float64\n 5 Shucked_weight 4176 non-null float64\n 6 Viscera_weight 4176 non-null float64\n 7 Shell_weight 4176 non-null float64\n 8 Rings 4176 non-null int64 \ndtypes: float64(4), int64(4), object(1)\nmemory usage: 293.8+ KB\n"
],
[
"U.head()",
"_____no_output_____"
],
[
"U.tail()",
"_____no_output_____"
],
[
"U.Sex=U.Sex.astype('category').cat.codes\nU.head()",
"_____no_output_____"
],
[
"U.describe(include='all')",
"_____no_output_____"
],
[
"U.sample(10)",
"_____no_output_____"
],
[
"U.isnull().sum()",
"_____no_output_____"
],
[
"U.dtypes",
"_____no_output_____"
],
[
"U.hist(figsize=(10,15))",
"_____no_output_____"
],
[
"corr = U.corr()\ncorr",
"_____no_output_____"
],
[
"sns.heatmap(corr, annot=False)",
"_____no_output_____"
],
[
"# split train - validation\n\nshuffle_df = U.sample(frac=1)\n\n# Define a size for your train set \ntrain_size = int(0.8 * len(U))\n\n# Split your dataset \ntrain_set = shuffle_df[:train_size]\nvalid_set = shuffle_df[train_size:]\n\n#split feature target\n\nx_train = train_set.drop(\"Rings\",axis=1).to_numpy()\ny_train = train_set[\"Rings\"]\n\nx_valid = valid_set.drop(\"Rings\",axis=1)\ny_valid = valid_set[\"Rings\"]",
"_____no_output_____"
],
[
"#no need\nmA = x_train.mean(axis=0)\nsA = x_train.std(axis=0)\nx_train = (x_train-mA)/sA\nx_valid = (x_valid-mA)/sA",
"_____no_output_____"
],
[
"# no need\nm = y_train.mean()\ny_train = y_train-m\ny_valid = y_valid-m",
"_____no_output_____"
],
[
"x_train.shape[1]",
"_____no_output_____"
]
],
[
[
"# Problem definition (Linear regression)",
"_____no_output_____"
]
],
[
[
"class RegPb(object):\n '''\n A class for regression problems with linear models.\n \n Attributes:\n X: Data matrix (features)\n y: Data vector (labels)\n n,d: Dimensions of X\n loss: Loss function to be considered in the regression\n 'l2': Least-squares loss\n lbda: Regularization parameter\n '''\n \n # Instantiate the class\n def __init__(self, X, y,lbda=0,loss='l2'):\n self.X = X\n self.y = y\n self.n, self.d = X.shape\n self.loss = loss\n self.lbda = lbda\n \n \n # Objective value\n def fun(self, w):\n if self.loss=='l2':\n return np.square(self.X.dot(w) - self.y).mean() + self.lbda * norm(w) ** 2\n else:\n return np.square(self.X.dot(w) - self.y).mean()\n\n\n \"\"\"\n # Partial objective value\n def f_i(self, i, w):\n if self.loss=='l2':\n return norm(self.X[i].dot(w) - self.y[i]) ** 2 / (2.) + self.lbda * norm(w) ** 2\n else:\n return norm(self.X[i].dot(w) - self.y[i]) ** 2 / (2.)\n \"\"\"\n \n # Full gradient computation\n def grad(self, w):\n if self.loss=='l2':\n return self.X.T.dot(self.X.dot(w) - self.y) * (2/self.n) + 2 * self.lbda * w\n else:\n return self.X.T.dot(self.X.dot(w) - self.y) * (2/self.n)\n \n # Partial gradient\n def grad_i(self,i,w):\n x_i = self.X[i]\n if self.loss=='l2':\n return (2/self.n) * (x_i.dot(w) - self.y[i]) * x_i + 2 * self.lbda*w\n else:\n return (2/self.n) * (x_i.dot(w) - self.y[i]) * x_i\n\n \"\"\"\n # Lipschitz constant for the gradient\n def lipgrad(self):\n if self.loss=='l2':\n L = norm(self.X, ord=2) ** 2 / self.n + self.lbda\n \"\"\"\n",
"_____no_output_____"
],
[
"lda = 1. / x_train.shape[0] ** (0.5)\npblinreg = RegPb(x_train, y_train, lbda=lda, loss='l2')",
"_____no_output_____"
]
],
[
[
"**PCA**",
"_____no_output_____"
]
],
[
[
"U, s, V = np.linalg.svd(x_train.T.dot(x_train))",
"_____no_output_____"
],
[
"eig_values, eig_vectors = s, U\nexplained_variance=(eig_values / np.sum(eig_values))*100\nplt.figure(figsize=(8,4))\nplt.bar(range(8), explained_variance, alpha=0.6)\nplt.ylabel('Percentage of explained variance')\nplt.xlabel('Dimensions')",
"_____no_output_____"
],
[
"\n\n# calculating our new axis\npc1 = x_train.dot(eig_vectors[:,0])\npc2 = x_train.dot(eig_vectors[:,1])\n\n",
"_____no_output_____"
],
[
"plt.plot(pc1, pc2, '.')\nplt.axis('equal');",
"_____no_output_____"
]
],
[
[
"# Btach Gradietn Descent",
"_____no_output_____"
]
],
[
[
"def batch_grad(w0,problem, stepchoice=0, lr= 0.01, n_iter=1000,verbose=False):\n \n # objective history\n objvals = []\n # Number of samples\n n = problem.n\n \n # Initial value of current iterate \n w = w0.copy()\n nw = norm(w)\n # Current objective\n obj = problem.fun(w) \n objvals.append(obj);\n # Initialize iteration counter\n k=0\n \n # Plot initial quantities of interest\n if verbose:\n print(\"Gradient Descent\")\n print(' | '.join([name.center(8) for name in [\"iter\", \"MSE_Loss\"]]))\n print(' | '.join([(\"%d\" % k).rjust(8),(\"%.2e\" % obj).rjust(8)]))\n # Main loop\n while (k < n_iter ):#and nw < 10**100\n # gradient calculation\n gr = np.zeros(d)\n gr = problem.grad(w)\n \n\n \n \n if stepchoice==0:\n w[:] = w - lr * gr\n elif stepchoice>0:\n if (k*nb*10) % n == 0:\n sk = float(lr/stepchoice)\n w[:] = w - sk * gr\n \n nw = norm(w) #Computing the norm to measure divergence \n obj = problem.fun(w)\n \n \n \n k += 1\n # Plot quantities of interest at the end of every epoch only\n objvals.append(obj)\n if verbose:\n print(' | '.join([(\"%d\" % k).rjust(8),(\"%.2e\" % obj).rjust(8)])) \n \n # End of main loop\n #################\n \n # Plot quantities of interest for the last iterate (if needed)\n if k % n_iter > 0:\n objvals.append(obj)\n if verbose:\n print(' | '.join([(\"%d\" % k).rjust(8),(\"%.2e\" % obj).rjust(8)])) \n \n # Outputs\n \n w_output = w.copy()\n \n return w_output, np.array(objvals)",
"_____no_output_____"
]
],
[
[
"**Different Learning rates**",
"_____no_output_____"
]
],
[
[
"nb_epochs = 100\nn = pblinreg.n\nd = pblinreg.d\nw0 = np.zeros(d)\nvalsstep0 = [0.1,0.01,0.001,0.0001,0.00001]\nnvals = len(valsstep0)\n\nobjs = np.zeros((nvals,nb_epochs+1))\n\nfor val in range(nvals):\n w_temp, objs_temp = batch_grad(w0,pblinreg, lr=valsstep0[val], n_iter=nb_epochs)\n objs[val] = objs_temp",
"_____no_output_____"
],
[
"epochs = range(1,102)\nplt.figure(figsize=(7, 5))\n\nfor val in range(nvals):\n plt.plot(epochs, objs[val], label=\"BG - \"+str(valsstep0[val]), lw=2)\nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs\", fontsize=14)\nplt.ylabel(\"Objective\", fontsize=14)\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Accelerated Gradient Descent",
"_____no_output_____"
]
],
[
[
"def accelerated_grad(w_0,problem,lr=0.001,method=\"nesterov\",momentum=None,n_iter=100,verbose=False): \n \"\"\"\n A generic code for Nesterov's accelerated gradient method.\n \n Inputs:\n w0: Initial vector\n problem: Problem structure\n lr: Learning rate\n method: Type of acceleration technique that is used\n 'nesterov': Accelerated gradient for convex functions (Nesterov)\n momentum: Constant value for the momentum parameter (only used if method!='nesterov')\n n_iter: Number of iterations\n verbose: Boolean value indicating whether the outcome of every iteration should be displayed\n \n Outputs:\n z_output: Final iterate of the method\n objvals: History of function values in z (output as a Numpy array of length n_iter+1)\n \"\"\"\n \n ############\n # Initial step: Compute and plot some initial quantities\n\n # objective history\n objvals = []\n \n \n # Initial value of current and next iterates \n w = w0.copy()\n w_new = w0.copy()\n z = w0.copy()\n \n if method=='nesterov':\n # Initialize parameter sequence\n tk = 0\n tkp1 = 1\n momentum = 0\n \n # Initialize iteration counter\n k=0\n \n # Initial objective\n obj = problem.fun(z)\n objvals.append(obj);\n \n # Plot the initial values if required\n if verbose:\n print(\"Accelerated Gradient/\"+method)\n print(' | '.join([name.center(8) for name in [\"iter\", \"fval\"]]))\n print(' | '.join([(\"%d\" % k).rjust(8),(\"%.2e\" % obj).rjust(8)]))\n \n #######################\n # Main loop\n while (k < n_iter):\n \n # Perform the accelerated iteration\n \n # Gradient step\n g = problem.grad(z)\n w_new[:] = z - lr * g\n # Momentum step\n z[:] = w_new + momentum*(w_new-w)\n # Update sequence\n w[:] = w_new[:]\n \n \n # Adjusting the momentum parameter if needed\n if method=='nesterov':\n tkp1 = 0.5*(1+np.sqrt(1+4*(tk**2)))\n momentum = (tk-1)/tkp1\n tk = tkp1\n \n # Compute and plot the new objective value and distance to the minimum\n \n obj = problem.fun(z)\n objvals.append(obj)\n \n # Plot these values if required\n if verbose:\n print(' | '.join([(\"%d\" % k).rjust(8),(\"%.2e\" % obj).rjust(8)])) \n \n # Increment the iteration counter\n k += 1\n \n # End loop\n #######################\n \n \n # Output\n z_output = z.copy()\n \n return z_output, np.array(objvals)",
"_____no_output_____"
]
],
[
[
"**GD Vs NAGD**",
"_____no_output_____"
]
],
[
[
"nb_epochs = 100\nn = pblinreg.n\nd = pblinreg.d\nw0 = np.zeros(d)\n\nlearning_rate = 0.01\n\nw_g, obj_g = batch_grad(w0,pblinreg, lr=learning_rate, n_iter=nb_epochs)\nw_n, obj_n = accelerated_grad(w0,pblinreg, lr=learning_rate, n_iter=nb_epochs)\n",
"_____no_output_____"
],
[
"epochs = range(1,102)\nplt.figure(figsize=(7, 5))\n\nplt.plot(epochs, obj_g, label=\"GD\", lw=2)\nplt.plot(epochs, obj_n, label=\"NAGD\", lw=2)\nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs\", fontsize=14)\nplt.ylabel(\"Objective\", fontsize=14)\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Stochastic gradient Descent",
"_____no_output_____"
]
],
[
[
"def stoch_grad(w0,problem, stepchoice=0, lr= 0.01, n_iter=1000,nb=1,average=0,scaling=0,with_replace=False,verbose=False): \n \"\"\"\n A code for gradient descent with various step choices.\n \n Inputs:\n w0: Initial vector\n problem: Problem structure\n problem.fun() returns the objective function, which is assumed to be a finite sum of functions\n problem.n returns the number of components in the finite sum\n problem.grad_i() returns the gradient of a single component f_i\n stepchoice: Strategy for computing the stepsize \n 0: Constant step size equal to lr\n 1: Step size decreasing in lr/ stepchoice\n lr: Learning rate\n n_iter: Number of iterations, used as stopping criterion\n nb: Number of components drawn per iteration/Batch size \n 1: Classical stochastic gradient algorithm (default value)\n problem.n: Classical gradient descent (default value)\n average: Indicates whether the method computes the average of the iterates \n 0: No averaging (default)\n 1: With averaging\n scaling: Use a diagonal scaling\n 0: No scaling (default)\n 1: Average of magnitudes (RMSProp)\n 2: Normalization with magnitudes (Adagrad)\n with_replace: Boolean indicating whether components are drawn with or without replacement\n True: Components drawn with replacement\n False: Components drawn without replacement (Default)\n verbose: Boolean indicating whether information should be plot at every iteration (Default: False)\n \n Outputs:\n w_output: Final iterate of the method (or average if average=1)\n objvals: History of function values (Numpy array of length n_iter at most)\n \"\"\"\n ############\n # Initial step: Compute and plot some initial quantities\n\n # objective history\n objvals = []\n \n # iterates distance to the minimum history\n normits = []\n \"\"\"\n # Lipschitz constant\n L = problem.lipgrad()\n \"\"\"\n # Number of samples\n n = problem.n\n \n # Initial value of current iterate \n w = w0.copy()\n nw = norm(w)\n \n # Average (if needed)\n if average:\n wavg=np.zeros(len(w))\n \n #Scaling values\n if scaling>0:\n mu=1/(2 *(n ** (0.5)))\n v = np.zeros(d)\n beta = 0.8\n\n # Initialize iteration counter\n k=0\n \n # Current objective\n obj = problem.fun(w) \n objvals.append(obj);\n\n \n # Plot initial quantities of interest\n if verbose:\n print(\"Stochastic Gradient, batch size=\",nb,\"/\",n)\n print(' | '.join([name.center(8) for name in [\"iter\", \"MSE_Loss\"]]))\n print(' | '.join([(\"%d\" % k).rjust(8),(\"%.2e\" % obj).rjust(8)]))\n \n ################\n # Main loop\n while (k < n_iter ):#and nw < 10**100\n # Draw the batch indices\n ik = np.random.choice(n,nb,replace=with_replace)# Batch gradient\n # Stochastic gradient calculation\n sg = np.zeros(d)\n for j in range(nb):\n gi = problem.grad_i(ik[j],w)\n sg = sg + gi\n sg = (1/nb)*sg\n \n if scaling>0:\n if scaling==1:\n # RMSProp update\n v = beta*v + (1-beta)*sg*sg\n elif scaling==2:\n # Adagrad update\n v = v + sg*sg \n sg = sg/(np.sqrt(v+mu))\n\n \n \n if stepchoice==0:\n w[:] = w - lr * sg\n elif stepchoice>0:\n if (k*nb*10) % n == 0:\n sk = float(lr/stepchoice)\n w[:] = w - sk * sg\n \n nw = norm(w) #Computing the norm to measure divergence \n \n if average:\n # If average, compute the average of the iterates\n wavg = k/(k+1) *wavg + w/(k+1) \n obj = problem.fun(wavg)\n else:\n obj = problem.fun(w)\n \n \n \n k += 1\n # Plot quantities of interest at the end of every epoch only\n if k % int(n/nb) == 0:\n objvals.append(obj)\n if verbose:\n print(' | '.join([(\"%d\" % k).rjust(8),(\"%.2e\" % obj).rjust(8)])) \n \n # End of main 
loop\n #################\n \n # Plot quantities of interest for the last iterate (if needed)\n if (k*nb) % n > 0:\n objvals.append(obj)\n if verbose:\n print(' | '.join([(\"%d\" % k).rjust(8),(\"%.2e\" % obj).rjust(8)])) \n \n # Outputs\n if average:\n w_output = wavg.copy()\n else:\n w_output = w.copy()\n \n return w_output, np.array(objvals)",
"_____no_output_____"
]
],
[
[
"**Constant Vs Decreasing LR**",
"_____no_output_____"
]
],
[
[
"nb_epochs = 60\nn = pblinreg.n\nd = pblinreg.d\nw0 = np.zeros(d)\n\n# Run a - GD with constant stepsize\nw_a, obj_a = stoch_grad(w0,pblinreg, n_iter=nb_epochs,nb=n)\n\n\n# Run b - Stochastic gradient with constant stepsize\n# The version below may diverges, in which case the bound on norm(w) in the code will be triggered\nw_b, obj_b = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1)\n\n# Run Gradient descent with decreasing stepsize\nw_c, obj_c = stoch_grad(w0,pblinreg, stepchoice=0.5, lr=0.2, n_iter=nb_epochs,nb=n)\n# Run Stochastic gradient with decreasing stepsize\nw_d, obj_d = stoch_grad(w0,pblinreg, stepchoice=0.5, lr=0.2, n_iter=nb_epochs*n,nb=1)",
"_____no_output_____"
],
[
"epochs = range(1,62)\n\nplt.figure(figsize=(7, 5))\nplt.plot(epochs, obj_a, label=\"GD - const-lbda\", lw=2)\nplt.plot(epochs, obj_b, label=\"SG - const-lbda\", lw=2)\nplt.plot(epochs, obj_c, label=\"GD - decr-lbda\", lw=2)\nplt.plot(epochs, obj_d, label=\"SG - decr-lbda\", lw=2)\nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs\", fontsize=14)\nplt.ylabel(\"Objective MSE\", fontsize=14)\nplt.legend()\n\n\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Different Constant LR**",
"_____no_output_____"
]
],
[
[
"nb_epochs = 60\nn = pblinreg.n\nd = pblinreg.d\nw0 = np.zeros(d)\nvalsstep0 = [0.01,0.001,0.0001,0.00001]\nnvals = len(valsstep0)\n\nobjs = np.zeros((nvals,nb_epochs+1))\n\nfor val in range(nvals):\n w_temp, objs_temp = stoch_grad(w0,pblinreg, lr=valsstep0[val], n_iter=nb_epochs*n,nb=1)\n objs[val] = objs_temp",
"_____no_output_____"
],
[
"plt.figure(figsize=(7, 5))\n\nfor val in range(nvals):\n plt.plot(epochs, objs[val], label=\"SG - \"+str(valsstep0[val]), lw=2)\nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs\", fontsize=14)\nplt.ylabel(\"Objective\", fontsize=14)\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Different decreasing LR**",
"_____no_output_____"
]
],
[
[
"nb_epochs = 60\nn = pblinreg.n\nnbset = 1\nw0 = np.zeros(d)\n\ndecstep = [1,2,10,20,100]\nnvals = len(decstep)\n\nobjs = np.zeros((nvals,nb_epochs+1))\n\nfor val in range(nvals):\n _, objs[val] = stoch_grad(w0,pblinreg,stepchoice=decstep[val],lr=0.02, n_iter=nb_epochs*n,nb=1)\n\n",
"_____no_output_____"
],
[
"plt.figure(figsize=(7, 5))\n\nfor val in range(nvals):\n plt.semilogy(epochs, objs[val], label=\"SG - \"+str(decstep[val]), lw=2)\nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs\", fontsize=14)\nplt.ylabel(\"Objective\", fontsize=14)\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Different Batch size**",
"_____no_output_____"
]
],
[
[
"nb_epochs = 100\nn = pblinreg.n\nw0 = np.zeros(d)\n\n\n\n# Stochastic gradient (batch size 1)\nw_a, obj_a= stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1)\n# Batch stochastic gradient (batch size n/100)\nnbset=int(n/100)\nw_b, obj_b = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*100,nb=nbset)\n# Batch stochastic gradient (batch size n/10)\nnbset=int(n/10)\nw_c, obj_c = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*10),nb=nbset)\n# Batch stochastic gradient (batch size n/2)\nnbset=int(n/2)\nw_d, obj_d = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*2),nb=nbset)\n\n# Gradient descent (batch size n, taken without replacement)\nw_f, obj_f = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs),nb=n)",
"_____no_output_____"
],
[
"nbset=int(n/100)\nw_b, obj_b = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*100),nb=nbset,verbose=True)\nprint(len(obj_b))",
"_____no_output_____"
],
[
"epochs = range(1,102)\nplt.figure(figsize=(7, 5))\nplt.semilogy(epochs, obj_a, label=\"SG (batch=1)\", lw=2)\nplt.semilogy(epochs, obj_b, label=\"Batch SG - n/100\", lw=2)\nplt.semilogy(epochs, obj_c, label=\"Batch SG - n/10\", lw=2)\nplt.semilogy(epochs, obj_d, label=\"Batch SG - n/2\", lw=2)\nplt.semilogy(epochs, obj_f, label=\"GD\", lw=2)\nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs\", fontsize=14)\nplt.ylabel(\"Objective\", fontsize=14)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(7, 5))\nplt.plot(epochs, obj_a, label=\"SG (batch=1)\", lw=2)\nplt.plot(epochs, obj_b, label=\"Batch SG - n/100\", lw=2)\nplt.plot(epochs, obj_c, label=\"Batch SG - n/10\", lw=2)\nplt.plot(epochs, obj_d, label=\"Batch SG - n/2\", lw=2)\nplt.plot(epochs, obj_f, label=\"GD\", lw=2)\nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs\", fontsize=14)\nplt.ylabel(\"Objective\", fontsize=14)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
[
[
"# Other variants for SGD",
"_____no_output_____"
],
[
"**batch with replacement**",
"_____no_output_____"
]
],
[
[
"#Batch with replacement for GD, SGD and Batch SGD\nnb_epochs = 100\nn = pblinreg.n\nw0 = np.zeros(d)\n\nnruns = 3\n\nfor i in range(nruns):\n # Run standard stochastic gradient (batch size 1)\n _, obj_a= stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1,with_replace=True)\n # Batch stochastic gradient (batch size n/10)\n nbset=int(n/2)\n _, obj_b= stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*n/nbset),nb=nbset,with_replace=True)\n # Batch stochastic gradient (batch size n, with replacement)\n nbset=n\n _, obj_c=stoch_grad(w0,pblinreg, lr=0.0001, n_iter=int(nb_epochs*n/nbset),nb=nbset,with_replace=True)\n if i<nruns-1:\n plt.semilogy(obj_a,color='orange',lw=2)\n plt.semilogy(obj_b,color='green', lw=2)\n plt.semilogy(obj_c,color='blue', lw=2)\nplt.semilogy(obj_a,label=\"SG\",color='orange',lw=2)\nplt.semilogy(obj_b,label=\"batch n/2\",color='green', lw=2)\nplt.semilogy(obj_c,label=\"batch n\",color='blue', lw=2) \n\nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs \", fontsize=14)\nplt.ylabel(\"Objective \", fontsize=14)\nplt.legend()",
"_____no_output_____"
]
],
[
[
"**Averaging**",
"_____no_output_____"
]
],
[
[
"# Comparison of stochastic gradient with and without averaging\nnb_epochs = 100\nn = pblinreg.n\nw0 = np.zeros(d)\n\n\n # Run standard stochastic gradient without averaging\n_, obj_a =stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1)\n # Run stochastic gradient with averaging\n_, obj_b =stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1,average=1)\n\n# Plot the results\nplt.figure(figsize=(7, 5))\n\nplt.semilogy(obj_a,label='SG',color='orange',lw=2)\nplt.semilogy(obj_b,label='SG+averaging',color='red', lw=2)\n \nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs (log scale)\", fontsize=14)\nplt.ylabel(\"Objective (log scale)\", fontsize=14)\nplt.legend()",
"_____no_output_____"
]
],
[
[
"**Diagonal Scaling**",
"_____no_output_____"
]
],
[
[
"# Comparison of stochastic gradient with and without diagonal scaling\n\nnb_epochs = 60\nn = pblinreg.n\nw0 = np.zeros(d)\n\n# Stochastic gradient (batch size 1) without diagonal scaling\nw_a, obj_a= stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1)\n# Stochastic gradient (batch size 1) with RMSProp diagonal scaling\nw_b, obj_b = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1,average=0,scaling=1)\n# Stochastic gradient (batch size 1) with Adagrad diagonal scaling - Constant step size\nw_c, obj_c = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1,average=0,scaling=2)\n# Stochastic gradient (batch size 1) with Adagrad diagonal scaling - Decreasing step size\nw_d, obj_d = stoch_grad(w0,pblinreg, lr=0.0001, n_iter=nb_epochs*n,nb=1,average=0,scaling=2)",
"_____no_output_____"
],
[
"# Plot the results - Comparison of stochastic gradient with and without diagonal scaling\n# In terms of objective value (logarithmic scale)\nplt.figure(figsize=(7, 5))\nplt.semilogy(obj_a, label=\"SG\", lw=2)\nplt.semilogy(obj_b, label=\"SG/RMSProp\", lw=2)\nplt.semilogy(obj_c, label=\"SG/Adagrad (Cst)\", lw=2)\nplt.semilogy(obj_d, label=\"SG/Adagrad (Dec)\", lw=2)\nplt.title(\"Convergence plot\", fontsize=16)\nplt.xlabel(\"#epochs (log scale)\", fontsize=14)\nplt.ylabel(\"Objective (log scale)\", fontsize=14)\nplt.legend()\nplt.show",
"_____no_output_____"
]
],
[
[
"# Regression (Lasso with iterative soft thersholding)",
"_____no_output_____"
],
[
"**Lasso regression with ISTA**",
"_____no_output_____"
]
],
[
[
"#Minimization fucntion with l1 norm (Lasso regression)\ndef cost(w, X, y, lbda):\n return np.square(X.dot(w) - y).mean() + lbda * norm(w,1) ",
"_____no_output_____"
],
[
"def ista_solve( A, d, lbdaa ):\n \"\"\"\n Iterative soft-thresholding solves the minimization problem\n Minimize |Ax-d|_2^2 + lambda*|x|_1 (Lasso regression)\n \"\"\"\n max_iter = 300\n objvals = []\n tol = 10**(-3)\n tau = 1.5/np.linalg.norm(A,2)**2\n n = A.shape[1]\n w = np.zeros((n,1))\n for j in range(max_iter):\n z = w - tau*(A.T@(A@w-d))\n w_old = w\n w = np.sign(z) * np.maximum(np.abs(z)-tau*lbdaa, np.zeros(z.shape))\n if j % 100 == 0:\n obj = cost(w,A,d,lbdaa)\n objvals.append(obj)\n if np.linalg.norm(w - w_old) < tol:\n break\n return w, objvals",
"_____no_output_____"
],
[
"#we iterate over multiple values of lambda\nlmbdas = [0.000001, 0.000002, 0.00001, 0.00002, 0.0001, 0.0002, 0.001, 0.002, 0.01, 0.02, 0.1, 0.2, 1, 2, 10, 20]\nmse_list=[]\nfor lda in lmbdas:\n w_star, obj_x = ista_solve_hot( x_train, y_train, lda)\n mse_list.append(obj_x[-1])",
"_____no_output_____"
],
[
"x_range = range(1,len(lmbdas)+1)\nplt.figure(figsize=(7, 5))\nplt.plot(x_range,mse_list, label=\"Lasso-ISTA\", lw=2)\n\nplt.title(\"Best Lambda factor\", fontsize=16)\nplt.xlabel(\"Lambda\", fontsize=14)\nplt.xticks(np.arange(len(lmbdas)),lmbdas,rotation=40)\nplt.ylabel(\"Objective Lasso reg\", fontsize=14)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"w_star, obj_x = ista_solve_hot( x_train, y_train, 0.00001)",
"_____no_output_____"
]
],
[
[
"# Performance on Test set",
"_____no_output_____"
]
],
[
[
"#MSE on lasso-ISTA\ncost(w_star, x_valid, y_valid, 0.00001)",
"_____no_output_____"
],
[
"# MSE on best sgd algo\ncost(w_b, x_valid, y_valid, 0.00001)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d05624363b2bc1f0dd2e3cdcd459d53f563e0fef | 124,170 | ipynb | Jupyter Notebook | GammaTransport.ipynb | Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo | 71de9060ea5dd3ddb443a1362f68eb4bebf62efe | [
"MIT"
] | null | null | null | GammaTransport.ipynb | Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo | 71de9060ea5dd3ddb443a1362f68eb4bebf62efe | [
"MIT"
] | null | null | null | GammaTransport.ipynb | Tatiana-Krivosheev/Radiation-Transport-with-Monte-Carlo | 71de9060ea5dd3ddb443a1362f68eb4bebf62efe | [
"MIT"
] | null | null | null | 125.805471 | 33,764 | 0.858452 | [
[
[
"# The Monte Carlo Simulation of Radiation Transport",
"_____no_output_____"
],
[
"WE will discuss essentiall physics and method to do gamma quanta (photons with high enough energy) radiation transport using Monte Carlo methods. We will covers interactions processes, basics of radiation passing through matter as well as Monte Carlo method and how it helps with radiation propagation. ",
"_____no_output_____"
],
[
"## Glossary\n- $h$ Plank's constant\n- $\\hbar$ reduced Plank's constant, $h/2\\pi$\n- $\\omega$ photon circular frequency, \n- $\\hbar \\omega$ photon energy\n- $\\lambda$ photon wavelength\n- $\\theta$ scattering angle, between incoming and outgoing photon\n- $\\phi$ azimuthal angle\n- $c$ speed of light in vacuum\n- $m_e$ electron mass\n- $r_e$ classical electron radius\n- $N_A$ Avogadro Constant, 6.02214076$\\times$10$^{23}$ mol$^{-1}$",
"_____no_output_____"
],
[
"## Basic physics",
"_____no_output_____"
],
[
"We would cover typical energies and wave length when photons are behaving like a point-like particle interaction with matter.",
"_____no_output_____"
],
[
"### Units",
"_____no_output_____"
],
[
"Common unit for a photon energy would be electron-volt (eV). This is the kinetic energy electron aquire when it moves in electric field (say, between plates of the capacitor) with potential difference 1Volt. This is very small energy and is equal to about $1.6\\times10^{-19}$Joules. Typical energies we are interested inare in the 1keV to 100MeV range.",
"_____no_output_____"
],
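[
"As a quick numeric illustration of the unit (a small added snippet; the conversion factor is the standard value of the elementary charge), we can convert a few typical photon energies to Joules:\n\n```python\neV = 1.602e-19 # one electron-volt expressed in Joules\nfor E_MeV in (0.001, 1.0, 100.0): # 1 keV, 1 MeV, 100 MeV\n    print(E_MeV, 'MeV =', E_MeV * 1.0e6 * eV, 'J')\n```",
"_____no_output_____"
],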
[
"### Spatial size and wave length",
"_____no_output_____"
],
[
"Photons are massless particles, and it is very easy to compute photon \"size\" which is photon wavelength.\n$$ \\lambda = \\frac{hc}{E_\\gamma} = \\frac{hc}{\\hbar \\omega} = \\frac{2 \\pi c}{\\omega}$$\nwhere $\\lambda$ is wavelength, $h$ is Plank's constant, $c$ is speed of light and $E_\\gamma$ is photon energy. For example, lets compute wavelength for photon with energy 1eV.",
"_____no_output_____"
]
],
[
[
"h = 6.625e-34\nc = 3e8\nhw = 1.0 * 1.6e-19 # eV\nλ = h*c/hw\nprint(f\"Photon wavelength = {λ*1.0e9} nanometers\")",
"Photon wavelength = 1242.1875 nanometers\n"
]
],
[
[
"Thus, for 1keV photon we will get wave length about 1.2 nm, and for 1MeV photon we will get wave length about $1.2\\times10^{-3}$nm.",
"_____no_output_____"
],
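[
"A quick check of these numbers (an added illustrative snippet that reuses the constants `h` and `c` defined in the cell above):\n\n```python\nfor E_MeV in (0.001, 1.0): # 1 keV and 1 MeV\n    hw = E_MeV * 1.0e6 * 1.6e-19 # photon energy in Joules\n    print(E_MeV, 'MeV ->', h*c/hw*1.0e9, 'nm')\n```",
"_____no_output_____"
],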
[
"FOr comparison, typical atom size is from 0.1nm (He) to 0.4nm (Fr and other heavy). Therefore, for most interactions between photon and atoms in our enery range we could consider it particles, not waves.",
"_____no_output_____"
],
[
"## Basics of Monte Carlo methods",
"_____no_output_____"
],
[
"Was first introduced by Conte du Buffon, as needle dropping experiment to calculate value of $\\pi$. Laplace extended the example of the CduB by using sampling in the square to calculate value of $\\pi$. It is a very general method of stochastic integration of the function. Was successfully applied to the particles (neutron in this case) transport by Enrico Fermi. Since growing applications of computers it is growing exponentially in use - finances, radiation therapy, machine learning, astrophysics, optimizations, younameit.",
"_____no_output_____"
],
[
"Let's try to calculate $\\pi$ with the Laplace method, namely sampe points uniformly in the ",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nN = 1000 # number of points to sample\n\nx = 2.0*np.random.random(N) - 1.0\ny = 2.0*np.random.random(N) - 1.0\n\nunitCircle = plt.Circle((0, 0), 1.0, color='r', fill=False)\n\nfig, ax = plt.subplots(1, 1)\n\nax.plot(x, y, 'bo', label='Sampling in square')\nax.add_artist(unitCircle)\nplt.axhline(0, color='grey')\nplt.axvline(0, color='grey')\nplt.title(\"Sampling in square\")\nplt.show()\n",
"_____no_output_____"
],
[
"r = np.sqrt(x*x + y*y)\n#print(r)\npinside = r[r<=1.0]\nNinside = len(pinside)\nprint(4.0*Ninside/N)",
"3.08\n"
]
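,
[
"# Added illustration: a rough statistical error estimate for the estimate above.\n# The hit fraction p = Ninside/N is binomial, so the uncertainty of 4*p scales as 1/sqrt(N).\np = Ninside / N\nerr = 4.0 * np.sqrt(p * (1.0 - p) / N)\nprint('pi estimate =', 4.0*p, '+/-', err)",
"_____no_output_____"
]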
],
[
[
"Result shall be close to $\\pi$",
"_____no_output_____"
],
[
"## Basic Photons Interactions with atoms",
"_____no_output_____"
],
[
"There are several interaction processess of photons with media.",
"_____no_output_____"
],
[
"### Compton Scattering",
"_____no_output_____"
],
[
"Compton scattering is described by Klein-Nishina formula with energy of scattered photon directly tied to incoming energy and scattering angle\n$$\n\\hbar \\omega'=\\frac{\\hbar\\omega}{1+\\frac{\\hbar \\omega}{m_e c^2} (1 - \\cos{\\theta})}\n$$\nwhere prime marks particle after scattering. It is clear to see that for backscattering photon ($\\theta=\\pi$, $\\cos{\\theta}=-1$) the energy of scattered photon reach minimum, which means scattered photon energy has limits\n$$\n\\frac{\\hbar \\omega }{1 + 2\\hbar\\omega/m_ec^2} \\le \\hbar\\omega' \\le \\hbar\\omega\n$$",
"_____no_output_____"
],
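[
"As a quick worked example of this formula (an added snippet; the same kinematics is implemented as reusable functions further below), for a 1 MeV photon the backscattered energy comes out to about 0.2 MeV:\n\n```python\nmec2 = 0.511 # electron rest energy, MeV\nhw = 1.0 # incoming photon energy, MeV\nhwp_back = hw/(1.0 + 2.0*hw/mec2) # scattered energy at theta = pi\nprint('minimum scattered energy for a 1 MeV photon:', hwp_back, 'MeV')\n```",
"_____no_output_____"
],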
[
"Scattering cross-section (you could think of this as denormalized probability to be scattered to a given enegy)\n$$\n\\frac{d\\sigma}{d\\hbar\\omega'} = \\pi r_e^2 \\frac{m_ec^2}{(\\hbar\\omega)^2} \\lbrace \\frac{\\hbar\\omega}{\\hbar\\omega'} + \\frac{\\hbar\\omega'}{\\hbar\\omega} +\n\\left ( \\frac{m_ec^2}{\\hbar\\omega'} - \\frac{m_ec^2}{\\hbar\\omega} \\right )^2 - \n2m_ec^2 \\left ( \\frac{1}{\\hbar\\omega'} - \\frac{1}{\\hbar\\omega} \\right ) \\rbrace\n$$\n\nFull cross-section, where $x=2 \\hbar\\omega/m_e c^2$ is double relative photon enery.\n$$\n\\sigma=2\\pi r_e^2\\frac{1}{x}\\lbrace \\left ( 1 - \\frac{4}{x} - \\frac{8}{x^2} \\right ) \\log{(1+x) +\\frac{1}{2} + \\frac{8}{x}-\\frac{1}{2(1+x)^2}} \\rbrace\n$$\n\nThen we could divide partial cross-section by total cross-section and get probability of scattered photon energy for different incoming photons. Lets plot few graphs. As one can see, cross-section has dimension of area. They are very small, therefore cross-sections are measured in barns, one barn being $10^-{24}$ centimeter squared.\n\nLet's for reference add expression how to compute angular differential cross-section\n$$\n\\frac{d\\sigma}{d\\omicron'} = \\frac{1}{2} r_e^2 \\left( \\frac{\\hbar\\omega'}{\\hbar\\omega}\\right)^2 \\left(\\frac{\\hbar\\omega}{\\hbar\\omega'} + \\frac{\\hbar\\omega'}{\\hbar\\omega} - \\sin^2{\\theta}\\right)\n$$",
"_____no_output_____"
],
[
"Let's move to more appropriate units: energy would be always in MeV, unit of length for cross-sections would be in femtometers (1fm = $10^{-15}m$). Barn is 100 femtometers squa.",
"_____no_output_____"
]
],
[
[
"# usefule constants\nMeC2 = 0.511 # in MeV\nRe = 2.82 # femtometers",
"_____no_output_____"
],
[
"# main functions to deal with cross-sections\ndef hw_prime(hw, cos_theta):\n \"\"\"computes outgoing photon energy vs cosine of the scattered angle\"\"\"\n hwp = hw/(1.0 + (1.0 - cos_theta)*hw/MeC2)\n return hwp\n\ndef cosθ_from_hwp(hw, hwp):\n return 1.0 - (MeC2/hwp - MeC2/hw)\n\ndef hwp_minimum(hw):\n \"\"\"Computes minimum scattere energy in MeV given incoming photon energy hw\"\"\"\n return hw/(1.0 + 2.0*hw/MeC2)\n\ndef total_cross_section(hw):\n \"\"\"Klein-Nishina total cross-section, LDL p.358, eq (86.16)\"\"\"\n if hw <= 0.0:\n raise RuntimeError(f\"Photon energy is negative: {hw}\")\n x = 2.0 * hw / MeC2\n q = 1.0/x\n z = (1.0 + x)\n \n σ = 2.0*np.pi*Re*Re * q * ((1.0 - 4.0*q - 8.0*q*q)*np.log(z) + 0.5 + 8.0*q - 0.5/z/z)\n return σ\n\ndef diff_cross_section_dhwp(hw, hwp):\n \"\"\"Differential cross-section over outgoing photon energy\"\"\"\n if hw <= 0.0:\n raise RuntimeError(f\"Photon energy is negative or zero: {hw}\")\n \n if hwp <= 0.0:\n raise RuntimeError(f\"Scattered photon energy is negative or zero: {hwp}\")\n\n if hwp < hwp_minimum(hw): # outgoing energy cannot be less than minimum allowed\n return 0.0\n\n ei = MeC2/hw\n eo = MeC2/hwp\n\n dσ_dhwp = np.pi*Re*Re * (ei/hw) * (ei/eo + eo/ei + (eo-ei)**2 - 2.0*(eo-ei))\n return dσ_dhwp\n\ndef diff_cross_section_dOp(hw, θ):\n \"\"\"Differential cross-section over outgoing photon differential angle\"\"\"\n cst = np.cos(θ)\n hwp = hw_prime(hw, cst)\n rhw = hwp/hw\n dσ_dOp = 0.5*np.pi*Re*Re * rhw*rhw*(rhw + 1.0/rhw - (1.0 - cst)*(1.0 + cst))\n return dσ_dOp",
"_____no_output_____"
],
[
"def make_energyloss_curve(hw):\n N = 101\n hwm = hwp_minimum(hw)\n hws = np.linspace(0.0, hw-hwm, N)\n st = total_cross_section(hw)\n sc = np.empty(101)\n for k in range(0, len(hws)):\n hwp = hw - hws[k]\n sc[k] = diff_cross_section_dhwp(hw, hwp)/st\n\n return hws, sc\n\nq_p25, s_p25 = make_energyloss_curve(0.25)\nq_p50, s_p50 = make_energyloss_curve(0.50)\nq_1p0, s_1p0 = make_energyloss_curve(1.00)\n\nfig, ax = plt.subplots(1, 1)\n\nax.plot(q_p25, s_p25, 'r-', lw=2, label='Scattering probability vs energy loss, 0.25MeV')\nax.plot(q_p50, s_p50, 'g-', lw=2, label='Scattering probability vs energy loss, 0.50MeV')\nax.plot(q_1p0, s_1p0, 'b-', lw=2, label='Scattering probability vs energy loss, 1.00MeV')\nplt.title(\"Klein-Nishina\")\nplt.show()",
"_____no_output_____"
],
[
"def make_angular_curve(hw):\n \"\"\"Helper function to make angular probability x,y arrays given incoming photon enenrgy, MeV\"\"\"\n N = 181\n\n theta_d = np.linspace(0.0, 180.0, N) # angles in degrees\n theta_r = theta_d * np.pi / 180.0\n st = total_cross_section(hw)\n so = np.empty(N)\n\n for k in range(0, len(so)):\n so[k] = diff_cross_section_dOp(hw, theta_r[k]) * 2.0*np.pi / st\n\n return theta_d, so\n\na_p25, s_p25 = make_angular_curve(0.25)\na_p50, s_p50 = make_angular_curve(0.50)\na_1p0, s_1p0 = make_angular_curve(1.00)\n\nfig, ax = plt.subplots(1, 1)\n\nax.plot(a_p25, s_p25, 'r-', lw=2, label='Scattering angular probability, 0.25MeV')\nax.plot(a_p50, s_p50, 'g-', lw=2, label='Scattering angular probability, 0.50MeV')\nax.plot(a_1p0, s_1p0, 'b-', lw=2, label='Scattering angular probability, 1.00MeV')\nplt.title(\"Klein-Nishina\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Cross-sections",
"_____no_output_____"
],
[
"### Microscopic and Macroscopic cross-sections",
"_____no_output_____"
],
[
"We learned about so-called microscopic cross-sections, which is oneabout one photon scattering on one electron. It is very small, measured in barns which is $10^{-24}$ cm$^2$. In real life photons interacti with material objects measured in grams and kilograms. For that, we need macroscopic cross-section. For macroscopic cross-section, we have to multiply microscopic one by $N$, which is density of scatterers, as well as atomic number $Z$ (remember, we are scattering on electrons)\n\nFor Compton scattering in water, we could write\n\n$$\n\\Sigma = \\rho Z \\frac{N_A}{M} \\sigma\n$$\n\nwhere $N_A$ is Avogadro constant, $M$ is molar mass (total mass of $N_A$ molecules) and $\\rho$ is the density. Lets check the units. Suppose density is in $g/cm^3$, Avogadro Constant is in mol$^{-1}$ and molar mass is in $g/mol$. Therefore, macroscopic cross-section is measured in $cm^{-1}$ and gives the base for linear attenuation coefficient\n\n$$\nP(x) = \\exp{(-\\Sigma x)}\n$$\n\nwhere one can see that value under exponent is dimensionless.",
"_____no_output_____"
],
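[
"As an illustration of these formulas (an added sketch using round numbers for water: $M \\approx 18$ g/mol, $Z = 10$ electrons per molecule, $\\rho \\approx 1$ g/cm$^3$, and the Klein-Nishina cross-section per electron from the `total_cross_section` function defined earlier):\n\n```python\nsigma_fm2 = total_cross_section(1.0) # Compton cross-section per electron at 1 MeV, fm^2\nsigma_cm2 = sigma_fm2 * 1.0e-26 # 1 fm^2 = 1e-26 cm^2\nN_A = 6.022e23 # Avogadro constant, 1/mol\nM, Z, rho = 18.0, 10, 1.0 # molar mass (g/mol), electrons per molecule, density (g/cm^3)\nSigma = rho * Z * (N_A / M) * sigma_cm2 # linear attenuation coefficient, 1/cm\nprint('Sigma at 1 MeV =', Sigma, '1/cm')\nprint('P(no interaction over 10 cm) =', np.exp(-Sigma*10.0))\n```",
"_____no_output_____"
],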
[
"### NIST cross-sections database",
"_____no_output_____"
],
[
"National Institute of Standards and Technologies provides a lot of precomputed corss-sections for elements and mixtures, for energies from 1keV up to 10GeV. One can find cross-sections from [XCOM place](https://www.nist.gov/pml/xcom-photon-cross-sections-database). One can pick elements, materials, mixtures and save them into local file. What is worth mentioning is that XCOM provides data as \n\n$$\n\\Sigma = Z \\frac{N_A}{M}\\sigma\n$$\n\nwhere density is specifically excluded. It is called mass attenuation coefficient. It is measured in $cm^2/g$. Using such units has certaint advantages, e.g. if you compute photon transport in media where density could change (say, inside nuclear reator where due to heating density of water goes from $\\sim$ 1$\\;g/cm^3$ to about 0.75$\\;g/cm^3$) allows to keep intercation physics separate from density. Multiplying mass attenuation coefficient by density gives you back linear attenuation coefficient.",
"_____no_output_____"
],
[
"### Cross-sections for Water",
"_____no_output_____"
],
[
"Lets read water cross-sections and plot them",
"_____no_output_____"
]
],
[
[
"lines = None\nwith open('H2o.data', \"r\") as f: \n lines = f.readlines()\n\nheader_len = 3\n\nlines = lines[header_len:41] # remove header, and limit energy to 10MeV\n\nenergy = np.empty(len(lines)) # energy scale\ncoh_xs = np.empty(len(lines)) # coherent cross-section\ninc_xs = np.empty(len(lines)) # incoherent cross-section\npht_xs = np.empty(len(lines)) # photo-effect cross-section\nnpp_xs = np.empty(len(lines)) # nuclear pair production\nepp_xs = np.empty(len(lines)) # electron pair production\n\nfor k in range(0, len(lines)):\n s = lines[k].split('|')\n energy[k] = float(s[0])\n coh_xs[k] = float(s[1])\n inc_xs[k] = float(s[2])\n pht_xs[k] = float(s[3])\n npp_xs[k] = float(s[4])\n epp_xs[k] = float(s[5])",
"_____no_output_____"
]
],
[
[
"Now we will plot together photoeffect, coherent, incoherent and total mass attenuation cross-sections.",
"_____no_output_____"
]
],
[
[
"plt.xscale(\"log\")\nplt.yscale(\"log\")\nplt.plot(energy, coh_xs, 'g-', linewidth=2)\nplt.plot(energy, inc_xs, 'r-', linewidth=2)\nplt.plot(energy, pht_xs, 'b-', linewidth=2)\nplt.plot(energy, pht_xs+coh_xs+inc_xs, 'o-', linewidth=2) # total cross-section\n#plt.plot(energy, npp_xs, 'c-', linewidth=2)\n#plt.plot(energy, epp_xs, 'm-', linewidth=2)\nplt.show()",
"_____no_output_____"
]
],
[
[
"One can see that for all practical reasons considering only photo-effect and compton (aka incoherent) scatterin is good enough approximation,",
"_____no_output_____"
],
[
"## Compton Scattering Sampling",
"_____no_output_____"
],
[
"W will use Khan's method to sample Compton scattering.",
"_____no_output_____"
]
],
[
[
"def KhanComptonSampling(hw, rng):\n \"\"\"Sample scattering energy after Compton interaction\"\"\"\n α = 2.0*hw/MeC2 # double relative incoming photon energy\n t = (α + 1.0)/(α + 9.0)\n\n x = 0.0\n while True:\n y = 1.0 + α*rng.random()\n if rng.random() < t:\n if rng.random() < 4.0*(1.0 - 1.0/y)/y:\n x = y\n break\n else:\n y = (1.0 + α) / y\n c = 2.0*y/α + 1.0\n if rng.random() < 0.5*(c*c + 1.0/y):\n x = y\n break\n return hw/x # scattered photon energy back",
"_____no_output_____"
]
],
[
[
"Let's test Compton sampling and compare it with microscopic differential cross-section",
"_____no_output_____"
]
],
[
[
"hw = 1.0 # MeV\nhwm = hwp_minimum(hw)\n\nNt = 1000000\nhwp = np.empty(Nt)\nrng = np.random.default_rng(312345)\n\nfor k in range(0, len(hwp)):\n hwp[k] = KhanComptonSampling(hw, rng)",
"_____no_output_____"
]
],
[
[
"Ok, lets check first the minimum energy in sampled values, should be within allowed range.",
"_____no_output_____"
]
],
[
[
"hwm_sampled = np.min(hwp)\nprint(f\"Minimum allowed scattered energy: {hwm} vs actual sampled minimum {hwm_sampled}\")\nif hwm_sampled < hwm:\n print(\"We have a problem with kinematics!\")",
"Minimum allowed scattered energy: 0.20350457984866588 vs actual sampled minimum 0.20350469296707585\n"
],
[
"count, bins, ignored = plt.hist(hwp, 20, density=True)\nplt.show()",
"_____no_output_____"
],
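[
"# Added check: overlay the analytic Klein-Nishina spectrum (normalized by the total cross-section)\n# on the sampled histogram, reusing the functions defined earlier in this notebook.\ncount, bins, ignored = plt.hist(hwp, 20, density=True, label='sampled')\nst = total_cross_section(hw)\nhwp_grid = np.linspace(hwm, hw, 200)\npdf = np.array([diff_cross_section_dhwp(hw, e) for e in hwp_grid]) / st\nplt.plot(hwp_grid, pdf, 'r-', lw=2, label='Klein-Nishina')\nplt.legend()\nplt.show()",
"_____no_output_____"
],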
[
"# plotting angular distribution\ncosθ = cosθ_from_hwp(hw, hwp)\ncount, bins, ignored = plt.hist(cosθ, 20, density=True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Monte Carlo photon transport code",
"_____no_output_____"
]
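,
[
"One detail used below without derivation: the distance to the next interaction follows an exponential distribution with mean free path $1/\\Sigma$, so it can be sampled by inverting its CDF, $s = -\\ln(1-\\xi)/\\Sigma$ with $\\xi$ uniform on $[0,1)$. This is what the `- np.log(1.0 - rng.random())` line in the transport loop does. A small illustrative check (the value of $\\Sigma$ here is just an example number):\n\n```python\nrng_demo = np.random.default_rng(1234)\nSigma = 0.07 # example linear attenuation coefficient, 1/cm\ns = -np.log(1.0 - rng_demo.random(100000)) / Sigma\nprint('sampled mean free path:', s.mean(), 'cm, expected about', 1.0/Sigma, 'cm')\n```",
"_____no_output_____"
]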
],
[
[
"# several helper functions and constants\nX = 0\nY = 1\nZ = 2\n\ndef isotropic_source(rng):\n cosθ = 2.0*rng.random() - 1.0 # uniform cosine of the azimuth angle\n sinθ = np.sqrt((1.0 - cosθ)*(1.0 + cosθ))\n φ = 2.0*np.pi*rng.random() # uniform polar angle\n return np.array((sinθ*np.cos(φ), sinθ*np.sin(φ), cosθ))\n\ndef find_energy_index(scale, hw):\n return np.searchsorted(scale, hw, side='right') - 1\n\ndef calculate_xs(xs, scale, hw, idx):\n q = (hw - scale[idx])/(scale[idx+1] - scale[idx])\n return xs[idx]*(1.0 - q) + xs[idx+1]*q\n\ndef transform_cosines(wx, wy, wz, cosθ, φ):\n \"\"\"https://www.scratchapixel.com/lessons/mathematics-physics-for-computer-graphics/monte-carlo-methods-in-practice/monte-carlo-simulation\"\"\"\n # print(wx, wy, wz, cosθ)\n sinθ = np.sqrt((1.0 - cosθ)*(1.0 + cosθ))\n cosφ = np.cos(φ)\n sinφ = np.sin(φ)\n \n if wz == 1.0:\n return np.array((sinθ * cosφ, sinθ * sinφ, cosθ))\n \n if wz == -1.0:\n return np.array((sinθ * cosφ, -sinθ * sinφ, -cosθ))\n \n denom = np.sqrt((1.0 - wz)*(1.0 + wz)) # denominator\n wzcosφ = wz * cosφ\n \n return np.array((wx * cosθ + sinθ * (wx * wzcosφ - wy * sinφ)/denom,\n wy * cosθ + sinθ * (wy * wzcosφ + wx * sinφ)/denom,\n wz * cosθ - denom * sinθ * cosφ)) ",
"_____no_output_____"
],
[
"def is_inside(pos):\n \"\"\"Check is photon is inside world box\"\"\"\n if pos[X] > 20.0:\n return False\n if pos[X] < -20.0:\n return False\n if pos[Y] > 20.0:\n return False\n if pos[Y] < -20.0:\n return False\n if pos[Z] > 20.0:\n return False\n if pos[Z] < -20.0:\n return False\n return True\n\n# main MC loop\nrng = np.random.default_rng(312345) # set RNG seed\n \nNt = 100 # number of trajectories\n\nhw_src = 1.0 # initial energy, MeV\nhw_max = energy[-1] # maximum energy in xs tables\n\npos_src = (0.0, 0.0, 0.0) # initial position\ndir_src = (0.0, 0.0, 1.0) # initial direction\n\ndensity = 1.0 # g/cm^3\n\nfor k in range(0, Nt): # loop over all trajectories\n \n print(f\"Particle # {k}\")\n \n # set energy, position and direction from source terms\n hw = hw_src\n gpos = np.array(pos_src, dtype=np.float64)\n gdir = np.array(dir_src, dtype=np.float64) # could try isotropic source here\n \n if hw < 0.0:\n raise ValueError(f\"Energy is negative: {hw}\")\n if hw > hw_max:\n raise ValueError(f\"Energy is too large: {hw}\") \n \n while True: # infinite loop over single trajectory till photon is absorbed or out of the box or out of energy range\n \n idx = find_energy_index(energy, hw)\n if idx < 0: # photon fell below 1keV energy threshold, kill it\n break\n \n phxs = calculate_xs(pht_xs, energy, hw, idx) # photo-effect cross-section\n inxs = calculate_xs(inc_xs, energy, hw, idx) # incoherent, aka Compton cross-section\n toxs = (phxs + inxs) # total cross-section\n \n pathlength = - np.log(1.0 - rng.random()) # exponential distribution\n pathlength /= (toxs*density) # path length now in cm, because we move from mass attenuation toxs to linear attenuation\n \n #gpos = (gpos[X] + gdir[X]*pathlength, gpos[Y] + gdir[Y]*pathlength, gpos[Z] + gdir[Z]*pathlength) # move to the next interaction point\n gpos = gpos + np.multiply(gdir, pathlength)\n \n if not is_inside(gpos): # check if we are in volume of interest\n break # we'out, done with trajectory\n \n p_abs = phxs/toxs # probability of absorbtion\n if rng.random() < p_abs: # sample absorbtion\n break # photoeffect, photon is gone\n \n # compton scattering\n hwp = KhanComptonSampling(hw, rng)\n cosθ = cosθ_from_hwp(hw, hwp)\n\n φ = 2.0*np.pi*rng.random() # uniform azimuth angle\n gdir = transform_cosines(*gdir, cosθ, φ)\n gdir = gdir/np.linalg.norm(gdir) # normalization\n \n hw = hwp\n # here we have new energy, new position and new direction",
"Particle # 0\nParticle # 1\nParticle # 2\nParticle # 3\nParticle # 4\nParticle # 5\nParticle # 6\nParticle # 7\nParticle # 8\nParticle # 9\nParticle # 10\nParticle # 11\nParticle # 12\nParticle # 13\nParticle # 14\nParticle # 15\nParticle # 16\nParticle # 17\nParticle # 18\nParticle # 19\nParticle # 20\nParticle # 21\nParticle # 22\nParticle # 23\nParticle # 24\nParticle # 25\nParticle # 26\nParticle # 27\nParticle # 28\nParticle # 29\nParticle # 30\nParticle # 31\nParticle # 32\nParticle # 33\nParticle # 34\nParticle # 35\nParticle # 36\nParticle # 37\nParticle # 38\nParticle # 39\nParticle # 40\nParticle # 41\nParticle # 42\nParticle # 43\nParticle # 44\nParticle # 45\nParticle # 46\nParticle # 47\nParticle # 48\nParticle # 49\nParticle # 50\nParticle # 51\nParticle # 52\nParticle # 53\nParticle # 54\nParticle # 55\nParticle # 56\nParticle # 57\nParticle # 58\nParticle # 59\nParticle # 60\nParticle # 61\nParticle # 62\nParticle # 63\nParticle # 64\nParticle # 65\nParticle # 66\nParticle # 67\nParticle # 68\nParticle # 69\nParticle # 70\nParticle # 71\nParticle # 72\nParticle # 73\nParticle # 74\nParticle # 75\nParticle # 76\nParticle # 77\nParticle # 78\nParticle # 79\nParticle # 80\nParticle # 81\nParticle # 82\nParticle # 83\nParticle # 84\nParticle # 85\nParticle # 86\nParticle # 87\nParticle # 88\nParticle # 89\nParticle # 90\nParticle # 91\nParticle # 92\nParticle # 93\nParticle # 94\nParticle # 95\nParticle # 96\nParticle # 97\nParticle # 98\nParticle # 99\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d05634d99c6191bca98f158442758c4c9bfddc9e | 412,880 | ipynb | Jupyter Notebook | random_signals/power_spectral_densities.ipynb | TA1DB/digital-signal-processing-lecture | fc2219d9ab2217ce96c59e6e8be1f1e270bae08d | [
"MIT"
] | 630 | 2016-01-05T17:11:43.000Z | 2022-03-30T07:48:27.000Z | random_signals/power_spectral_densities.ipynb | patel999jay/digital-signal-processing-lecture | eea6f46284a903297452d2c6fc489cb4d26a4a54 | [
"MIT"
] | 12 | 2016-11-07T15:49:55.000Z | 2022-03-10T13:05:50.000Z | random_signals/power_spectral_densities.ipynb | patel999jay/digital-signal-processing-lecture | eea6f46284a903297452d2c6fc489cb4d26a4a54 | [
"MIT"
] | 172 | 2015-12-26T21:05:40.000Z | 2022-03-10T23:13:30.000Z | 67.530258 | 38,018 | 0.615075 | [
[
[
"# Random Signals\n\n*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*",
"_____no_output_____"
],
[
"## Auto-Power Spectral Density\n\nThe (auto-) [power spectral density](https://en.wikipedia.org/wiki/Spectral_density#Power_spectral_density) (PSD) is defined as the Fourier transformation of the [auto-correlation function](correlation_functions.ipynb) (ACF).",
"_____no_output_____"
],
[
"### Definition\n\nFor a continuous-amplitude, real-valued, wide-sense stationary (WSS) random signal $x[k]$ the PSD is given as\n\n\\begin{equation}\n\\Phi_{xx}(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega}) = \\mathcal{F}_* \\{ \\varphi_{xx}[\\kappa] \\},\n\\end{equation}\n\nwhere $\\mathcal{F}_* \\{ \\cdot \\}$ denotes the [discrete-time Fourier transformation](https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform) (DTFT) and $\\varphi_{xx}[\\kappa]$ the ACF of $x[k]$. Note that the DTFT is performed with respect to $\\kappa$. The ACF of a random signal of finite length $N$ can be expressed by way of a linear convolution\n\n\\begin{equation}\n\\varphi_{xx}[\\kappa] = \\frac{1}{N} \\cdot x_N[k] * x_N[-k].\n\\end{equation}\n\nTaking the DTFT of the left- and right-hand side results in\n\n\\begin{equation}\n\\Phi_{xx}(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega}) = \\frac{1}{N} \\, X_N(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega})\\, X_N(\\mathrm{e}^{-\\,\\mathrm{j}\\,\\Omega}) = \n\\frac{1}{N} \\, | X_N(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega}) |^2.\n\\end{equation}\n\nThe last equality results from the definition of the magnitude and the symmetry of the DTFT for real-valued signals. The spectrum $X_N(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega})$ quantifies the amplitude density of the signal $x_N[k]$. It can be concluded from above result that the PSD quantifies the squared amplitude or power density of a random signal. This explains the term power spectral density.",
"_____no_output_____"
],
[
"### Properties\n\nThe properties of the PSD can be deduced from the properties of the ACF and the DTFT as:\n\n1. From the link between the PSD $\\Phi_{xx}(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega})$ and the spectrum $X_N(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega})$ derived above it can be concluded that the PSD is real valued\n\n $$\\Phi_{xx}(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega}) \\in \\mathbb{R}$$\n\n2. From the even symmetry $\\varphi_{xx}[\\kappa] = \\varphi_{xx}[-\\kappa]$ of the ACF it follows that\n\n $$ \\Phi_{xx}(\\mathrm{e}^{\\,\\mathrm{j} \\, \\Omega}) = \\Phi_{xx}(\\mathrm{e}^{\\,-\\mathrm{j}\\, \\Omega}) $$\n\n3. The PSD of an uncorrelated random signal is given as\n\n $$ \\Phi_{xx}(\\mathrm{e}^{\\,\\mathrm{j} \\, \\Omega}) = \\sigma_x^2 + \\mu_x^2 \\cdot {\\bot \\!\\! \\bot \\!\\! \\bot}\\left( \\frac{\\Omega}{2 \\pi} \\right) ,$$\n \n which can be deduced from the [ACF of an uncorrelated signal](correlation_functions.ipynb#Properties).\n\n4. The quadratic mean of a random signal is given as\n\n $$ E\\{ x[k]^2 \\} = \\varphi_{xx}[\\kappa=0] = \\frac{1}{2\\pi} \\int\\limits_{-\\pi}^{\\pi} \\Phi_{xx}(\\mathrm{e}^{\\,\\mathrm{j}\\, \\Omega}) \\,\\mathrm{d} \\Omega $$\n\n The last relation can be found by expressing the ACF via the inverse DTFT of $\\Phi_{xx}$ and considering that $\\mathrm{e}^{\\mathrm{j} \\Omega \\kappa} = 1$ when evaluating the integral for $\\kappa=0$.",
"_____no_output_____"
],
[
"### Example - Power Spectral Density of a Speech Signal\n\nIn this example the PSD $\\Phi_{xx}(\\mathrm{e}^{\\,\\mathrm{j} \\,\\Omega})$ of a speech signal of length $N$ is estimated by applying a discrete Fourier transformation (DFT) to its ACF. For a better interpretation of the PSD, the frequency axis $f = \\frac{\\Omega}{2 \\pi} \\cdot f_s$ has been chosen for illustration, where $f_s$ denotes the sampling frequency of the signal. The speech signal constitutes a recording of the vowel 'o' spoken from a German male, loaded into variable `x`.\n\nIn Python the ACF is stored in a vector with indices $0, 1, \\dots, 2N - 2$ corresponding to the lags $\\kappa = (0, 1, \\dots, 2N - 2)^\\mathrm{T} - (N-1)$. When computing the discrete Fourier transform (DFT) of the ACF numerically by the fast Fourier transform (FFT) one has to take this shift into account. For instance, by multiplying the DFT $\\Phi_{xx}[\\mu]$ by $\\mathrm{e}^{\\mathrm{j} \\mu \\frac{2 \\pi}{2N - 1} (N-1)}$.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.io import wavfile\n\n# read audio file\nfs, x = wavfile.read('../data/vocal_o_8k.wav')\nx = np.asarray(x, dtype=float)\nN = len(x)\n\n# compute ACF\nacf = 1/N * np.correlate(x, x, mode='full')\n# compute PSD\npsd = np.fft.fft(acf)\npsd = psd * np.exp(1j*np.arange(2*N-1)*2*np.pi*(N-1)/(2*N-1))\nf = np.fft.fftfreq(2*N-1, d=1/fs)\n\n# plot PSD\nplt.figure(figsize=(10, 4))\nplt.plot(f, np.real(psd))\nplt.title('Estimated power spectral density')\nplt.ylabel(r'$\\hat{\\Phi}_{xx}(e^{j \\Omega})$')\nplt.xlabel(r'$f / Hz$')\nplt.axis([0, 500, 0, 1.1*max(np.abs(psd))])\nplt.grid()",
"_____no_output_____"
]
],
[
[
"**Exercise**\n\n* What does the PSD tell you about the average spectral contents of a speech signal?\n\nSolution: The speech signal exhibits a harmonic structure with the dominant fundamental frequency $f_0 \\approx 100$ Hz and a number of harmonics $f_n \\approx n \\cdot f_0$ for $n > 0$. This due to the fact that vowels generate random signals which are in good approximation periodic. To generate vowels, the sound produced by the periodically vibrating vowel folds is filtered by the resonance volumes and articulators above the voice box. The spectrum of periodic signals is a line spectrum.",
"_____no_output_____"
],
[
"## Cross-Power Spectral Density\n\nThe cross-power spectral density is defined as the Fourier transformation of the [cross-correlation function](correlation_functions.ipynb#Cross-Correlation-Function) (CCF).",
"_____no_output_____"
],
[
"### Definition\n\nFor two continuous-amplitude, real-valued, wide-sense stationary (WSS) random signals $x[k]$ and $y[k]$, the cross-power spectral density is given as\n\n\\begin{equation}\n\\Phi_{xy}(\\mathrm{e}^{\\,\\mathrm{j} \\, \\Omega}) = \\mathcal{F}_* \\{ \\varphi_{xy}[\\kappa] \\},\n\\end{equation}\n\nwhere $\\varphi_{xy}[\\kappa]$ denotes the CCF of $x[k]$ and $y[k]$. Note again, that the DTFT is performed with respect to $\\kappa$. The CCF of two random signals of finite length $N$ and $M$ can be expressed by way of a linear convolution\n\n\\begin{equation}\n\\varphi_{xy}[\\kappa] = \\frac{1}{N} \\cdot x_N[k] * y_M[-k].\n\\end{equation}\n\nNote the chosen $\\frac{1}{N}$-averaging convention corresponds to the length of signal $x$. If $N \\neq M$, care should be taken on the interpretation of this normalization. In case of $N=M$ the $\\frac{1}{N}$-averaging yields a [biased estimator](https://en.wikipedia.org/wiki/Bias_of_an_estimator) of the CCF, which consistently should be denoted with $\\hat{\\varphi}_{xy,\\mathrm{biased}}[\\kappa]$.\n\n\nTaking the DTFT of the left- and right-hand side from above cross-correlation results in\n\n\\begin{equation}\n\\Phi_{xy}(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega}) = \\frac{1}{N} \\, X_N(\\mathrm{e}^{\\,\\mathrm{j}\\,\\Omega})\\, Y_M(\\mathrm{e}^{-\\,\\mathrm{j}\\,\\Omega}).\n\\end{equation}",
"_____no_output_____"
],
[
"### Properties\n\n1. The symmetries of $\\Phi_{xy}(\\mathrm{e}^{\\,\\mathrm{j}\\, \\Omega})$ can be derived from the symmetries of the CCF and the DTFT as\n\n $$ \\underbrace {\\Phi_{xy}(\\mathrm{e}^{\\,\\mathrm{j}\\, \\Omega}) = \\Phi_{xy}^*(\\mathrm{e}^{-\\,\\mathrm{j}\\, \\Omega})}_{\\varphi_{xy}[\\kappa] \\in \\mathbb{R}} = \n\\underbrace {\\Phi_{yx}(\\mathrm{e}^{\\,- \\mathrm{j}\\, \\Omega}) = \\Phi_{yx}^*(\\mathrm{e}^{\\,\\mathrm{j}\\, \\Omega})}_{\\varphi_{yx}[-\\kappa] \\in \\mathbb{R}},$$\n\n from which $|\\Phi_{xy}(\\mathrm{e}^{\\,\\mathrm{j}\\, \\Omega})| = |\\Phi_{yx}(\\mathrm{e}^{\\,\\mathrm{j}\\, \\Omega})|$ can be concluded.\n\n2. The cross PSD of two uncorrelated random signals is given as\n\n $$ \\Phi_{xy}(\\mathrm{e}^{\\,\\mathrm{j} \\, \\Omega}) = \\mu_x^2 \\mu_y^2 \\cdot {\\bot \\!\\! \\bot \\!\\! \\bot}\\left( \\frac{\\Omega}{2 \\pi} \\right) $$\n \n which can be deduced from the CCF of an uncorrelated signal.",
"_____no_output_____"
],
[
"### Example - Cross-Power Spectral Density\n\nThe following example estimates and plots the cross PSD $\\Phi_{xy}(\\mathrm{e}^{\\,\\mathrm{j}\\, \\Omega})$ of two random signals $x_N[k]$ and $y_M[k]$ of finite lengths $N = 64$ and $M = 512$.",
"_____no_output_____"
]
],
[
[
"N = 64 # length of x\nM = 512 # length of y\n\n# generate two uncorrelated random signals\nnp.random.seed(1)\nx = 2 + np.random.normal(size=N)\ny = 3 + np.random.normal(size=M)\nN = len(x)\nM = len(y)\n\n# compute cross PSD via CCF\nacf = 1/N * np.correlate(x, y, mode='full')\npsd = np.fft.fft(acf)\npsd = psd * np.exp(1j*np.arange(N+M-1)*2*np.pi*(M-1)/(2*M-1))\npsd = np.fft.fftshift(psd)\nOm = 2*np.pi * np.arange(0, N+M-1) / (N+M-1)\nOm = Om - np.pi\n\n# plot results\nplt.figure(figsize=(10, 4))\nplt.stem(Om, np.abs(psd), basefmt='C0:', use_line_collection=True)\nplt.title('Biased estimator of cross power spectral density')\nplt.ylabel(r'$|\\hat{\\Phi}_{xy}(e^{j \\Omega})|$')\nplt.xlabel(r'$\\Omega$')\nplt.grid()",
"_____no_output_____"
]
],
[
[
"**Exercise**\n\n* What does the cross PSD $\\Phi_{xy}(\\mathrm{e}^{\\,\\mathrm{j} \\, \\Omega})$ tell you about the statistical properties of the two random signals?\n\nSolution: The cross PSD $\\Phi_{xy}(\\mathrm{e}^{\\,\\mathrm{j} \\, \\Omega})$ is essential only non-zero for $\\Omega=0$. It hence can be concluded that the two random signals are not mean-free and uncorrelated to each other.",
"_____no_output_____"
],
[
"**Copyright**\n\nThis notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d056436cd2b1754ce41a6f823f3c813a6f08687a | 40,463 | ipynb | Jupyter Notebook | docs/tutorial/submissions/ipynbs/demo-fails2Hidden.ipynb | chrispyles/otter-grader | bca8061450412c9d8e4a53f1641711fb522b6b33 | [
"BSD-3-Clause"
] | 76 | 2020-01-24T07:18:34.000Z | 2022-03-16T01:16:28.000Z | docs/tutorial/submissions/ipynbs/demo-fails2Hidden.ipynb | chrispyles/otter-grader | bca8061450412c9d8e4a53f1641711fb522b6b33 | [
"BSD-3-Clause"
] | 413 | 2019-10-07T03:49:51.000Z | 2022-03-29T18:23:05.000Z | docs/tutorial/submissions/ipynbs/demo-fails2Hidden.ipynb | chrispyles/otter-grader | bca8061450412c9d8e4a53f1641711fb522b6b33 | [
"BSD-3-Clause"
] | 41 | 2020-01-24T21:45:43.000Z | 2022-03-14T16:11:55.000Z | 44.318729 | 6,508 | 0.537182 | [
[
[
"# Otter-Grader Tutorial\n\nThis notebook is part of the Otter-Grader tutorial. For more information about Otter, see our [documentation](https://otter-grader.rtfd.io).",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n%matplotlib inline\nimport otter\ngrader = otter.Notebook()",
"_____no_output_____"
]
],
[
[
"**Question 1:** Write a function `square` that returns the square of its argument.",
"_____no_output_____"
]
],
[
[
"def square(x):\n return x**2",
"_____no_output_____"
],
[
"grader.check(\"q1\")",
"_____no_output_____"
]
],
[
[
"**Question 2:** Write an infinite generator of the Fibonacci sequence `fibferator` that is *not* recursive.",
"_____no_output_____"
]
],
[
[
"def fiberator():\n yield 0\n yield 1\n while True:\n yield 1",
"_____no_output_____"
],
[
"grader.check(\"q2\")",
"_____no_output_____"
]
],
[
[
"**Question 3:** Create a DataFrame mirroring the table below and assign this to `data`. Then group by the `flavor` column and find the mean price for each flavor; assign this **series** to `price_by_flavor`.\n\n| flavor | scoops | price |\n|-----|-----|-----|\n| chocolate | 1 | 2 |\n| vanilla | 1 | 1.5 |\n| chocolate | 2 | 3 |\n| strawberry | 1 | 2 |\n| strawberry | 3 | 4 |\n| vanilla | 2 | 2 |\n| mint | 1 | 4 |\n| mint | 2 | 5 |\n| chocolate | 3 | 5 |",
"_____no_output_____"
]
],
[
[
"data = pd.DataFrame({\n \"flavor\": [\"chocolate\", \"vanilla\", \"chocolate\", \"strawberry\", \"strawberry\", \"vanilla\", \"mint\", \n \"mint\", \"chocolate\"],\n \"scoops\": [1, 1, 2, 1, 3, 2, 1, 2, 3],\n \"price\": [2, 1.5, 3, 2, 4, 2, 4, 5, 5]\n})\nprice_by_flavor = data.groupby(\"flavor\").mean()[\"price\"]\nprice_by_flavor",
"_____no_output_____"
],
[
"grader.check(\"q3\")",
"_____no_output_____"
]
],
[
[
"<!-- BEGIN QUESTION -->\n\n**Question 1.4:** Create a barplot of `price_by_flavor`.",
"_____no_output_____"
]
],
[
[
"price_by_flavor.plot.bar()",
"_____no_output_____"
]
],
[
[
"<!-- END QUESTION -->",
"_____no_output_____"
],
[
"<!-- BEGIN QUESTION -->\n\n**Question 1.5:** What do you notice about the bar plot?",
"_____no_output_____"
],
[
"_Type your answer here, replacing this text._",
"_____no_output_____"
],
[
"<!-- END QUESTION -->",
"_____no_output_____"
],
[
"The cell below allows you run all checks again.",
"_____no_output_____"
]
],
[
[
"grader.check_all()",
"_____no_output_____"
],
[
"grader.export()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d05652464bd228a7eeaedb0c899adc7b40e040eb | 18,295 | ipynb | Jupyter Notebook | Useful_Codes/find_centroid.ipynb | bhishanpdl/Research | 7868d6b01cb58dd295971a62bce8178dd673ed8c | [
"MIT"
] | null | null | null | Useful_Codes/find_centroid.ipynb | bhishanpdl/Research | 7868d6b01cb58dd295971a62bce8178dd673ed8c | [
"MIT"
] | null | null | null | Useful_Codes/find_centroid.ipynb | bhishanpdl/Research | 7868d6b01cb58dd295971a62bce8178dd673ed8c | [
"MIT"
] | null | null | null | 28.408385 | 2,518 | 0.593058 | [
[
[
"# Table of Contents\n <p>",
"_____no_output_____"
]
],
[
[
"#!python\n\"\"\"\nFind the brightest pixel coordinate of a image.\n\n@author: Bhishan Poudel\n\n@date: Oct 27, 2017\n\n@email: [email protected]\n\n\"\"\"\n# Imports\nimport time\nimport numpy as np\nfrom astropy.io import fits\nimport subprocess\nfrom scipy.ndimage import measurements\n\n\n\ndef brightest_coord():\n with open('centroids_f8.txt','w') as fo:\n for i in range(201):\n pre = '/Users/poudel/Research/a01_data/original_data/HST_ACS_WFC_f814w/'\n infile = '{}/sect23_f814w_gal{}.fits'.format(pre,i)\n dat = fits.getdata(infile)\n x,y = np.unravel_index(np.argmax(dat), dat.shape)\n x,y = int(y+1) , int(x+1)\n print(\"{} {}\".format(x, y), file=fo)\n \ndef find_centroid():\n with open('centroids_f8_scipy.txt','w') as fo:\n for i in range(201):\n pre = '/Users/poudel/Research/a01_data/original_data/HST_ACS_WFC_f814w/'\n infile = '{}/sect23_f814w_gal{}.fits'.format(pre,i)\n dat = fits.getdata(infile)\n x,y = measurements.center_of_mass(dat)\n x,y = int(y+1) , int(x+1)\n print(\"{} {}\".format(x, y), file=fo)\n\n \n\ndef main():\n \"\"\"Run main function.\"\"\"\n \n# bright_coord()\n# find_centroid()\n \n # # checking\n # i = 0\n # pre = '/Users/poudel/Research/a01_data/original_data/HST_ACS_WFC_f814w/'\n # infile = '{}/sect23_f814w_gal{}.fits'.format(pre,i)\n # ds9 = '/Applications/ds9.app/Contents/MacOS/ds9'\n # subprocess.call('{} {}'.format(ds9, infile), shell=True)\n # when zooming we can see brightest pixel is at 296, 307 image coord.\n \n \nif __name__ == \"__main__\":\n import time, os\n \n # Beginning time\n program_begin_time = time.time()\n begin_ctime = time.ctime()\n \n # Run the main program\n main()\n \n # Print the time taken\n program_end_time = time.time()\n end_ctime = time.ctime()\n seconds = program_end_time - program_begin_time\n m, s = divmod(seconds, 60)\n h, m = divmod(m, 60)\n d, h = divmod(h, 24)\n print(\"\\n\\nBegin time: \", begin_ctime)\n print(\"End time: \", end_ctime, \"\\n\")\n print(\"Time taken: {0: .0f} days, {1: .0f} hours, \\\n {2: .0f} minutes, {3: f} seconds.\".format(d, h, m, s))\n print(\"\\n\")\n \n",
"\n\nBegin time: Thu May 23 10:59:20 2019\nEnd time: Thu May 23 10:59:20 2019 \n\nTime taken: 0 days, 0 hours, 0 minutes, 0.000011 seconds.\n\n\n"
],
[
"!head -n 5 centroids_f8.txt",
"296 307\r\n313 306\r\n302 312\r\n310 304\r\n303 302\r\n"
],
[
"!head -n 5 centroids_f8_scipy.txt",
"295 306\r\n312 306\r\n301 311\r\n309 303\r\n304 302\r\n"
],
[
"def find_max_coord(dat):\n print(\"dat = \\n{}\".format(dat))\n maxpos = np.unravel_index(np.argmax(dat), dat.shape)\n print(\"maxpos = {}\".format(maxpos))",
"_____no_output_____"
],
[
"with open('example_data.txt','w') as fo:\n data = \"\"\"0.1 0.5\n 0.0 0.0\n 4.0 3.0\n 0.0 0.0\n 1.0 1.0\n \"\"\"\n fo.write(data)\n\ndat = np.genfromtxt('example_data.txt')\nfind_max_coord(dat)",
"dat = \n[[ 0.1 0.5]\n [ 0. 0. ]\n [ 4. 3. ]\n [ 0. 0. ]\n [ 1. 1. ]]\nmaxpos = (2, 0)\n"
],
[
"x,y = measurements.center_of_mass(dat)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"plt.imshow(dat) # default is RGB",
"_____no_output_____"
],
[
"plt.imshow(dat,cmap='gray', vmin=int(dat.min()), vmax=int(dat.max()))",
"_____no_output_____"
],
[
"# we can see brightest pixel is x=0 and y = 2\n# or, if we count from 1, x = 1 and y =3",
"_____no_output_____"
],
[
"measurements.center_of_mass(dat)",
"_____no_output_____"
],
[
"x,y = measurements.center_of_mass(dat)\nx,y = int(x), int(y)\nx,y",
"_____no_output_____"
],
[
"dat",
"_____no_output_____"
],
[
"dat[2][0]",
"_____no_output_____"
],
[
"# Numpy index is dat[2][0]\n# but image shows x=0 and y =2.",
"_____no_output_____"
],
[
"x,y = measurements.center_of_mass(dat)\nx,y = int(y), int(x)\n\nx,y",
"_____no_output_____"
],
[
"dat[2][0]",
"_____no_output_____"
],
[
"# Looking at mean",
"_____no_output_____"
],
[
"dat.mean(axis=0)",
"_____no_output_____"
],
[
"np.argmax(dat)",
"_____no_output_____"
],
[
"np.unravel_index(4,dat.shape)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d056544f0b4d03ce1ac366cb9fd7eb62f9f62ad4 | 4,799 | ipynb | Jupyter Notebook | ipynb/Poland.ipynb | oscovida/oscovida.github.io | c74d6da79feda1b5ccce107ad3acd48cf0e74c1c | [
"CC-BY-4.0"
] | 2 | 2020-06-19T09:16:14.000Z | 2021-01-24T17:47:56.000Z | ipynb/Poland.ipynb | oscovida/oscovida.github.io | c74d6da79feda1b5ccce107ad3acd48cf0e74c1c | [
"CC-BY-4.0"
] | 8 | 2020-04-20T16:49:49.000Z | 2021-12-25T16:54:19.000Z | ipynb/Poland.ipynb | oscovida/oscovida.github.io | c74d6da79feda1b5ccce107ad3acd48cf0e74c1c | [
"CC-BY-4.0"
] | 4 | 2020-04-20T13:24:45.000Z | 2021-01-29T11:12:12.000Z | 28.736527 | 160 | 0.509481 | [
[
[
"# Poland\n\n* Homepage of project: https://oscovida.github.io\n* Plots are explained at http://oscovida.github.io/plots.html\n* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Poland.ipynb)",
"_____no_output_____"
]
],
[
[
"import datetime\nimport time\n\nstart = datetime.datetime.now()\nprint(f\"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}\")",
"_____no_output_____"
],
[
"%config InlineBackend.figure_formats = ['svg']\nfrom oscovida import *",
"_____no_output_____"
],
[
"overview(\"Poland\", weeks=5);",
"_____no_output_____"
],
[
"overview(\"Poland\");",
"_____no_output_____"
],
[
"compare_plot(\"Poland\", normalise=True);\n",
"_____no_output_____"
],
[
"# load the data\ncases, deaths = get_country_data(\"Poland\")\n\n# get population of the region for future normalisation:\ninhabitants = population(\"Poland\")\nprint(f'Population of \"Poland\": {inhabitants} people')\n\n# compose into one table\ntable = compose_dataframe_summary(cases, deaths)\n\n# show tables with up to 1000 rows\npd.set_option(\"max_rows\", 1000)\n\n# display the table\ntable",
"_____no_output_____"
]
],
[
[
"# Explore the data in your web browser\n\n- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Poland.ipynb)\n- and wait (~1 to 2 minutes)\n- Then press SHIFT+RETURN to advance code cell to code cell\n- See http://jupyter.org for more details on how to use Jupyter Notebook",
"_____no_output_____"
],
[
"# Acknowledgements:\n\n- Johns Hopkins University provides data for countries\n- Robert Koch Institute provides data for within Germany\n- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)\n- Open source and scientific computing community for the data tools\n- Github for hosting repository and html files\n- Project Jupyter for the Notebook and binder service\n- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))\n\n--------------------",
"_____no_output_____"
]
],
[
[
"print(f\"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and \"\n f\"deaths at {fetch_deaths_last_execution()}.\")",
"_____no_output_____"
],
[
"# to force a fresh download of data, run \"clear_cache()\"",
"_____no_output_____"
],
[
"print(f\"Notebook execution took: {datetime.datetime.now()-start}\")\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d05654cf15a009ff75aae8306ad4d812cdc8a32f | 16,193 | ipynb | Jupyter Notebook | VGG/VGG.ipynb | gowriaddepalli/papers | d65ebead826cae752cb8b83ad40e5daf79c8e30a | [
"MIT"
] | 4 | 2021-01-24T05:39:13.000Z | 2021-04-03T17:00:33.000Z | VGG/VGG.ipynb | gowriaddepalli/papers | d65ebead826cae752cb8b83ad40e5daf79c8e30a | [
"MIT"
] | null | null | null | VGG/VGG.ipynb | gowriaddepalli/papers | d65ebead826cae752cb8b83ad40e5daf79c8e30a | [
"MIT"
] | 3 | 2020-11-20T10:24:56.000Z | 2021-03-29T12:58:00.000Z | 31.875984 | 126 | 0.51337 | [
[
[
"# Implementation of VGG16\n> In this notebook I have implemented VGG16 on CIFAR10 dataset using Pytorch",
"_____no_output_____"
]
],
[
[
"#importing libraries\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torchvision import transforms\nimport torch.optim as optim\nimport tqdm\nimport matplotlib.pyplot as plt\nfrom torchvision.datasets import CIFAR10\nfrom torch.utils.data import random_split\nfrom torch.utils.data.dataloader import DataLoader",
"_____no_output_____"
]
],
[
[
"Load the data and do standard preprocessing steps,such as resizing and converting the images into tensor",
"_____no_output_____"
]
],
[
[
"transform = transforms.Compose([transforms.Resize(224),\n transforms.ToTensor(),\n transforms.Normalize(mean=[0.485,0.456,0.406],\n std=[0.229,0.224,0.225])])\n\ntrain_ds = CIFAR10(root='data/',train = True,download=True,transform = transform)\nval_ds = CIFAR10(root='data/',train = False,download=True,transform = transform)\n\nbatch_size = 128\ntrain_loader = DataLoader(train_ds,batch_size,shuffle=True,num_workers=4,pin_memory=True)\nval_loader = DataLoader(val_ds,batch_size,num_workers=4,pin_memory=True)",
"Files already downloaded and verified\nFiles already downloaded and verified\n"
]
],
[
[
"A custom utility class to print out the accuracy and losses during training and testing",
"_____no_output_____"
]
],
[
[
"def accuracy(outputs,labels):\n _,preds = torch.max(outputs,dim=1)\n return torch.tensor(torch.sum(preds==labels).item()/len(preds))\n \nclass ImageClassificationBase(nn.Module):\n def training_step(self,batch):\n images, labels = batch\n out = self(images)\n loss = F.cross_entropy(out,labels)\n return loss\n \n def validation_step(self,batch):\n images, labels = batch\n out = self(images)\n loss = F.cross_entropy(out,labels)\n acc = accuracy(out,labels)\n return {'val_loss': loss.detach(),'val_acc': acc}\n \n def validation_epoch_end(self,outputs):\n batch_losses = [x['val_loss'] for x in outputs]\n epoch_loss = torch.stack(batch_losses).mean()\n batch_accs = [x['val_acc'] for x in outputs]\n epoch_acc = torch.stack(batch_accs).mean()\n return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}\n \n def epoch_end(self, epoch, result):\n print(\"Epoch [{}], train_loss: {:.4f}, val_loss: {:.4f}, val_acc: {:.4f}\".format(\n epoch, result['train_loss'], result['val_loss'], result['val_acc']))",
"_____no_output_____"
]
],
[
[
"### Creating a network",
"_____no_output_____"
]
],
[
[
"VGG_types = {\n 'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],\n 'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],\n 'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],\n 'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],\n}\n\n\nclass VGG_net(ImageClassificationBase):\n def __init__(self, in_channels=3, num_classes=1000):\n super(VGG_net, self).__init__()\n self.in_channels = in_channels\n self.conv_layers = self.create_conv_layers(VGG_types['VGG16'])\n \n self.fcs = nn.Sequential(\n nn.Linear(512*7*7, 4096),\n nn.ReLU(),\n nn.Dropout(p=0.5),\n nn.Linear(4096, 4096),\n nn.ReLU(),\n nn.Dropout(p=0.5),\n nn.Linear(4096, num_classes)\n )\n \n def forward(self, x):\n x = self.conv_layers(x)\n x = x.reshape(x.shape[0], -1)\n x = self.fcs(x)\n return x\n\n def create_conv_layers(self, architecture):\n layers = []\n in_channels = self.in_channels\n \n for x in architecture:\n if type(x) == int:\n out_channels = x\n \n layers += [nn.Conv2d(in_channels=in_channels,out_channels=out_channels,\n kernel_size=(3,3), stride=(1,1), padding=(1,1)),\n nn.BatchNorm2d(x),\n nn.ReLU()]\n in_channels = x\n elif x == 'M':\n layers += [nn.MaxPool2d(kernel_size=(2,2), stride=(2,2))]\n \n return nn.Sequential(*layers)",
"_____no_output_____"
]
],
[
[
"A custom function to pick a default device",
"_____no_output_____"
]
],
[
[
"def get_default_device():\n \"\"\"Pick GPU if available else CPU\"\"\"\n if torch.cuda.is_available():\n return torch.device('cuda')\n else:\n return torch.device('cpu')",
"_____no_output_____"
],
[
"device = get_default_device()\ndevice",
"_____no_output_____"
],
[
"def to_device(data,device):\n \"\"\"Move tensors to chosen device\"\"\"\n if isinstance(data,(list,tuple)):\n return [to_device(x,device) for x in data]\n return data.to(device,non_blocking=True)",
"_____no_output_____"
],
[
"for images, labels in train_loader:\n print(images.shape)\n images = to_device(images,device)\n print(images.device)\n break",
"torch.Size([128, 3, 224, 224])\ncuda:0\n"
],
[
"class DeviceDataLoader():\n \"\"\"Wrap a DataLoader to move data to a device\"\"\"\n def __init__(self,dl,device):\n self.dl = dl\n self.device = device\n def __iter__(self):\n \"\"\"Yield a batch of data to a dataloader\"\"\"\n for b in self.dl:\n yield to_device(b, self.device)\n def __len__(self):\n \"\"\"Number of batches\"\"\"\n return len(self.dl)",
"_____no_output_____"
],
[
"train_loader = DeviceDataLoader(train_loader,device)\nval_loader = DeviceDataLoader(val_loader,device)\nmodel = VGG_net(in_channels=3,num_classes=10)\nto_device(model,device)",
"_____no_output_____"
]
],
[
[
"### Training the model",
"_____no_output_____"
]
],
[
[
"@torch.no_grad()\ndef evaluate(model, val_loader):\n model.eval()\n outputs = [model.validation_step(batch) for batch in val_loader]\n return model.validation_epoch_end(outputs)\n \ndef fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):\n history = []\n train_losses =[]\n optimizer = opt_func(model.parameters(), lr)\n for epoch in range(epochs):\n # Training Phase \n model.train()\n for batch in train_loader:\n loss = model.training_step(batch)\n train_losses.append(loss)\n loss.backward()\n optimizer.step()\n optimizer.zero_grad()\n # Validation phase\n result = evaluate(model, val_loader)\n result['train_loss'] = torch.stack(train_losses).mean().item()\n model.epoch_end(epoch, result)\n history.append(result)\n return history",
"_____no_output_____"
],
[
"history = [evaluate(model, val_loader)]\nhistory",
"_____no_output_____"
],
[
"#history = fit(2,0.1,model,train_loader,val_loader)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0566442e4f53c5e6ac383304605cd3ef7e4470f | 1,000,955 | ipynb | Jupyter Notebook | Exploration.ipynb | ParthS28/Uber-vs-Lyft | bde8c147c8e2a362ea00a54431e2b35535cbcac2 | [
"Apache-2.0"
] | null | null | null | Exploration.ipynb | ParthS28/Uber-vs-Lyft | bde8c147c8e2a362ea00a54431e2b35535cbcac2 | [
"Apache-2.0"
] | null | null | null | Exploration.ipynb | ParthS28/Uber-vs-Lyft | bde8c147c8e2a362ea00a54431e2b35535cbcac2 | [
"Apache-2.0"
] | null | null | null | 836.918896 | 239,100 | 0.946637 | [
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"df_rides = pd.read_csv('cab_rides.csv')\ndf_weather = pd.read_csv('weather.csv')",
"_____no_output_____"
],
[
"df_rides['date'] = pd.to_datetime(df_rides['time_stamp']/ 1000, unit = 's')\ndf_weather['date'] = pd.to_datetime(df_weather['time_stamp'], unit = 's')",
"_____no_output_____"
],
[
"df_rides.head()",
"_____no_output_____"
],
[
"df_rides['merged_date'] = df_rides['source'].astype('str') + ' - ' + df_rides['date'].dt.strftime('%Y-%m-%d').astype('str') + ' - ' + df_rides['date'].dt.hour.astype('str')\ndf_weather['merged_date'] = df_weather['location'].astype('str') + ' - ' + df_weather['date'].dt.strftime('%Y-%m-%d').astype('str') + ' - ' + df_weather['date'].dt.hour.astype('str')",
"_____no_output_____"
],
[
"df_weather.index = df_weather['merged_date']",
"_____no_output_____"
],
[
"df_joined = df_rides.join(df_weather, on = ['merged_date'], rsuffix ='_w')",
"_____no_output_____"
],
[
"df_joined.head()",
"_____no_output_____"
],
[
"df_joined.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 1268639 entries, 0 to 693070\nData columns (total 22 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 distance 1268639 non-null float64 \n 1 cab_type 1268639 non-null object \n 2 time_stamp 1268639 non-null int64 \n 3 destination 1268639 non-null object \n 4 source 1268639 non-null object \n 5 price 1167730 non-null float64 \n 6 surge_multiplier 1268639 non-null float64 \n 7 id 1268639 non-null object \n 8 product_id 1268639 non-null object \n 9 name 1268639 non-null object \n 10 date 1268639 non-null datetime64[ns]\n 11 merged_date 1268639 non-null object \n 12 temp 1265675 non-null float64 \n 13 location 1265675 non-null object \n 14 clouds 1265675 non-null float64 \n 15 pressure 1265675 non-null float64 \n 16 rain 206947 non-null float64 \n 17 time_stamp_w 1265675 non-null float64 \n 18 humidity 1265675 non-null float64 \n 19 wind 1265675 non-null float64 \n 20 date_w 1265675 non-null datetime64[ns]\n 21 merged_date_w 1265675 non-null object \ndtypes: datetime64[ns](2), float64(10), int64(1), object(9)\nmemory usage: 222.6+ MB\n"
],
[
"id_group = pd.DataFrame(df_joined.groupby('id')['temp','clouds', 'pressure', 'rain', 'humidity', 'wind'].mean())\ndf_rides_weather = df_rides.join(id_group, on = ['id'])",
"<ipython-input-10-f8fb8fbd2d98>:1: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.\n id_group = pd.DataFrame(df_joined.groupby('id')['temp','clouds', 'pressure', 'rain', 'humidity', 'wind'].mean())\n"
],
[
"df_rides_weather.tail()",
"_____no_output_____"
],
[
"# Creating the columns for Month, Hour and Weekdays \ndf_rides_weather['Month'] = df_rides_weather['date'].dt.month\ndf_rides_weather['Hour'] = df_rides_weather['date'].dt.hour\ndf_rides_weather['Day'] = df_rides_weather['date'].dt.strftime('%A')",
"_____no_output_____"
],
[
"uber_day_count = df_rides_weather[df_rides_weather['cab_type'] == 'Uber']['Day'].value_counts()\nuber_day_count = uber_day_count.reindex(index = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday','Saturday','Sunday'])\nlyft_day_count = df_rides_weather[df_rides_weather['cab_type'] == 'Lyft']['Day'].value_counts()\nlyft_day_count = lyft_day_count.reindex(index = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday','Saturday','Sunday'])\n\nfig , ax = plt.subplots(figsize = (12,12))\n\nax.plot(lyft_day_count.index, lyft_day_count, label = 'Lyft')\nax.plot(uber_day_count.index, uber_day_count, label = 'Uber')\n\nax.set(ylabel = 'Number of Rides', xlabel = 'Weekdays')\nax.legend()\nplt.show()",
"_____no_output_____"
],
[
"# The ride distribution in one day \nfig , ax = plt.subplots(figsize= (12,12))\nax.plot(df_rides_weather[df_rides_weather['cab_type'] == 'Lyft'].groupby('Hour').Hour.count().index, df_rides_weather[df_rides_weather['cab_type'] == 'Lyft'].groupby('Hour').Hour.count(), label = 'Lyft')\nax.plot(df_rides_weather[df_rides_weather['cab_type'] == 'Uber'].groupby('Hour').Hour.count().index, df_rides_weather[df_rides_weather['cab_type'] =='Uber'].groupby('Hour').Hour.count(), label = 'Uber')\nax.legend()\nax.set(xlabel = 'Hours', ylabel = 'Number of Rides')\nplt.xticks(range(0,24,1))\nplt.show()",
"_____no_output_____"
],
[
"order = ['Financial District', 'Theatre District', 'Back Bay', 'Haymarket Square', 'Boston University', 'Fenway', 'North End', 'Northeastern University', 'South Station', 'West End', 'Beacon Hill', 'North Station']\n\nprint('green - Lyft\\norange - Uber')\nf = plt.figure(figsize = (40, 25))\nax = f.add_subplot(2,3,1)\nplt.xticks(rotation=45)\nsns.barplot(x='source', y='price', data=df_rides_weather[df_rides_weather['cab_type']=='Lyft'], ax=ax, order = order, color = 'green')\nsns.barplot(x='source', y='price', data=df_rides_weather[df_rides_weather['cab_type']=='Uber'], ax=ax, order = order, color = 'orange')\nax = f.add_subplot(2,3,2)\nplt.xticks(rotation=45)\nsns.barplot(x='destination', y='price', data=df_rides_weather[df_rides_weather['cab_type']=='Lyft'], ax=ax, order = order, color = 'green')\nsns.barplot(x='destination', y='price', data=df_rides_weather[df_rides_weather['cab_type']=='Uber'], ax=ax, order = order, color = 'orange')\nplt.show()",
"green - Lyft\norange - Uber\n"
],
[
"fig , ax = plt.subplots(figsize = (12,12))\nax.plot(df_rides_weather[df_rides_weather['cab_type'] == 'Lyft'].groupby('distance').price.mean().index, df_rides_weather[df_rides_weather['cab_type'] == 'Lyft'].groupby('distance')['price'].mean(), label = 'Lyft')\nax.plot(df_rides_weather[df_rides_weather['cab_type'] == 'Uber'].groupby('distance').price.mean().index, df_rides_weather[df_rides_weather['cab_type'] == 'Uber'].groupby('distance')['price'].mean(), label = 'Uber')\n\nax.set_title('The Average Price by distance', fontsize= 15)\nax.set(xlabel = 'Distance', ylabel = 'Price' )\nax.legend()\nplt.show()",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(2, 1, figsize = (20,5))\nfor i,col in enumerate(df_rides_weather[df_rides_weather['cab_type'] == 'Uber']['name'].unique()):\n ax[0].plot(df_rides_weather[ df_rides_weather['name'] == col].groupby('distance').price.mean().index, df_rides_weather[ df_rides_weather['name'] == col].groupby('distance').price.mean(), label = col)\nax[0].set_title('Uber Average Prices by Distance')\nax[0].set(xlabel = 'Distance in Mile', ylabel = 'Average price in USD')\nax[0].legend()\nfor i,col in enumerate(df_rides_weather[df_rides_weather['cab_type'] == 'Lyft']['name'].unique()):\n ax[1].plot(df_rides_weather[ df_rides_weather['name'] == col].groupby('distance').price.mean().index, df_rides_weather[ df_rides_weather['name'] == col].groupby('distance').price.mean(), label = col)\nax[1].set(xlabel = 'Distance in Mile', ylabel = 'Average price in USD')\nax[1].set_title('Lyft Average Prices by Distance')\nax[1].legend()\nplt.show()",
"_____no_output_____"
],
[
"x = df_rides_weather['surge_multiplier'].value_counts()\nx.plot.bar(x = 'multiplier', y = 'Number of times')",
"_____no_output_____"
],
[
"x = df_rides_weather[df_rides_weather['cab_type'] == 'Uber']['surge_multiplier'].value_counts()\nx.plot.bar(x = 'multipler uber', y = 'Number of rides')",
"_____no_output_____"
],
[
"x = df_rides_weather[df_rides_weather['cab_type'] == 'Lyft']['surge_multiplier'].value_counts()\nx.plot.bar(x = 'multipler lyft', y = 'Number of rides')",
"_____no_output_____"
],
[
"df_rides_weather['price/distance'] = (df_rides_weather['price'] / df_rides_weather['distance'])\n\nhigh_rates = df_rides_weather[df_rides_weather['price/distance'] > 80]\nhigh_rates['cab_type'].value_counts()",
"_____no_output_____"
],
[
"high_rates[high_rates['cab_type'] == 'Uber']['distance'].value_counts()",
"_____no_output_____"
],
[
"order = ['Financial District', 'Theatre District', 'Back Bay', 'Haymarket Square', 'Boston University', 'Fenway', 'North End', 'Northeastern University', 'South Station', 'West End', 'Beacon Hill', 'North Station']\n\nprint('source')\nfig, ax = plt.subplots(1, 2, figsize = (20,10))\ndf_uber = df_rides_weather[df_rides_weather['cab_type'] == 'Uber']\nfor i, col in enumerate(order):\n x = df_uber[df_uber['source'] == col].groupby('distance').price.mean().index\n y = df_uber[df_uber['source'] == col].groupby('distance').price.mean()\n ax[0].plot(x, y, label = col)\n\nax[0].set_title('Uber Average Prices by Distance')\nax[0].set(xlabel = 'Distance in Mile', ylabel = 'Average price in USD')\nax[0].legend()\n\ndf_lyft = df_rides_weather[df_rides_weather['cab_type'] == 'Lyft']\nfor i, col in enumerate(order):\n x = df_lyft[df_lyft['source'] == col].groupby('distance').price.mean().index\n y = df_lyft[df_lyft['source'] == col].groupby('distance').price.mean()\n ax[1].plot(x, y, label = col)\nax[1].set(xlabel = 'Distance in Mile', ylabel = 'Average price in USD')\nax[1].set_title('Lyft Average Prices by Distance')\nax[1].legend()\nplt.show()",
"source\n"
],
[
"order = ['Financial District', 'Theatre District', 'Back Bay', 'Haymarket Square', 'Boston University', 'Fenway', 'North End', 'Northeastern University', 'South Station', 'West End', 'Beacon Hill', 'North Station']\n\nprint('destination')\nfig, ax = plt.subplots(1, 2, figsize = (20,10))\ndf_uber = df_rides_weather[df_rides_weather['cab_type'] == 'Uber']\nfor i, col in enumerate(order):\n x = df_uber[df_uber['destination'] == col].groupby('distance').price.mean().index\n y = df_uber[df_uber['destination'] == col].groupby('distance').price.mean()\n ax[0].plot(x, y, label = col)\n\nax[0].set_title('Uber Average Prices by Distance')\nax[0].set(xlabel = 'Distance in Mile', ylabel = 'Average price in USD')\nax[0].legend()\n\ndf_lyft = df_rides_weather[df_rides_weather['cab_type'] == 'Lyft']\nfor i, col in enumerate(order):\n x = df_lyft[df_lyft['destination'] == col].groupby('distance').price.mean().index\n y = df_lyft[df_lyft['destination'] == col].groupby('distance').price.mean()\n ax[1].plot(x, y, label = col)\n \nax[1].set(xlabel = 'Distance in Mile', ylabel = 'Average price in USD')\nax[1].set_title('Lyft Average Prices by Distance')\nax[1].legend()\nplt.show()",
"destination\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d056667283a6c06fa5ae39f8609126bc66eb84fe | 19,177 | ipynb | Jupyter Notebook | lessons/notebooks/Course-Outline-and-Admin-Issues.ipynb | charlielu05/BMLIP | 70c4c3810e0fdea42d611b6c4aab9003506dc243 | [
"CC-BY-3.0"
] | 1 | 2021-08-07T08:06:06.000Z | 2021-08-07T08:06:06.000Z | lessons/notebooks/Course-Outline-and-Admin-Issues.ipynb | charlielu05/BMLIP | 70c4c3810e0fdea42d611b6c4aab9003506dc243 | [
"CC-BY-3.0"
] | null | null | null | lessons/notebooks/Course-Outline-and-Admin-Issues.ipynb | charlielu05/BMLIP | 70c4c3810e0fdea42d611b6c4aab9003506dc243 | [
"CC-BY-3.0"
] | null | null | null | 41.06424 | 384 | 0.56453 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d05666f82fdf379dc9b73e984baca81f8642308d | 215,951 | ipynb | Jupyter Notebook | code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb | wpsliu123/Sebastian_Raschka-Deep-Learning-Book | fc57a58b46921f057248bd8fd0f258e952a3cddb | [
"MIT"
] | 3 | 2019-02-19T16:42:28.000Z | 2020-10-11T05:16:12.000Z | code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb | wpsliu123/Sebastian_Raschka-Deep-Learning-Book | fc57a58b46921f057248bd8fd0f258e952a3cddb | [
"MIT"
] | null | null | null | code/model_zoo/pytorch_ipynb/convnet-resnet50-celeba-dataparallel.ipynb | wpsliu123/Sebastian_Raschka-Deep-Learning-Book | fc57a58b46921f057248bd8fd0f258e952a3cddb | [
"MIT"
] | 1 | 2021-11-29T12:10:14.000Z | 2021-11-29T12:10:14.000Z | 179.212448 | 126,064 | 0.886673 | [
[
[
"*Accompanying code examples of the book \"Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python\" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).*\n \nOther code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).",
"_____no_output_____"
]
],
[
[
"%load_ext watermark\n%watermark -a 'Sebastian Raschka' -v -p torch",
"Sebastian Raschka \n\nCPython 3.6.6\nIPython 7.1.1\n\ntorch 0.4.1\n"
]
],
[
[
"# Model Zoo -- CNN Gender Classifier (ResNet-50 Architecture, CelebA) with Data Parallelism",
"_____no_output_____"
],
[
"### Network Architecture",
"_____no_output_____"
],
[
"The network in this notebook is an implementation of the ResNet-50 [1] architecture on the CelebA face dataset [2] to train a gender classifier. \n\n\nReferences\n \n- [1] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). ([CVPR Link](https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html))\n\n- [2] Zhang, K., Tan, L., Li, Z., & Qiao, Y. (2016). Gender and smile classification using deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 34-38).\n\n\n\n**Note that the CelebA images are 218 x 178, not 256 x 256. We resize to 128x128**",
"_____no_output_____"
],
[
"The following code implements the residual blocks with skip connections such that the input passed via the shortcut matches the dimensions of the main path's output, which allows the network to learn identity functions. Such a residual block is illustrated below:\n\n\n\n\nThe following code implements the residual blocks with skip connections such that the input passed via the shortcut matches is resized to dimensions of the main path's output. Such a residual block is illustrated below:\n\n",
"_____no_output_____"
],
[
"For a more detailed explanation see the other notebook, [resnet-ex-1.ipynb](resnet-ex-1.ipynb).",
"_____no_output_____"
],
[
"The image below illustrates the ResNet-34 architecture (from the He et al. paper):\n\n\n\nWhile ResNet-34 has 34 layers as shown in the figure above, the 50-layer ResNet variant implemented in this notebook uses \"bottleneck\" approach instead of the basic residual blocks. Figure 5 from the He et al. paper illustrates the difference between a basic residual block (as used in ResNet-34) and the bottleneck block used in ResNet-50:\n\n",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import os\nimport time\n\nimport numpy as np\nimport pandas as pd\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom torch.utils.data import Dataset\nfrom torch.utils.data import DataLoader\n\nfrom torchvision import datasets\nfrom torchvision import transforms\n\nimport matplotlib.pyplot as plt\nfrom PIL import Image",
"_____no_output_____"
]
],
[
[
"## Dataset",
"_____no_output_____"
],
[
"### Downloading the Dataset",
"_____no_output_____"
],
[
"Note that the ~200,000 CelebA face image dataset is relatively large (~1.3 Gb). The download link provided below was provided by the author on the official CelebA website at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html. ",
"_____no_output_____"
],
[
"1) Download and unzip the file `img_align_celeba.zip`, which contains the images in jpeg format.\n\n2) Download the `list_attr_celeba.txt` file, which contains the class labels\n\n3) Download the `list_eval_partition.txt` file, which contains training/validation/test partitioning info",
"_____no_output_____"
],
[
"### Preparing the Dataset",
"_____no_output_____"
]
],
[
[
"df1 = pd.read_csv('list_attr_celeba.txt', sep=\"\\s+\", skiprows=1, usecols=['Male'])\n\n# Make 0 (female) & 1 (male) labels instead of -1 & 1\ndf1.loc[df1['Male'] == -1, 'Male'] = 0\n\ndf1.head()",
"_____no_output_____"
],
[
"df2 = pd.read_csv('list_eval_partition.txt', sep=\"\\s+\", skiprows=0, header=None)\ndf2.columns = ['Filename', 'Partition']\ndf2 = df2.set_index('Filename')\n\ndf2.head()",
"_____no_output_____"
],
[
"df3 = df1.merge(df2, left_index=True, right_index=True)\ndf3.head()",
"_____no_output_____"
],
[
"df3.to_csv('celeba-gender-partitions.csv')\ndf4 = pd.read_csv('celeba-gender-partitions.csv', index_col=0)\ndf4.head()",
"_____no_output_____"
],
[
"df4.loc[df4['Partition'] == 0].to_csv('celeba-gender-train.csv')\ndf4.loc[df4['Partition'] == 1].to_csv('celeba-gender-valid.csv')\ndf4.loc[df4['Partition'] == 2].to_csv('celeba-gender-test.csv')",
"_____no_output_____"
],
[
"img = Image.open('img_align_celeba/000001.jpg')\nprint(np.asarray(img, dtype=np.uint8).shape)\nplt.imshow(img);",
"(218, 178, 3)\n"
]
],
[
[
"### Implementing a Custom DataLoader Class",
"_____no_output_____"
]
],
[
[
"class CelebaDataset(Dataset):\n \"\"\"Custom Dataset for loading CelebA face images\"\"\"\n\n def __init__(self, csv_path, img_dir, transform=None):\n \n df = pd.read_csv(csv_path, index_col=0)\n self.img_dir = img_dir\n self.csv_path = csv_path\n self.img_names = df.index.values\n self.y = df['Male'].values\n self.transform = transform\n\n def __getitem__(self, index):\n img = Image.open(os.path.join(self.img_dir,\n self.img_names[index]))\n \n if self.transform is not None:\n img = self.transform(img)\n \n label = self.y[index]\n return img, label\n\n def __len__(self):\n return self.y.shape[0]",
"_____no_output_____"
],
[
"# Note that transforms.ToTensor()\n# already divides pixels by 255. internally\n\ncustom_transform = transforms.Compose([transforms.CenterCrop((178, 178)),\n transforms.Resize((128, 128)),\n #transforms.Grayscale(), \n #transforms.Lambda(lambda x: x/255.),\n transforms.ToTensor()])\n\ntrain_dataset = CelebaDataset(csv_path='celeba-gender-train.csv',\n img_dir='img_align_celeba/',\n transform=custom_transform)\n\nvalid_dataset = CelebaDataset(csv_path='celeba-gender-valid.csv',\n img_dir='img_align_celeba/',\n transform=custom_transform)\n\ntest_dataset = CelebaDataset(csv_path='celeba-gender-test.csv',\n img_dir='img_align_celeba/',\n transform=custom_transform)\n\nBATCH_SIZE=256*torch.cuda.device_count()\n\n\ntrain_loader = DataLoader(dataset=train_dataset,\n batch_size=BATCH_SIZE,\n shuffle=True,\n num_workers=4)\n\nvalid_loader = DataLoader(dataset=valid_dataset,\n batch_size=BATCH_SIZE,\n shuffle=False,\n num_workers=4)\n\ntest_loader = DataLoader(dataset=test_dataset,\n batch_size=BATCH_SIZE,\n shuffle=False,\n num_workers=4)",
"_____no_output_____"
],
[
"device = torch.device(\"cuda:0\")\ntorch.manual_seed(0)\n\nfor epoch in range(2):\n\n for batch_idx, (x, y) in enumerate(train_loader):\n \n print('Epoch:', epoch+1, end='')\n print(' | Batch index:', batch_idx, end='')\n print(' | Batch size:', y.size()[0])\n \n x = x.to(device)\n y = y.to(device)\n break",
"Epoch: 1 | Batch index: 0 | Batch size: 1024\nEpoch: 2 | Batch index: 0 | Batch size: 1024\n"
]
],
[
[
"## Model",
"_____no_output_____"
]
],
[
[
"##########################\n### SETTINGS\n##########################\n\n# Hyperparameters\nrandom_seed = 1\nlearning_rate = 0.001\nnum_epochs = 5\n\n# Architecture\nnum_features = 128*128\nnum_classes = 2",
"_____no_output_____"
]
],
[
[
"The following code cell that implements the ResNet-34 architecture is a derivative of the code provided at https://pytorch.org/docs/0.4.0/_modules/torchvision/models/resnet.html.",
"_____no_output_____"
]
],
[
[
"##########################\n### MODEL\n##########################\n\n\ndef conv3x3(in_planes, out_planes, stride=1):\n \"\"\"3x3 convolution with padding\"\"\"\n return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,\n padding=1, bias=False)\n\n\n\nclass Bottleneck(nn.Module):\n expansion = 4\n\n def __init__(self, inplanes, planes, stride=1, downsample=None):\n super(Bottleneck, self).__init__()\n self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)\n self.bn1 = nn.BatchNorm2d(planes)\n self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,\n padding=1, bias=False)\n self.bn2 = nn.BatchNorm2d(planes)\n self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)\n self.bn3 = nn.BatchNorm2d(planes * 4)\n self.relu = nn.ReLU(inplace=True)\n self.downsample = downsample\n self.stride = stride\n\n def forward(self, x):\n residual = x\n\n out = self.conv1(x)\n out = self.bn1(out)\n out = self.relu(out)\n\n out = self.conv2(out)\n out = self.bn2(out)\n out = self.relu(out)\n\n out = self.conv3(out)\n out = self.bn3(out)\n\n if self.downsample is not None:\n residual = self.downsample(x)\n\n out += residual\n out = self.relu(out)\n\n return out\n\n\nclass ResNet(nn.Module):\n\n def __init__(self, block, layers, num_classes):\n self.inplanes = 64\n super(ResNet, self).__init__()\n self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,\n bias=False)\n self.bn1 = nn.BatchNorm2d(64)\n self.relu = nn.ReLU(inplace=True)\n self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n self.layer1 = self._make_layer(block, 64, layers[0])\n self.layer2 = self._make_layer(block, 128, layers[1], stride=2)\n self.layer3 = self._make_layer(block, 256, layers[2], stride=2)\n self.layer4 = self._make_layer(block, 512, layers[3], stride=2)\n self.avgpool = nn.AvgPool2d(7, stride=1, padding=2)\n self.fc = nn.Linear(2048 * block.expansion, num_classes)\n\n for m in self.modules():\n if isinstance(m, nn.Conv2d):\n n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n m.weight.data.normal_(0, (2. / n)**.5)\n elif isinstance(m, nn.BatchNorm2d):\n m.weight.data.fill_(1)\n m.bias.data.zero_()\n\n def _make_layer(self, block, planes, blocks, stride=1):\n downsample = None\n if stride != 1 or self.inplanes != planes * block.expansion:\n downsample = nn.Sequential(\n nn.Conv2d(self.inplanes, planes * block.expansion,\n kernel_size=1, stride=stride, bias=False),\n nn.BatchNorm2d(planes * block.expansion),\n )\n\n layers = []\n layers.append(block(self.inplanes, planes, stride, downsample))\n self.inplanes = planes * block.expansion\n for i in range(1, blocks):\n layers.append(block(self.inplanes, planes))\n\n return nn.Sequential(*layers)\n\n def forward(self, x):\n x = self.conv1(x)\n x = self.bn1(x)\n x = self.relu(x)\n x = self.maxpool(x)\n\n x = self.layer1(x)\n x = self.layer2(x)\n x = self.layer3(x)\n x = self.layer4(x)\n\n x = self.avgpool(x)\n x = x.view(x.size(0), -1)\n logits = self.fc(x)\n probas = F.softmax(logits, dim=1)\n return logits, probas\n\n\n\ndef resnet50(num_classes):\n \"\"\"Constructs a ResNet-34 model.\"\"\"\n model = ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes)\n return model\n",
"_____no_output_____"
],
[
"torch.manual_seed(random_seed)\n\n##########################\n### COST AND OPTIMIZER\n##########################\n\n\n#### DATA PARALLEL START ####\n\nmodel = resnet50(num_classes)\nif torch.cuda.device_count() > 1:\n print(\"Using\", torch.cuda.device_count(), \"GPUs\")\n model = nn.DataParallel(model)\n\n#### DATA PARALLEL END ####\n \nmodel.to(device)\n\n#### DATA PARALLEL START ####\n\n\ncost_fn = torch.nn.CrossEntropyLoss() \noptimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) ",
"Using 4 GPUs\n"
]
],
[
[
"## Training",
"_____no_output_____"
]
],
[
[
"def compute_accuracy(model, data_loader):\n correct_pred, num_examples = 0, 0\n for i, (features, targets) in enumerate(data_loader):\n \n features = features.to(device)\n targets = targets.to(device)\n\n logits, probas = model(features)\n _, predicted_labels = torch.max(probas, 1)\n num_examples += targets.size(0)\n correct_pred += (predicted_labels == targets).sum()\n return correct_pred.float()/num_examples * 100\n \n\nstart_time = time.time()\nfor epoch in range(num_epochs):\n \n model.train()\n for batch_idx, (features, targets) in enumerate(train_loader):\n \n features = features.to(device)\n targets = targets.to(device)\n \n ### FORWARD AND BACK PROP\n logits, probas = model(features)\n cost = cost_fn(logits, targets)\n optimizer.zero_grad()\n \n cost.backward()\n \n ### UPDATE MODEL PARAMETERS\n optimizer.step()\n \n ### LOGGING\n if not batch_idx % 50:\n print ('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f' \n %(epoch+1, num_epochs, batch_idx, \n len(train_dataset)//BATCH_SIZE, cost))\n\n \n\n model.eval()\n with torch.set_grad_enabled(False): # save memory during inference\n print('Epoch: %03d/%03d | Train: %.3f%% | Valid: %.3f%%' % (\n epoch+1, num_epochs, \n compute_accuracy(model, train_loader),\n compute_accuracy(model, valid_loader)))\n \n print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))\n \nprint('Total Training Time: %.2f min' % ((time.time() - start_time)/60))",
"Epoch: 001/005 | Batch 0000/0158 | Cost: 0.7133\nEpoch: 001/005 | Batch 0050/0158 | Cost: 0.1586\nEpoch: 001/005 | Batch 0100/0158 | Cost: 0.1041\nEpoch: 001/005 | Batch 0150/0158 | Cost: 0.1345\nEpoch: 001/005 | Train: 93.080% | Valid: 94.050%\nTime elapsed: 2.74 min\nEpoch: 002/005 | Batch 0000/0158 | Cost: 0.1176\nEpoch: 002/005 | Batch 0050/0158 | Cost: 0.0857\nEpoch: 002/005 | Batch 0100/0158 | Cost: 0.0789\nEpoch: 002/005 | Batch 0150/0158 | Cost: 0.0594\nEpoch: 002/005 | Train: 97.245% | Valid: 97.086%\nTime elapsed: 5.43 min\nEpoch: 003/005 | Batch 0000/0158 | Cost: 0.0635\nEpoch: 003/005 | Batch 0050/0158 | Cost: 0.0747\nEpoch: 003/005 | Batch 0100/0158 | Cost: 0.0778\nEpoch: 003/005 | Batch 0150/0158 | Cost: 0.0583\nEpoch: 003/005 | Train: 96.920% | Valid: 96.824%\nTime elapsed: 8.12 min\nEpoch: 004/005 | Batch 0000/0158 | Cost: 0.0578\nEpoch: 004/005 | Batch 0050/0158 | Cost: 0.0701\nEpoch: 004/005 | Batch 0100/0158 | Cost: 0.0721\nEpoch: 004/005 | Batch 0150/0158 | Cost: 0.0504\nEpoch: 004/005 | Train: 96.846% | Valid: 96.477%\nTime elapsed: 10.81 min\nEpoch: 005/005 | Batch 0000/0158 | Cost: 0.0448\nEpoch: 005/005 | Batch 0050/0158 | Cost: 0.0456\nEpoch: 005/005 | Batch 0100/0158 | Cost: 0.0584\nEpoch: 005/005 | Batch 0150/0158 | Cost: 0.0396\nEpoch: 005/005 | Train: 97.287% | Valid: 96.804%\nTime elapsed: 13.50 min\nTotal Training Time: 13.50 min\n"
]
],
[
[
"## Evaluation",
"_____no_output_____"
]
],
[
[
"with torch.set_grad_enabled(False): # save memory during inference\n print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))",
"Test accuracy: 95.87%\n"
],
[
"for batch_idx, (features, targets) in enumerate(test_loader):\n\n features = features\n targets = targets\n break\n \nplt.imshow(np.transpose(features[0], (1, 2, 0)))",
"_____no_output_____"
],
[
"model.eval()\nlogits, probas = model(features.to(device)[0, None])\nprint('Probability Female %.2f%%' % (probas[0][0]*100))",
"Probability Female 99.19%\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0566e028f186d589f2f02a046ec8345d870cc86 | 13,231 | ipynb | Jupyter Notebook | spark/SparkBasic/DataFrames_Basic_Operations.ipynb | AlphaSunny/RecSys | 6e14a910ea810e2ec6501ee7a9a0ac9205e2232e | [
"MIT"
] | null | null | null | spark/SparkBasic/DataFrames_Basic_Operations.ipynb | AlphaSunny/RecSys | 6e14a910ea810e2ec6501ee7a9a0ac9205e2232e | [
"MIT"
] | null | null | null | spark/SparkBasic/DataFrames_Basic_Operations.ipynb | AlphaSunny/RecSys | 6e14a910ea810e2ec6501ee7a9a0ac9205e2232e | [
"MIT"
] | null | null | null | 36.958101 | 138 | 0.44751 | [
[
[
"from pyspark.sql import SparkSession ",
"_____no_output_____"
],
[
"spark = SparkSession.builder.appName('basic').getOrCreate()",
"_____no_output_____"
],
[
"df = spark.read.csv('appl_stock.csv', inferSchema=True, header=True)",
"_____no_output_____"
],
[
"df.printSchema()",
"root\n |-- Date: timestamp (nullable = true)\n |-- Open: double (nullable = true)\n |-- High: double (nullable = true)\n |-- Low: double (nullable = true)\n |-- Close: double (nullable = true)\n |-- Volume: integer (nullable = true)\n |-- Adj Close: double (nullable = true)\n\n"
]
],
[
[
"## Filtering data",
"_____no_output_____"
]
],
[
[
"df.filter('Close>500').show() ",
"+--------------------+------------------+------------------+------------------+------------------+---------+-----------------+\n| Date| Open| High| Low| Close| Volume| Adj Close|\n+--------------------+------------------+------------------+------------------+------------------+---------+-----------------+\n|2012-02-13 00:00:...| 499.529991|503.83000899999996|497.08998899999995|502.60002099999997|129304000| 65.116633|\n|2012-02-14 00:00:...| 504.659988| 509.56002| 502.000008| 509.459991|115099600| 66.005408|\n|2012-02-16 00:00:...| 491.500008| 504.890007| 486.62999|502.20999900000004|236138000| 65.066102|\n|2012-02-17 00:00:...| 503.109993|507.77002000000005| 500.299995| 502.12001|133951300| 65.054443|\n|2012-02-21 00:00:...|506.88001299999996| 514.850021|504.12000300000005| 514.850021|151398800| 66.703738|\n|2012-02-22 00:00:...| 513.079994| 515.489983|509.07002300000005| 513.039993|120825600|66.46923100000001|\n|2012-02-23 00:00:...| 515.079987| 517.830009| 509.499992| 516.3899769999999|142006900| 66.903253|\n|2012-02-24 00:00:...| 519.6699980000001| 522.899979| 518.6400150000001| 522.4099809999999|103768000| 67.683203|\n|2012-02-27 00:00:...| 521.309982| 528.5| 516.2800139999999| 525.760017|136895500| 68.117232|\n|2012-02-28 00:00:...| 527.960014| 535.410011| 525.850006| 535.410011|150096800|69.36748100000001|\n|2012-02-29 00:00:...| 541.5600049999999| 547.6100230000001| 535.700005| 542.440025|238002800| 70.278286|\n|2012-03-01 00:00:...| 548.169983| 548.209984| 538.7699809999999| 544.4699780000001|170817500| 70.541286|\n|2012-03-02 00:00:...| 544.240013| 546.800018| 542.519974| 545.180008|107928100| 70.633277|\n|2012-03-05 00:00:...| 545.420013| 547.47998| 526.000023| 533.1600269999999|202281100| 69.075974|\n|2012-03-06 00:00:...| 523.659996| 533.690025| 516.2199860000001| 530.259987|202559700|68.70024599999999|\n|2012-03-07 00:00:...| 536.8000030000001| 537.779999| 523.299988| 530.6900099999999|199630200|68.75595899999999|\n|2012-03-08 00:00:...| 534.6899950000001| 542.989998| 532.120003| 541.989975|129114300| 70.219978|\n|2012-03-09 00:00:...| 544.209999| 547.740013| 543.110001| 545.170021|104729800|70.63198299999999|\n|2012-03-12 00:00:...| 548.9799879999999| 551.999977| 547.000023| 551.999977|101820600| 71.516869|\n|2012-03-13 00:00:...| 557.540024| 568.18| 555.750023| 568.099998|172713800|73.60278100000001|\n+--------------------+------------------+------------------+------------------+------------------+---------+-----------------+\nonly showing top 20 rows\n\n"
],
[
"df.filter('Close>500').select('Open').show()",
"+------------------+\n| Open|\n+------------------+\n| 499.529991|\n| 504.659988|\n| 491.500008|\n| 503.109993|\n|506.88001299999996|\n| 513.079994|\n| 515.079987|\n| 519.6699980000001|\n| 521.309982|\n| 527.960014|\n| 541.5600049999999|\n| 548.169983|\n| 544.240013|\n| 545.420013|\n| 523.659996|\n| 536.8000030000001|\n| 534.6899950000001|\n| 544.209999|\n| 548.9799879999999|\n| 557.540024|\n+------------------+\nonly showing top 20 rows\n\n"
],
[
"df.filter((df[\"Close\"]> 500) &(df[\"Open\"]< 495)).show()",
"+--------------------+------------------+------------------+------------------+------------------+---------+---------+\n| Date| Open| High| Low| Close| Volume|Adj Close|\n+--------------------+------------------+------------------+------------------+------------------+---------+---------+\n|2012-02-16 00:00:...| 491.500008| 504.890007| 486.62999|502.20999900000004|236138000|65.066102|\n|2013-01-16 00:00:...|494.63999900000005|509.44001799999995|492.49997699999994|506.08998099999997|172701200|66.151072|\n+--------------------+------------------+------------------+------------------+------------------+---------+---------+\n\n"
],
[
"df.filter((df[\"Close\"]>200) | (df[\"Open\"]< 200)).show()",
"+--------------------+------------------+------------------+------------------+------------------+---------+------------------+\n| Date| Open| High| Low| Close| Volume| Adj Close|\n+--------------------+------------------+------------------+------------------+------------------+---------+------------------+\n|2010-01-04 00:00:...| 213.429998| 214.499996|212.38000099999996| 214.009998|123432400| 27.727039|\n|2010-01-05 00:00:...| 214.599998| 215.589994| 213.249994| 214.379993|150476200|27.774976000000002|\n|2010-01-06 00:00:...| 214.379993| 215.23| 210.750004| 210.969995|138040000|27.333178000000004|\n|2010-01-07 00:00:...| 211.75| 212.000006| 209.050005| 210.58|119282800| 27.28265|\n|2010-01-08 00:00:...| 210.299994| 212.000006|209.06000500000002|211.98000499999998|111902700| 27.464034|\n|2010-01-11 00:00:...|212.79999700000002| 213.000002| 208.450005|210.11000299999998|115557400| 27.221758|\n|2010-01-12 00:00:...|209.18999499999998|209.76999500000002| 206.419998| 207.720001|148614900| 26.91211|\n|2010-01-13 00:00:...| 207.870005|210.92999500000002| 204.099998| 210.650002|151473000| 27.29172|\n|2010-01-14 00:00:...|210.11000299999998|210.45999700000002| 209.020004| 209.43|108223500| 27.133657|\n|2010-01-15 00:00:...|210.92999500000002|211.59999700000003| 205.869999| 205.93|148516900|26.680197999999997|\n|2010-01-19 00:00:...| 208.330002|215.18999900000003| 207.240004| 215.039995|182501900|27.860484999999997|\n|2010-01-20 00:00:...| 214.910006| 215.549994| 209.500002| 211.73|153038200| 27.431644|\n|2010-01-21 00:00:...| 212.079994|213.30999599999998| 207.210003| 208.069996|152038600| 26.957455|\n|2010-01-25 00:00:...|202.51000200000001| 204.699999| 200.190002| 203.070002|266424900|26.309658000000002|\n|2010-01-26 00:00:...|205.95000100000001| 213.710005| 202.580004| 205.940001|466777500| 26.681494|\n|2010-01-27 00:00:...| 206.849995| 210.58| 199.530001| 207.880005|430642100|26.932840000000002|\n|2010-02-01 00:00:...|192.36999699999998| 196.0|191.29999899999999| 194.729998|187469100| 25.229131|\n|2010-02-02 00:00:...| 195.909998| 196.319994|193.37999299999998| 195.859997|174585600|25.375532999999997|\n|2010-02-03 00:00:...| 195.169994| 200.200003| 194.420004| 199.229994|153832000|25.812148999999998|\n|2010-02-04 00:00:...| 196.730003| 198.370001| 191.570005| 192.050003|189413000| 24.881912|\n+--------------------+------------------+------------------+------------------+------------------+---------+------------------+\nonly showing top 20 rows\n\n"
],
[
"result = df.filter(df[\"open\"] == 208.330002 ).collect()",
"_____no_output_____"
],
[
"type(result[0])",
"_____no_output_____"
],
[
"row = result[0]",
"_____no_output_____"
],
[
"row.asDict()",
"_____no_output_____"
],
[
"for item in result[0]:\n print(item)",
"2010-01-19 00:00:00\n208.330002\n215.18999900000003\n207.240004\n215.039995\n182501900\n27.860484999999997\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0568686abca261947a5c78ce208e8db61ba1845 | 8,445 | ipynb | Jupyter Notebook | doc/notebooks/MatrixProduct.ipynb | rrjudd/jvsip | 56a965fff595b027139ff151d27d434f2480b9e8 | [
"MIT"
] | 10 | 2016-01-16T04:10:13.000Z | 2022-03-22T02:17:44.000Z | doc/notebooks/MatrixProduct.ipynb | rrjudd/jvsip | 56a965fff595b027139ff151d27d434f2480b9e8 | [
"MIT"
] | 1 | 2015-09-11T04:48:03.000Z | 2015-09-11T13:44:29.000Z | doc/notebooks/MatrixProduct.ipynb | rrjudd/jvsip | 56a965fff595b027139ff151d27d434f2480b9e8 | [
"MIT"
] | 4 | 2017-06-13T21:48:23.000Z | 2020-08-26T15:07:44.000Z | 22.34127 | 93 | 0.46939 | [
[
[
"### Examples of matrix products",
"_____no_output_____"
]
],
[
[
"import pyJvsip as pjv",
"_____no_output_____"
]
],
[
[
"#### Example of matrix product prod",
"_____no_output_____"
]
],
[
[
"inA=pjv.create('mview_d',2,5).randn(5)\ninB=pjv.create('mview_d',5,5).identity\noutC=inA.prod(inB)\noutC.mprint('%.3f')\nprint(\"Frobenius of difference %.2f\"%(inA-outC).normFro)",
"[ 0.508 0.535 0.699 -0.960 0.231;\n 0.040 -0.477 0.208 0.506 -0.383]\n\nFrobenius of difference 0.00\n"
],
[
"inB=pjv.create('mview_d',2,2).identity\noutC=inB.prod(inA)\noutC.mprint('%.3f')\nprint(\"Frobenius of difference %.2f\"%(inA-outC).normFro)",
"[ 0.508 0.535 0.699 -0.960 0.231;\n 0.040 -0.477 0.208 0.506 -0.383]\n\nFrobenius of difference 0.00\n"
]
],
[
[
"#### Example of prodj\n\nConjugate matrix product",
"_____no_output_____"
]
],
[
[
"inA=pjv.create('cmview_f',3,4).randn(3)\ninB=pjv.create('cmview_f',4,2).randn(4)\noutC=inA.prodj(inB)\nprint('C=A.prodj(B)');\nprint('A');inA.mprint('%.3f')\nprint('B');inB.mprint('%.3f')\nprint('C');outC.mprint('%.3f')",
"C=A.prodj(B)\nA\n[ 1.071+1.203i 0.995-0.388i -0.396-1.182i -0.124+0.165i;\n -0.494+0.573i -0.837-0.167i 0.973-1.060i -1.526+0.190i;\n 0.254+1.634i 0.418+0.966i 0.351+2.008i -0.443+0.731i]\n\nB\n[-0.156-0.472i -1.057-0.728i;\n 0.813+0.340i 0.695+0.736i;\n 1.153-1.463i 0.531-1.387i;\n -0.372+0.749i 0.616-0.228i]\n\nC\n[ 1.384-2.247i -0.287-2.598i;\n 2.453+1.101i 0.404+0.091i;\n -1.963+3.397i -3.496+0.722i]\n\n"
],
[
"print('test using prod and inB.conj');pjv.prod(inA,(inB.conj),outC).mprint('%.3f')",
"test using prod and inB.conj\n[ 1.384-2.247i -0.287-2.598i;\n 2.453+1.101i 0.404+0.091i;\n -1.963+3.397i -3.496+0.722i]\n\n"
]
],
[
[
"#### Example of prodh\n\nHermitian matrix product",
"_____no_output_____"
]
],
[
[
"inA=pjv.create('cmview_f',3,4).randn(3)\ninB=pjv.create('cmview_f',2,4).randn(4)\noutC=inA.prodh(inB)\nprint('C=A.prodj(B)');\nprint('A');inA.mprint('%.3f')\nprint('B');inB.mprint('%.3f')\nprint('C');outC.mprint('%.3f')",
"C=A.prodj(B)\nA\n[ 1.071+1.203i 0.995-0.388i -0.396-1.182i -0.124+0.165i;\n -0.494+0.573i -0.837-0.167i 0.973-1.060i -1.526+0.190i;\n 0.254+1.634i 0.418+0.966i 0.351+2.008i -0.443+0.731i]\n\nB\n[-0.156-0.472i -1.057-0.728i 0.813+0.340i 0.695+0.736i;\n 1.153-1.463i 0.531-1.387i -0.372+0.749i 0.616-0.228i]\n\nC\n[-2.193+0.832i -0.311+4.938i;\n 0.322-0.693i -3.759-1.877i;\n -0.758+1.496i -2.282+2.687i]\n\n"
],
[
"print('test using prod and inB.herm');pjv.prod(inA,(inB.herm),outC).mprint('%.3f')",
"test using prod and inB.herm\n[-2.193+0.832i -0.311+4.938i;\n 0.322-0.693i -3.759-1.877i;\n -0.758+1.496i -2.282+2.687i]\n\n"
]
],
[
[
"#### Example of prodt\n\nTranspose matrix product.",
"_____no_output_____"
]
],
[
[
"inA=pjv.create('cmview_f',3,4).randn(3)\ninB=pjv.create('cmview_f',2,4).randn(4)\noutC=inA.prodt(inB)\nprint('C=A.prodj(B)');\nprint('A');inA.mprint('%.3f')\nprint('B');inB.mprint('%.3f')\nprint('C');outC.mprint('%.3f')",
"C=A.prodj(B)\nA\n[ 1.071+1.203i 0.995-0.388i -0.396-1.182i -0.124+0.165i;\n -0.494+0.573i -0.837-0.167i 0.973-1.060i -1.526+0.190i;\n 0.254+1.634i 0.418+0.966i 0.351+2.008i -0.443+0.731i]\n\nB\n[-0.156-0.472i -1.057-0.728i 0.813+0.340i 0.695+0.736i;\n 1.153-1.463i 0.531-1.387i -0.372+0.749i 0.616-0.228i]\n\nC\n[-1.060-2.080i 3.979-1.491i;\n 1.062-0.593i -0.872+4.044i;\n -0.251+0.233i 2.505+1.512i]\n\n"
],
[
"print('test using prod and inB.herm');pjv.prod(inA,(inB.transview),outC).mprint('%.3f')",
"test using prod and inB.herm\n[-1.060-2.080i 3.979-1.491i;\n 1.062-0.593i -0.872+4.044i;\n -0.251+0.233i 2.505+1.512i]\n\n"
],
[
"inA=pjv.create('mview_f',3,3).fill(0.0);\ninA.diagview(0).fill(1.0)\ninA.diagview(-1).fill(-1.0)\ninA.diagview(1).fill(-1.0)\ninA.mprint('%.1f')",
"[ 1.0 -1.0 0.0;\n -1.0 1.0 -1.0;\n 0.0 -1.0 1.0]\n\n"
],
[
"inB=pjv.create('mview_f',3,10).randn(14)",
"_____no_output_____"
],
[
"pjv.prod3(inA,inB,inB.empty).mprint('%.3f')",
"[ 0.555 -1.017 -0.534 0.872 -1.029 1.373 -1.534 -2.648 -0.214 2.112;\n -0.796 -0.389 0.591 -0.835 -0.654 0.024 3.188 2.993 0.176 -2.690;\n -0.202 1.608 -0.422 0.629 1.406 -1.663 -2.301 -2.322 -0.565 2.163]\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d056912906924a2071e8bddec56a0b39d12e26f0 | 3,704 | ipynb | Jupyter Notebook | _sources/shorts/master.ipynb | callysto/shorts-book | aeecdd8c475ad382388095261ce45c81c8fac764 | [
"CC0-1.0",
"CC-BY-4.0"
] | null | null | null | _sources/shorts/master.ipynb | callysto/shorts-book | aeecdd8c475ad382388095261ce45c81c8fac764 | [
"CC0-1.0",
"CC-BY-4.0"
] | null | null | null | _sources/shorts/master.ipynb | callysto/shorts-book | aeecdd8c475ad382388095261ce45c81c8fac764 | [
"CC0-1.0",
"CC-BY-4.0"
] | null | null | null | 38.185567 | 373 | 0.645248 | [
[
[
"\n\n<a href=\"https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fshorts&branch=master&subPath=master.ipynb&depth=1\" target=\"_parent\"><img src=\"https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true\" width=\"123\" height=\"24\" alt=\"Open in Callysto\"/></a>",
"_____no_output_____"
],
[
"# Master Index of Notebooks\n\nThe Callysto project is focused on creating Jupyter notebooks for the K-12 curriculum in Canadian schools, and to teach students and teachers how to create their own notebooks.\n\nThis notebook points to a collection of short demos, explaining how to include basic elements in your own Jupyter notebooks. FOr instance, how to include images, music, videos, graphs, even mathematical and geometric tools useful in science and technology classes. \n\nMore complete notebooks with these features are available on teh Callysto website (https://callysto.ca). Here, we show the simplest versions to get you started.",
"_____no_output_____"
],
[
"## Note:\n\nThis repo is specifically designed to run under \"mybinder\" as well as Jupyter Books. This limits somewhat the tools we can demonstrate.",
"_____no_output_____"
],
[
"## Demos that I want to do\n\n- [Including images](Images.ipynb)\n- [Including GIFs](GIFs.ipynb)\n- [Including YouTube videos](YouTube.ipynb)\n- [Drawing figures in HTML, SVG](HTML_Drawing.ipynb)\n- [Plotting in matplotlib](Matplotlib.ipynb)\n- [3D Plotting in matplotlib](Matplot3D.ipynb)\n- [Animation in matplotlib](MatplotAnimation.ipynb)\n- [Plotting in Plotly](Plotly.ipynb)\n- [3D Plotting in Plotly](Plotly3D.ipynb)\n- [3D graphics](3D_graphics.ipynb)\n- [Including WebGL animation](WebGL.ipynb)\n- [Animation in D3](D3.ipynb)\n- [Plotting in Pylab](Pylab.ipynb)\n- [Music and Sounds](Sounds.ipynb)\n- [Synthetic sounds](SynthSound.ipynb)\n- [Including Callysto banners](Banners.ipynb)\n- [Using widgets](Widgets.ipynb)\n- [Creating a progress bar](ProgressBar.ipynb)\n- [Including Geogebra apps](Geogebra.ipynb)\n- [Creating a slideshow](Slideshow.ipynb)\n- [A fancy slideshow](Slideshow2Callysto.ipynb)\n- [Hiding code](Hiding.ipynb)\n- [Saving your work on GitHub](Github.ipynb)\n- [Using namespaces](Namespace.ipynb)\n- [Importing Data](ImportingData.ipynb)\n",
"_____no_output_____"
],
[
"[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0569728d51aec6990a6cd8c284303f13ffc9b6f | 3,887 | ipynb | Jupyter Notebook | DataStructures/PriorityQueue.ipynb | varian97/ComputerScience-Notebook | 2fe4397e3131678057424dc112fed7fa6447118d | [
"MIT"
] | null | null | null | DataStructures/PriorityQueue.ipynb | varian97/ComputerScience-Notebook | 2fe4397e3131678057424dc112fed7fa6447118d | [
"MIT"
] | null | null | null | DataStructures/PriorityQueue.ipynb | varian97/ComputerScience-Notebook | 2fe4397e3131678057424dc112fed7fa6447118d | [
"MIT"
] | null | null | null | 25.913333 | 77 | 0.452791 | [
[
[
"# Priority Queue Reference Implementation",
"_____no_output_____"
],
[
"### Operations:\nFor the sake of simplicity, all inputs assumed to be valid\n\n\n**enqueue(data, priority)**\n* Insert data to the priority queue\n\n\n**dequeue()**\n* Remove one node from the priority queue with highest priority\n* If queue is empty, return None",
"_____no_output_____"
]
],
[
[
"class Node(object):\n def __init__(self, data, priority, next=None):\n self.data = data\n self.priority = priority\n self.next = next",
"_____no_output_____"
],
[
"class PriorityQueue(object):\n def __init__(self):\n self.head = None\n \n def enqueue(self, data, priority):\n if self.head is None:\n self.head = Node(data, priority)\n return\n if self.head.next is None:\n if self.head.priority < priority:\n self.head = Node(data, priority, self.head)\n else:\n self.head.next = Node(data, priority)\n else:\n p = self.head\n pprev = None\n while p is not None: \n if p.priority < priority:\n if p is self.head:\n temp = Node(data, priority, self.head)\n self.head = temp\n else:\n temp = Node(data, priority, p)\n pprev.next = temp\n return\n pprev = p\n p = p.next\n pprev.next = Node(data, priority)\n \n def dequeue(self):\n if self.head is None:\n return None\n node = self.head\n self.head = self.head.next\n return node",
"_____no_output_____"
],
[
"p = PriorityQueue()\np.enqueue(1, 20)\np.enqueue(2, 30)\np.enqueue(3, 15)\nx = p.head\nwhile x is not None:\n print(x.data, \" with priority of\", x.priority)\n x = x.next\n\nnode = p.dequeue()\nprint(\"Dequeue \", node.data, \" with priority of\", node.priority)\n\nx = p.head\nwhile x is not None:\n print(x.data, \" with priority of\", x.priority)\n x = x.next",
"2 with priority of 30\n1 with priority of 20\n3 with priority of 15\nDequeue 2 with priority of 30\n1 with priority of 20\n3 with priority of 15\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |